Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90791
DC Field | Value | Language
dc.contributor | Department of Health Technology and Informatics | -
dc.creator | Wan, KW | -
dc.creator | Wong, CH | -
dc.creator | Ip, HF | -
dc.creator | Fan, D | -
dc.creator | Yuen, PL | -
dc.creator | Fong, HY | -
dc.creator | Ying, M | -
dc.date.accessioned | 2021-09-03T02:33:59Z | -
dc.date.available | 2021-09-03T02:33:59Z | -
dc.identifier.issn | 2223-4292 | -
dc.identifier.uri | http://hdl.handle.net/10397/90791 | -
dc.language.iso | en | en_US
dc.publisher | AME Publishing Company | en_US
dc.rights | © Quantitative Imaging in Medicine and Surgery. This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Wan KW, Wong CH, Ip HF, Fan D, Yuen PL, Fong HY, Ying M. Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study. Quant Imaging Med Surg 2021;11(4):1381-1393 is available at https://doi.org/10.21037/qims-20-922 | en_US
dc.subject | AutoML Vision | en_US
dc.subject | Breast cancer | en_US
dc.subject | Computer-aided diagnosis (CAD) | en_US
dc.subject | Machine learning | en_US
dc.subject | Ultrasonography | en_US
dc.title | Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1381 | -
dc.identifier.epage | 1393 | -
dc.identifier.volume | 11 | -
dc.identifier.issue | 4 | -
dc.identifier.doi | 10.21037/qims-20-922 | -
dcterms.abstract | Background: In recent years, artificial intelligence has become increasingly popular in the medical field, from computer-aided diagnosis (CAD) to patient prognosis prediction. Given that not all healthcare professionals have the expertise required to develop a CAD system, the aim of this study was to investigate the feasibility of using AutoML Vision, a highly automated machine learning model, for future clinical applications by comparing AutoML Vision with commonly used CAD algorithms in the differentiation of benign and malignant breast lesions on ultrasound. | -
dcterms.abstract | Methods: A total of 895 breast ultrasound images were obtained from two online open-access ultrasound breast image datasets. Traditional machine learning models (comprising seven commonly used CAD algorithms) trained on three extracted content-based radiomic features (Hu Moments, Color Histogram, Haralick Texture), together with a convolutional neural network (CNN) model, were built in Python; AutoML Vision was trained on Google Cloud Platform. Sensitivity, specificity, F1 score and average precision (AUCPR) were used to evaluate the diagnostic performance of the models. Cochran's Q test was used to evaluate the statistical significance of differences between all studied models, and the McNemar test was used as the post-hoc test for pairwise comparisons. (Minimal Python sketches of a comparable feature-extraction pipeline and of these statistical tests are appended after the metadata fields below.) The proposed AutoML model was also compared with related studies that used similar medical imaging modalities to characterize benign and malignant breast lesions. | -
dcterms.abstract | Results: There was a significant difference in diagnostic performance among the studied traditional machine learning classifiers (P<0.05). Random Forest achieved the best performance in the differentiation of benign and malignant breast lesions (accuracy: 90%; sensitivity: 71%; specificity: 100%; F1 score: 0.83; AUCPR: 0.90), which was statistically comparable to the performance of the CNN (accuracy: 91%; sensitivity: 82%; specificity: 96%; F1 score: 0.87; AUCPR: 0.88) and AutoML Vision (accuracy: 86%; sensitivity: 84%; specificity: 88%; F1 score: 0.83; AUCPR: 0.95) based on Cochran's Q test (P>0.05). | -
dcterms.abstract | Conclusions: In this study, the performance of AutoML Vision was not significantly different from that of Random Forest (the best classifier among the traditional machine learning models) or the CNN. AutoML Vision achieved relatively high accuracy, comparable to that of commonly used classifiers, which may support its future application in clinical practice. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Quantitative imaging in medicine and surgery, Apr. 2021, v. 11, no. 4, p. 1381-1393 | -
dcterms.isPartOf | Quantitative imaging in medicine and surgery | -
dcterms.issued | 2021-04 | -
dc.identifier.scopus | 2-s2.0-85101005080 | -
dc.identifier.eissn | 2223-4306 | -
dc.description.validate | 202109 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
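
As a point of reference for the Methods described in the abstract above, the following is a minimal Python sketch of how one of the traditional pipelines might look: extracting the three named radiomic features (Hu Moments, Color Histogram, Haralick Texture) with OpenCV and mahotas, then training a Random Forest with scikit-learn and reporting the metrics used in the abstract. This is an illustrative reconstruction under assumed settings (histogram bins, train/test split, classifier parameters), not the authors' published code; the function names and the `image_paths`/`labels` inputs are hypothetical.

```python
import cv2
import mahotas
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split


def extract_features(image_path):
    """Hu Moments + HSV colour histogram + Haralick texture for one image.

    Bin counts and colour space are illustrative assumptions, not the paper's settings.
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()            # 7 shape moments
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()                 # 512-bin colour histogram
    haralick = mahotas.features.haralick(gray).mean(axis=0)    # 13 texture statistics
    return np.hstack([hu, hist, haralick])


def train_and_evaluate(image_paths, labels):
    """Train a Random Forest on the hand-crafted features and report
    sensitivity, specificity, F1 score and average precision (AUCPR)."""
    X = np.array([extract_features(p) for p in image_paths])
    y = np.asarray(labels)  # 0 = benign, 1 = malignant
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_tr, y_tr)

    pred = clf.predict(X_te)
    scores = clf.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "F1": f1_score(y_te, pred),
        "AUCPR": average_precision_score(y_te, scores),
    }
```

Any of the other six traditional classifiers mentioned in the abstract could be swapped in for `RandomForestClassifier` without changing the feature-extraction step.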
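
The abstract also describes comparing the models with Cochran's Q test followed by post-hoc McNemar tests. Below is a minimal sketch of that comparison using statsmodels; the helper name, the dict-of-predictions input format, and the 0.05 threshold (inferred from the reported P<0.05 / P>0.05 values) are assumptions for illustration, not the authors' code.

```python
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar


def compare_models(y_true, predictions, alpha=0.05):
    """Cochran's Q across all classifiers, then pairwise McNemar post-hoc tests.

    y_true      : (n_samples,) array of ground-truth labels.
    predictions : dict mapping model name -> (n_samples,) array of predicted labels.
    """
    names = list(predictions)
    # Binary correct/incorrect indicator, one column per model.
    correct = np.column_stack(
        [np.asarray(predictions[n]) == np.asarray(y_true) for n in names]
    ).astype(int)

    q = cochrans_q(correct)
    print(f"Cochran's Q = {q.statistic:.3f}, p = {q.pvalue:.4f}")

    if q.pvalue < alpha:
        # Post-hoc: exact McNemar test on each pair's 2x2 agreement table.
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                a, b = correct[:, i], correct[:, j]
                table = [
                    [np.sum((a == 1) & (b == 1)), np.sum((a == 1) & (b == 0))],
                    [np.sum((a == 0) & (b == 1)), np.sum((a == 0) & (b == 0))],
                ]
                res = mcnemar(table, exact=True)
                print(f"{names[i]} vs {names[j]}: McNemar p = {res.pvalue:.4f}")
```

Passing the per-image predictions of Random Forest, the CNN, and AutoML Vision on the same test set would reproduce the kind of comparison summarized in the Results paragraph.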
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Size | Format
59009-PB3-5421-R2.pdf | 1.34 MB | Adobe PDF
Open Access Information
Status | open access
File Version | Version of Record

Page views: 54 (last week: 0; as of Jun 30, 2024)
Downloads: 14 (as of Jun 30, 2024)
Scopus citations: 43 (as of Jul 4, 2024)
Web of Science citations: 32 (as of Jun 20, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.