Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/88319
DC Field | Value | Language
dc.contributor | Department of Building and Real Estate | -
dc.creator | Kim, J | -
dc.creator | Hwang, J | -
dc.creator | Chi, S | -
dc.creator | Seo, J | -
dc.date.accessioned | 2020-10-29T01:02:24Z | -
dc.date.available | 2020-10-29T01:02:24Z | -
dc.identifier.issn | 0926-5805 | -
dc.identifier.uri | http://hdl.handle.net/10397/88319 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2020 The Author(s). Published by Elsevier BV. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US
dc.rights | The following publication Kim, J., Hwang, J., Chi, S., & Seo, J. (2020). Towards database-free vision-based monitoring on construction sites: A deep active learning approach. Automation in Construction, 120, 103376, is available at https://doi.org/10.1016/j.autcon.2020.103376 | en_US
dc.subject | Active learning | en_US
dc.subject | Construction site | en_US
dc.subject | Database-free | en_US
dc.subject | Deep learning | en_US
dc.subject | Object detection | en_US
dc.subject | Vision-based monitoring | en_US
dc.title | Towards database-free vision-based monitoring on construction sites: a deep active learning approach | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 120 | -
dc.identifier.doi | 10.1016/j.autcon.2020.103376 | -
dcterms.abstract | In order to achieve database-free (DB-free) vision-based monitoring on construction sites, this paper proposes a deep active learning approach that automatically evaluates the uncertainty of unlabeled training data, selects the most meaningful-to-learn instances, and then trains a deep learning model with the selected data. The proposed approach involves three sequential processes: (1) uncertainty evaluation of unlabeled data, (2) training data sampling and user-interactive labeling, and (3) model design and training. Two experiments were performed to validate the proposed method and confirm the positive effects of active learning: one with active learning and the other without it (i.e., with random learning). In the experiments, the research team used a total of 17,000 images collected from actual construction sites. To achieve 80% mean Average Precision (mAP) for construction object detection, the random learning method required 720 training images, whereas only 180 images were sufficient with active learning. Moreover, active learning produced a deep learning model with an mAP of 93.0%, while the random learning approach was limited to 89.1%. These results demonstrate the potential of the proposed method and highlight the considerable positive impact of uncertainty-based data sampling on model performance. This research can improve the practicality of vision-based monitoring on construction sites, and its findings can provide valuable insights and new research directions for construction researchers. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Automation in construction, 2020, v. 120, 103376 | -
dcterms.isPartOf | Automation in construction | -
dcterms.issued | 2020 | -
dc.identifier.scopus | 2-s2.0-85089844094 | -
dc.identifier.eissn | 1872-7891 | -
dc.identifier.artn | 103376 | -
dc.description.validate | 202010 bcma | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
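The dcterms.abstract entry above describes an uncertainty-driven active learning loop: score the unlabeled images by how uncertain the current detector is about them, have a user label only the highest-scoring ones, and retrain. The snippet below is a minimal sketch of that general sampling idea, not the authors' implementation; the entropy-based uncertainty score, the `predict_fn` detector interface, and the `annotate` labeling step are all illustrative assumptions.

```python
import numpy as np

def detection_entropy(class_probs: np.ndarray) -> float:
    # Mean entropy of the per-detection class distributions; higher means the
    # detector is less certain about the objects it found in this image.
    eps = 1e-12
    if class_probs.size == 0:
        return 0.0
    entropy = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    return float(entropy.mean())

def select_most_uncertain(unlabeled_pool, predict_fn, budget):
    # Score every unlabeled image, then return the `budget` most uncertain
    # image ids for user-interactive labeling.
    scored = []
    for image_id, image in unlabeled_pool.items():
        class_probs = predict_fn(image)  # assumed shape: (num_detections, num_classes)
        scored.append((detection_entropy(np.asarray(class_probs)), image_id))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [image_id for _, image_id in scored[:budget]]

# Hypothetical use in one active-learning round:
#   selected = select_most_uncertain(pool, model.predict, budget=60)
#   labeled.update(annotate(selected))   # user-interactive labeling step
#   model.train(labeled)                 # retrain the detector on the enlarged set
```

Under these assumptions, repeating such rounds with a small per-round budget is how a detector could reach a target mAP with far fewer labeled images than random sampling, which is the effect the abstract reports (180 vs. 720 images for 80% mAP).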
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Kim_Towards_database-free_vision-based.pdf | - | 3.3 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.