Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/88081
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Huang, C | -
dc.creator | Wang, XC | -
dc.creator | Cao, JN | -
dc.creator | Wang, SH | -
dc.creator | Zhang, Y | -
dc.date.accessioned | 2020-09-18T02:12:34Z | -
dc.date.available | 2020-09-18T02:12:34Z | -
dc.identifier.uri | http://hdl.handle.net/10397/88081 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication Huang, C., Wang, X. C., Cao, J. N., Wang, S. H., & Zhang, Y. (2020). HCF: A hybrid CNN framework for behavior detection of distracted drivers. IEEE Access, 8, 109335-109349 is available at https://dx.doi.org/10.1109/ACCESS.2020.3001159 | en_US
dc.subject | Feature extraction | en_US
dc.subject | Vehicles | en_US
dc.subject | Accidents | en_US
dc.subject | Training | en_US
dc.subject | Safety | en_US
dc.subject | Cameras | en_US
dc.subject | Roads | en_US
dc.subject | Distracted drivers | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Transfer learning | en_US
dc.subject | Fusion model | en_US
dc.title | HCF: A hybrid CNN framework for behavior detection of distracted drivers | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 109335 | -
dc.identifier.epage | 109349 | -
dc.identifier.volume | 8 | -
dc.identifier.doi | 10.1109/ACCESS.2020.3001159 | -
dcterms.abstract | Distracted driving causes a large number of traffic accident fatalities and is becoming an increasingly important issue in recent research on traffic safety. Gesture patterns are less distinguishable in vehicles because of in-vehicle physical constraints and body occlusion by the drivers. However, by capitalizing on modern camera technology, convolutional neural networks (CNNs) can be used for visual analysis. In this paper, we present a hybrid CNN framework (HCF) that detects the behaviors of distracted drivers by using deep learning to process image features. To improve the accuracy of the driving activity detection system, we first apply a cooperative pretrained model that combines ResNet50, Inception V3 and Xception to extract driver behavior features based on transfer learning. Second, because the features extracted by the pretrained models are independent, we concatenate them to obtain more comprehensive information. Finally, we train the fully connected layers of the HCF to filter out anomalies and hand movements associated with non-distracted driving. We apply an improved dropout algorithm to prevent the proposed HCF from overfitting to the training data. During the evaluation, we apply the class activation mapping (CAM) technique to highlight the feature areas involved in the ten tested classes of typical distracted driving behaviors. The experimental results show that the proposed HCF achieves a classification accuracy of 96.74% when detecting distracted driving behaviors, demonstrating that it can potentially help drivers maintain safe driving habits. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE access, 2020, v. 8, p. 109335-109349 | -
dcterms.isPartOf | IEEE access | -
dcterms.issued | 2020 | -
dc.identifier.isi | WOS:000549854800003 | -
dc.identifier.eissn | 2169-3536 | -
dc.description.validate | 202009 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
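The abstract describes a feature-fusion architecture: three ImageNet-pretrained backbones (ResNet50, Inception V3, Xception) used as transfer-learning feature extractors, their features concatenated and classified by fully connected layers with dropout over ten behavior classes. The snippet below is a minimal sketch of that idea using the standard Keras applications API; it is not the authors' released code, and the input resolution, layer sizes, and dropout rate are illustrative assumptions rather than values taken from the paper (the paper's "improved dropout" and CAM visualization are not reproduced here).

```python
# Minimal sketch of the fused-backbone idea described in the abstract.
# Input resolution, head sizes, and dropout rate are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, InceptionV3, Xception

NUM_CLASSES = 10  # ten distracted-driving behavior classes

inputs = layers.Input(shape=(299, 299, 3))

# Three ImageNet-pretrained backbones used as frozen feature extractors
# (transfer learning); global average pooling yields one vector per backbone.
backbones = [
    ResNet50(include_top=False, weights="imagenet", pooling="avg"),
    InceptionV3(include_top=False, weights="imagenet", pooling="avg"),
    Xception(include_top=False, weights="imagenet", pooling="avg"),
]
features = []
for backbone in backbones:
    backbone.trainable = False          # only the new classification head is trained
    features.append(backbone(inputs))

# Concatenate the independent feature vectors into one fused representation,
# then classify with fully connected layers regularized by dropout.
fused = layers.Concatenate()(features)
x = layers.Dense(512, activation="relu")(fused)
x = layers.Dropout(0.5)(x)              # stand-in for the paper's improved dropout
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Freezing the backbones and training only the concatenated head mirrors the transfer-learning setup the abstract outlines; in practice each backbone would also apply its own `preprocess_input` normalization before inference.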
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Huang_HCF_CNN_Drivers.pdf | | 2.78 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Page views: 91 (6 in the last week) as of May 19, 2024
Downloads: 89 as of May 19, 2024
Scopus citations: 64 as of May 17, 2024
Web of Science citations: 45 as of May 16, 2024
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.