Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/108503
DC Field | Value | Language
dc.contributor | Department of Building and Real Estate | -
dc.creator | Khanmohammadi, S | -
dc.creator | Golafshani, E | -
dc.creator | Bai, Y | -
dc.creator | Li, H | -
dc.creator | Bazli, M | -
dc.creator | Arashpour, M | -
dc.date.accessioned | 2024-08-19T01:58:48Z | -
dc.date.available | 2024-08-19T01:58:48Z | -
dc.identifier.uri | http://hdl.handle.net/10397/108503 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier BV | en_US
dc.rights | © 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Khanmohammadi, S., Golafshani, E., Bai, Y., Li, H., Bazli, M., & Arashpour, M. (2023). Multi-modal mining of crowd-sourced data: Efficient provision of humanitarian aid to remote regions affected by natural disasters. International Journal of Disaster Risk Reduction, 96, 103972 is available at https://doi.org/10.1016/j.ijdrr.2023.103972. | en_US
dc.subject | Artificial intelligence (AI) | en_US
dc.subject | Data mining | en_US
dc.subject | Deep neural networks (DNN) | en_US
dc.subject | Generative adversarial networks (GAN) | en_US
dc.subject | Humanitarian aid | en_US
dc.subject | SHapley additive exPlanations (SHAP) | en_US
dc.title | Multi-modal mining of crowd-sourced data : efficient provision of humanitarian aid to remote regions affected by natural disasters | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 96 | -
dc.identifier.doi | 10.1016/j.ijdrr.2023.103972 | -
dcterms.abstract | Data mining applications have the potential to address current deficiencies in the provision of humanitarian aid after natural disasters. Joint analysis of the text and images in crowd-sourced data can improve the quality of humanitarian aid information, and this study develops a multi-modal framework that mines both modalities. Specifically, we select Bidirectional Encoder Representations from Transformers (BERT) and its lighter descendant ALBERT as pre-trained deep networks for the text modality, while we choose ConvNeXt, RegNet, and Faster R-CNN for the image modality. The developed framework demonstrates its application in classifying humanitarian aid information in three key respects. Firstly, it illustrates the effective performance of ConvNeXt and BERT in the classification of humanitarian aid information. Secondly, it investigates the efficiency of generative adversarial networks (GANs) in generating synthetic images for imbalanced input datasets, an approach that improves the accuracy, precision, recall, and F1-score of the framework on unseen test data. Finally, the study highlights the potential use of SHapley Additive exPlanations (SHAP) for interpreting the behaviour of the developed framework, supporting the timely classification of humanitarian aid information from crowd-sourced data after natural disasters. (Illustrative code sketches of such a pipeline follow the metadata listing below.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | International journal of disaster risk reduction, 1 Oct. 2023, v. 96, 103972 | -
dcterms.isPartOf | International journal of disaster risk reduction | -
dcterms.issued | 2023-10-01 | -
dc.identifier.scopus | 2-s2.0-85170026500 | -
dc.identifier.eissn | 2212-4209 | -
dc.identifier.artn | 103972 | -
dc.description.validate | 202408 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Australian Research Council (ARC) | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
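The abstract above describes the pipeline only in prose; this record contains no code. The following Python sketch shows one plausible reading of the text-plus-image classifier it describes, assuming PyTorch, Hugging Face transformers, and torchvision. The class name MultiModalClassifier, the late fusion by concatenation, the 256-unit hidden layer, and the four-way label set are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, BertModel
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights


class MultiModalClassifier(nn.Module):
    """Hypothetical late-fusion classifier: BERT for text, ConvNeXt for images."""

    def __init__(self, num_classes: int = 4):  # four aid categories is an assumption
        super().__init__()
        # Pre-trained BERT encoder for the text modality (768-d pooled output).
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Pre-trained ConvNeXt backbone for the image modality.
        backbone = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
        backbone.classifier[2] = nn.Identity()  # drop ImageNet head, keep 768-d features
        self.image_encoder = backbone
        # Fuse the two 768-d embeddings by concatenation, then classify.
        self.head = nn.Sequential(
            nn.Linear(768 + 768, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                          # (batch, 768)
        image_feat = self.image_encoder(images)  # (batch, 768)
        return self.head(torch.cat([text_feat, image_feat], dim=1))


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiModalClassifier().eval()
batch = tokenizer(
    ["Bridge down, village cut off, urgent need for clean water"],  # made-up example post
    return_tensors="pt", padding=True, truncation=True,
)
images = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed crowd-sourced photo
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"], images)
```

For the class-imbalance problem, the abstract reports using a GAN to synthesize images of under-represented classes but does not name the architecture. A DCGAN-style generator is one common choice; a minimal sketch follows, which would be trained on the minority class (training loop not shown) and then sampled to rebalance the training set.

```python
class Generator(nn.Module):
    """DCGAN-style generator producing 32x32 RGB images from a latent vector."""

    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0, bias=False),  # -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),            # -> 32x32
            nn.Tanh(),                                                  # pixels in [-1, 1]
        )

    def forward(self, z):
        # z: (batch, latent_dim) noise, reshaped to a 1x1 spatial map.
        return self.net(z.view(z.size(0), -1, 1, 1))


synthetic = Generator()(torch.randn(8, 100))  # eight synthetic minority-class images
```

Finally, the abstract credits SHAP with interpreting the framework's behaviour, without specifying the setup. One hedged way to apply it to the fusion head above is shap.DeepExplainer over the concatenated embeddings; the random tensors here stand in for real fused training features.

```python
import shap

background = torch.randn(50, 1536)                 # stand-in for fused train embeddings
explainer = shap.DeepExplainer(model.head, background)
shap_values = explainer.shap_values(torch.randn(5, 1536))  # per-class feature attributions
```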
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S2212420923004521-main.pdf | - | 6.03 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks

Page views: 62 (as of Nov 10, 2025)
Downloads: 59 (as of Nov 10, 2025)
Scopus™ Citations: 8 (as of Dec 19, 2025)
Web of Science™ Citations: 6 (as of Dec 18, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.