Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/108503
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Building and Real Estate | - |
| dc.creator | Khanmohammadi, S | - |
| dc.creator | Golafshani, E | - |
| dc.creator | Bai, Y | - |
| dc.creator | Li, H | - |
| dc.creator | Bazli, M | - |
| dc.creator | Arashpour, M | - |
| dc.date.accessioned | 2024-08-19T01:58:48Z | - |
| dc.date.available | 2024-08-19T01:58:48Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/108503 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier BV | en_US |
| dc.rights | © 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Khanmohammadi, S., Golafshani, E., Bai, Y., Li, H., Bazli, M., & Arashpour, M. (2023). Multi-modal mining of crowd-sourced data: Efficient provision of humanitarian aid to remote regions affected by natural disasters. International Journal of Disaster Risk Reduction, 96, 103972 is available at https://doi.org/10.1016/j.ijdrr.2023.103972. | en_US |
| dc.subject | Artificial intelligence (AI) | en_US |
| dc.subject | Data mining | en_US |
| dc.subject | Deep neural networks (DNN) | en_US |
| dc.subject | Generative adversarial networks (GAN) | en_US |
| dc.subject | Humanitarian aid | en_US |
| dc.subject | SHapley additive exPlanations (SHAP) | en_US |
| dc.title | Multi-modal mining of crowd-sourced data : efficient provision of humanitarian aid to remote regions affected by natural disasters | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 96 | - |
| dc.identifier.doi | 10.1016/j.ijdrr.2023.103972 | - |
| dcterms.abstract | Data mining applications have the potential to address current deficiencies in the provision of humanitarian aid after natural disasters. Analysing text and images from crowd-sourced data simultaneously can improve the quality of humanitarian aid information. To this end, we develop a multi-modal framework that pairs Bidirectional Encoder Representations from Transformers (BERT) and its descendant ALBERT as pre-trained deep networks for the text modality with ConvNeXt, RegNet, and Faster R-CNN for the image modality. The framework demonstrates its application to classifying humanitarian aid information in three key respects. First, it shows the effective performance of ConvNeXt and BERT in classifying humanitarian aid. Second, it investigates the efficiency of generative adversarial networks (GAN) in generating synthetic images for imbalanced input datasets, an approach that improves the accuracy, precision, recall, and F1-score of the framework on unseen test data. Finally, the study highlights the potential of SHapley Additive exPlanations (SHAP) for interpreting the behaviour of the developed framework, supporting the timely classification of humanitarian aid information from crowd-sourced data after natural disasters. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | International journal of disaster risk reduction, 1 Oct. 2023, v. 96, 103972 | - |
| dcterms.isPartOf | International journal of disaster risk reduction | - |
| dcterms.issued | 2023-10-01 | - |
| dc.identifier.scopus | 2-s2.0-85170026500 | - |
| dc.identifier.eissn | 2212-4209 | - |
| dc.identifier.artn | 103972 | - |
| dc.description.validate | 202408 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Australian Research Council (ARC) | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Journal/Magazine Article | |
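The abstract above mentions SHapley Additive exPlanations (SHAP) for interpreting the framework's predictions. As a minimal illustration of the underlying idea (not the authors' implementation, and independent of the `shap` library), the sketch below computes exact Shapley values for a toy three-feature model; the model, weights, and baseline are all hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x against a baseline.

    Features outside a coalition are replaced by their baseline value
    (a simple masking convention); f takes a full feature vector.
    """
    n = len(x)
    phi = [0.0] * n
    idx = list(range(n))
    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in idx]
                without_i = [x[j] if j in S else baseline[j] for j in idx]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "classifier score": for linear models each Shapley
# value reduces to w_j * (x_j - baseline_j), which makes the result easy
# to check by hand.
w = [0.5, -1.0, 2.0]
f = lambda v: sum(wj * vj for wj, vj in zip(w, v))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

By construction the attributions sum to `f(x) - f(baseline)` (the efficiency property), which is what makes SHAP-style explanations additive.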
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 1-s2.0-S2212420923004521-main.pdf | | 6.03 MB | Adobe PDF | View/Open |
Page views: 62 (as of Nov 10, 2025)
Downloads: 59 (as of Nov 10, 2025)
SCOPUS™ citations: 8 (as of Dec 19, 2025)
Web of Science™ citations: 6 (as of Dec 18, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.