Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/108018
Title: Explainable deep learning for image-driven fire calorimetry
Authors: Wang, Z 
Zhang, T 
Huang, X 
Issue Date: Jan-2024
Source: Applied Intelligence, Jan. 2024, v. 54, no. 1, p. 1047-1062
Abstract: The rapid advancement of deep learning and computer vision has driven the intelligent evolution of fire detection, quantification, and fighting, although most AI models remain opaque black boxes. This work applies explainable deep learning methods to quantify fire power from flame images and aims to elucidate the underlying mechanism of computer-vision fire calorimetry. The process begins with the use of a pre-trained fire segmentation model to create a flame image database in four formats: (1) original RGB, (2) background-free RGB, (3) background-free grey, and (4) background-free binary. This diverse database accounts for factors such as background, color, and brightness. The synthetic database is then employed to train and test the fire-calorimetry AI model. Results highlight the dominant role of flame area in fire calorimetry, while other factors display minimal influence. Enhancing the accuracy of flame segmentation significantly reduces the error of computer-vision fire calorimetry to less than 20%. Finally, the study incorporates the Gradient-weighted Class Activation Mapping (Grad-CAM) method to visualize pixel-level contributions to fire image identification. This research deepens the understanding of vision-based fire calorimetry and provides scientific support for future AI applications in fire monitoring, digital twins, and smart firefighting.
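As a rough illustration of the four image formats described in the abstract, the Python sketch below (not from the paper; the file paths, function name, and the pre-computed flame mask are assumptions for illustration) derives the background-free RGB, grey, and binary variants from an original flame image and a segmentation mask produced by a flame segmentation model:

```python
# Minimal sketch, assuming a flame image and a binary flame mask are already on disk.
import cv2
import numpy as np

def build_image_variants(rgb_path: str, mask_path: str):
    """Return the four image formats: original RGB, background-free RGB,
    background-free grey, and background-free binary."""
    rgb = cv2.imread(rgb_path)                           # (1) original image (OpenCV loads as BGR)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)   # flame mask from a segmentation model
    mask_bin = (mask > 127).astype(np.uint8)             # 0/1 flame mask

    bg_free_rgb = rgb * mask_bin[:, :, None]             # (2) background-free RGB
    grey = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    bg_free_grey = grey * mask_bin                       # (3) background-free grey
    binary = mask_bin * 255                              # (4) background-free binary (flame silhouette)

    return rgb, bg_free_rgb, bg_free_grey, binary
```

A fire-calorimetry model trained on each variant could then be inspected with a Grad-CAM implementation (e.g. the open-source pytorch-grad-cam package) to visualize which pixels drive the prediction, which is the explainability step the abstract refers to; the details of the authors' models are not reproduced here.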
Keywords: Fire engineering
Flame images
Image analysis
Semantic segmentation
Smart firefighting
Publisher: Springer
Journal: Applied Intelligence
ISSN: 0924-669X
DOI: 10.1007/s10489-023-05231-x
Rights: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023
This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use (https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms), but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s10489-023-05231-x.
Appears in Collections:Journal/Magazine Article

Files in This Item:
File: Wang_Explainable_Deep_Learning.pdf
Description: Pre-Published version
Size: 3.34 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 77 (as of Apr 14, 2025)
Downloads: 1 (as of Apr 14, 2025)
Scopus™ Citations: 13 (as of Dec 19, 2025)
Web of Science™ Citations: 5 (as of Jan 9, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.