Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/102792
DC Field: Value [Language]
dc.contributor: Department of Building Environment and Energy Engineering [en_US]
dc.contributor: Research Institute for Smart Energy [en_US]
dc.creator: Li, A [en_US]
dc.creator: Xiao, F [en_US]
dc.creator: Zhang, C [en_US]
dc.creator: Fan, C [en_US]
dc.date.accessioned: 2023-11-17T02:57:49Z
dc.date.available: 2023-11-17T02:57:49Z
dc.identifier.issn: 0306-2619 [en_US]
dc.identifier.uri: http://hdl.handle.net/10397/102792
dc.language.iso: en [en_US]
dc.publisher: Pergamon Press [en_US]
dc.rights: © 2021 Elsevier Ltd. All rights reserved. [en_US]
dc.rights: © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/. [en_US]
dc.rights: The following publication Li, A., Xiao, F., Zhang, C., & Fan, C. (2021). Attention-based interpretable neural network for building cooling load prediction. Applied Energy, 299, 117238 is available at https://doi.org/10.1016/j.apenergy.2021.117238. [en_US]
dc.subject: Attention mechanism [en_US]
dc.subject: Building energy management [en_US]
dc.subject: Cooling load prediction [en_US]
dc.subject: Interpretable machine learning [en_US]
dc.subject: Recurrent neural network [en_US]
dc.title: Attention-based interpretable neural network for building cooling load prediction [en_US]
dc.type: Journal/Magazine Article [en_US]
dc.identifier.volume: 299 [en_US]
dc.identifier.doi: 10.1016/j.apenergy.2021.117238 [en_US]
dcterms.abstract: Machine learning has gained increasing popularity in building energy management owing to its powerful and flexible model development capabilities and the rich data available in modern buildings. As machine learning becomes more powerful, however, the models developed, especially artificial neural networks such as Recurrent Neural Networks (RNN), are becoming more complex, resulting in "darker" models with lower interpretability. The sophisticated inference mechanism behind machine learning prevents ordinary building professionals from understanding the models, thereby lowering trust in the predictions made. To address this, attention mechanisms have been widely adopted to improve the interpretability of deep learning; they enable a deep learning-based model to track how different inputs influence the outputs at each step of inference. [en_US]
dcterms.abstract: This paper proposes a novel neural network architecture with an attention mechanism for RNN-based building energy prediction, and investigates the effectiveness of the attention mechanism in improving the interpretability of RNN models developed for 24-hour-ahead building cooling load prediction. To better understand, explain and evaluate these neural network-based building energy prediction models, the obtained attention vectors (or matrices) are used to visualize the influence of different parts of the model inputs on the prediction results. This helps users understand why the model makes its predictions, as well as how the input sequences proportionally influence the output sequences. Further analysis of the attention vectors can reveal useful temporal information about building thermal dynamics, such as the thermal inertia of the building. The proposed attention-based architecture can be applied in developing optimal operation control strategies and improving demand and supply management. A model developed on this architecture is assessed using real building operational data and shows improved accuracy and interpretability over baseline models without attention mechanisms. The research results help bridge the gap between building professionals and advanced machine learning techniques, and the insights obtained can guide the development, fine-tuning, explanation and debugging of data-driven building energy prediction models. [en_US]
dcterms.accessRights: open access [en_US]
dcterms.bibliographicCitation: Applied energy, 1 Oct. 2021, v. 299, 117238 [en_US]
dcterms.isPartOf: Applied energy [en_US]
dcterms.issued: 2021-10-01
dc.identifier.scopus: 2-s2.0-85108602591
dc.identifier.eissn: 1872-9118 [en_US]
dc.identifier.artn: 117238 [en_US]
dc.description.validate: 202310 bckw [en_US]
dc.description.oa: Accepted Manuscript [en_US]
dc.identifier.FolderNumber: BEEE-0036
dc.description.fundingSource: RGC [en_US]
dc.description.pubStatus: Published [en_US]
dc.identifier.OPUS: 56345094
dc.description.oaCategory: Green (AAM) [en_US]
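The abstract above describes using attention weights over RNN hidden states to show how much each input time step contributes to a cooling load prediction. As a minimal illustrative sketch only (not the paper's actual architecture; the additive-attention form, names, and dimensions here are assumptions), attention pooling over a 24-step hourly sequence of mock hidden states could look like:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, W, v):
    """Additive attention over a (T, H) sequence of RNN hidden states.

    score_t = v . tanh(W h_t); alpha = softmax(scores).
    Returns the attended context vector and the per-step weights,
    which are what an analyst would visualize for interpretability.
    """
    scores = np.tanh(hidden_states @ W) @ v   # shape (T,)
    alpha = softmax(scores)                   # weights sum to 1 across steps
    context = alpha @ hidden_states           # weighted sum, shape (H,)
    return context, alpha

rng = np.random.default_rng(0)
T, H = 24, 8                      # 24 hourly steps, hypothetical hidden size 8
h = rng.standard_normal((T, H))   # stand-in for RNN hidden states
W = rng.standard_normal((H, H))   # assumed learnable attention parameters
v = rng.standard_normal(H)
context, alpha = attention_pool(h, W, v)
print(alpha.shape, context.shape)
```

Because `alpha` is a probability distribution over the 24 input hours, plotting it against the time axis shows which past hours the model attends to, which is the kind of temporal-influence visualization the abstract refers to.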
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Li_Attention-Based_Interpretable_Neural.pdf (Pre-Published version, 1.62 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 156 (last week: 5), as of Nov 9, 2025
Downloads: 286, as of Nov 9, 2025
SCOPUS™ citations: 201, as of Dec 19, 2025
Web of Science™ citations: 172, as of Dec 18, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.