Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/111763
DC Field  Value  Language
dc.contributor  Department of Electrical and Electronic Engineering  -
dc.creator  Liao, W  -
dc.creator  Fang, J  -
dc.creator  Ye, L  -
dc.creator  Bak-Jensen, B  -
dc.creator  Yang, Z  -
dc.creator  Porte-Agel, F  -
dc.date.accessioned  2025-03-14T03:56:57Z  -
dc.date.available  2025-03-14T03:56:57Z  -
dc.identifier.issn  0306-2619  -
dc.identifier.uri  http://hdl.handle.net/10397/111763  -
dc.language.iso  en  en_US
dc.publisher  Elsevier Ltd  en_US
dc.rights  © 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).  en_US
dc.rights  The following publication Liao, W., Fang, J., Ye, L., Bak-Jensen, B., Yang, Z., & Porte-Agel, F. (2024). Can we trust explainable artificial intelligence in wind power forecasting? Applied Energy, 376, 124273 is available at https://doi.org/10.1016/j.apenergy.2024.124273.  en_US
dc.subject  Explainable artificial intelligence  en_US
dc.subject  Neural network  en_US
dc.subject  Time series  en_US
dc.subject  Trustworthy  en_US
dc.subject  Wind power forecast  en_US
dc.title  Can we trust explainable artificial intelligence in wind power forecasting?  en_US
dc.type  Journal/Magazine Article  en_US
dc.identifier.volume  376  -
dc.identifier.doi  10.1016/j.apenergy.2024.124273  -
dcterms.abstract  Advanced artificial intelligence (AI) models typically achieve high accuracy in wind power forecasting, but their internal mechanisms lack interpretability, which undermines user confidence in the forecasts and in the strategies executed on them. To address this, this paper investigates the interpretability of AI models, which is crucial but often overlooked in wind power forecasting. Specifically, four model-agnostic explainable artificial intelligence (XAI) techniques (i.e., Shapley additive explanations, permutation feature importance, partial dependence plots, and local interpretable model-agnostic explanations) are tailored to provide global and instance-level interpretability for AI models in wind power forecasting. Several metrics are then proposed to evaluate the trustworthiness of the interpretations provided by these XAI techniques. Simulation results demonstrate that the proposed XAI techniques not only identify important features in wind power datasets, but also reveal the contribution of each feature to the forecast power output for a specific sample. Furthermore, the proposed evaluation metrics help users comprehensively assess the trustworthiness of XAI techniques in wind power forecasting, enabling them to judiciously select suitable XAI techniques for their AI models.  -
dcterms.accessRights  open access  en_US
dcterms.bibliographicCitation  Applied energy, 15 Dec. 2024, v. 376, 124273  -
dcterms.isPartOf  Applied energy  -
dcterms.issued  2024-12-15  -
dc.identifier.scopus  2-s2.0-85201714696  -
dc.identifier.eissn  1872-9118  -
dc.identifier.artn  124273  -
dc.description.validate  202503 bcch  -
dc.description.oa  Version of Record  en_US
dc.identifier.FolderNumber  OA_Scopus/WOS  en_US
dc.description.fundingSource  Self-funded  en_US
dc.description.pubStatus  Published  en_US
dc.description.oaCategory  CC  en_US
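The abstract names four model-agnostic XAI techniques. As a minimal sketch of one of them, permutation feature importance, the snippet below ranks features for a surrogate forecasting model on synthetic data; the toy "wind" features, the power-curve formula, and the gradient-boosting model are illustrative assumptions, not the paper's actual data or methodology.

```python
# Sketch of permutation feature importance, one of the four XAI techniques
# listed in the abstract. Dataset and model are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
wind_speed = rng.uniform(3, 25, n)     # m/s -- drives the toy power curve
wind_dir = rng.uniform(0, 360, n)      # degrees -- no influence here
temperature = rng.uniform(-5, 30, n)   # deg C -- no influence here
X = np.column_stack([wind_speed, wind_dir, temperature])
# Toy power curve: cubic in wind speed, saturating at rated speed (12 m/s).
y = np.clip(wind_speed, 0, 12) ** 3 / 1728 + 0.01 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature column in turn and measure the drop in test score:
# a large drop marks a feature the model genuinely relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, imp in zip(["wind_speed", "wind_dir", "temperature"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the synthetic target depends only on wind speed, wind_speed should dominate the ranking, while the two inert features score near zero; this global-importance view is the kind of interpretation whose trustworthiness the paper's metrics are designed to evaluate.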
Appears in Collections: Journal/Magazine Article
Files in This Item:
File  Description  Size  Format
1-s2.0-S0306261924016568-main.pdf    4.83 MB  Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 10 (as of Apr 14, 2025)
Downloads: 4 (as of Apr 14, 2025)
SCOPUS™ Citations: 24 (as of Dec 19, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.