Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/111763
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | - |
| dc.creator | Liao, W | - |
| dc.creator | Fang, J | - |
| dc.creator | Ye, L | - |
| dc.creator | Bak-Jensen, B | - |
| dc.creator | Yang, Z | - |
| dc.creator | Porte-Agel, F | - |
| dc.date.accessioned | 2025-03-14T03:56:57Z | - |
| dc.date.available | 2025-03-14T03:56:57Z | - |
| dc.identifier.issn | 0306-2619 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/111763 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.rights | © 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Liao, W., Fang, J., Ye, L., Bak-Jensen, B., Yang, Z., & Porte-Agel, F. (2024). Can we trust explainable artificial intelligence in wind power forecasting? Applied Energy, 376, 124273 is available at https://doi.org/10.1016/j.apenergy.2024.124273. | en_US |
| dc.subject | Explainable artificial intelligence | en_US |
| dc.subject | Neural network | en_US |
| dc.subject | Time series | en_US |
| dc.subject | Trustworthy | en_US |
| dc.subject | Wind power forecast | en_US |
| dc.title | Can we trust explainable artificial intelligence in wind power forecasting? | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 376 | - |
| dc.identifier.doi | 10.1016/j.apenergy.2024.124273 | - |
| dcterms.abstract | Advanced artificial intelligence (AI) models typically achieve high accuracy in wind power forecasting, but their internal mechanisms lack interpretability, which undermines user confidence in forecast value and strategy execution. To this end, this paper aims to investigate the interpretability of AI models, which is crucial but usually overlooked in wind power forecasting. Specifically, four model-agnostic explainable artificial intelligence (XAI) techniques (i.e., Shapley additive explanations, permutation feature importance, partial dependence plot, and local interpretable model-agnostic explanations) are tailored to provide global and instance interpretability for AI models in wind power forecasting. Then, several metrics are proposed to evaluate the trustworthiness of interpretations provided by XAI techniques. Simulation results demonstrate that the proposed XAI techniques can not only identify important features from wind power datasets, but also enable the understanding of the contribution of each feature to the forecast power output for a specific sample. Furthermore, the proposed evaluation metrics aid users in comprehensively assessing the trustworthiness of XAI techniques in wind power forecasting, enabling them to judiciously select suitable XAI techniques for their AI models. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Applied energy, 15 Dec. 2024, v. 376, 124273 | - |
| dcterms.isPartOf | Applied energy | - |
| dcterms.issued | 2024-12-15 | - |
| dc.identifier.scopus | 2-s2.0-85201714696 | - |
| dc.identifier.eissn | 1872-9118 | - |
| dc.identifier.artn | 124273 | - |
| dc.description.validate | 202503 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
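The abstract above names four model-agnostic XAI techniques, including permutation feature importance. As an illustrative sketch only (not the paper's code), the idea behind permutation feature importance can be shown with a synthetic stand-in for a wind power dataset and a trivial least-squares "forecast model"; all variable names and the toy data here are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a wind dataset: wind speed dominates power output.
n = 500
X = rng.normal(size=(n, 3))  # columns: wind speed, direction, temperature
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

# A trivial "forecast model": ordinary least-squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(M):
    return M @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation feature importance: shuffle one column at a time and
# measure how much the forecast error grows relative to the baseline.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)

print(importances)  # the wind-speed column should dominate
```

Shuffling a feature breaks its association with the target while keeping its marginal distribution, so the error increase measures how much the model relies on it; the paper applies this and three other techniques (SHAP, PDP, LIME) to actual AI forecasting models.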
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 1-s2.0-S0306261924016568-main.pdf | - | 4.83 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.