Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/116936
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Building and Real Estate | - |
| dc.creator | Wang, X | - |
| dc.creator | Kee, T | - |
| dc.date.accessioned | 2026-01-21T03:54:07Z | - |
| dc.date.available | 2026-01-21T03:54:07Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/116936 | - |
| dc.language.iso | en | en_US |
| dc.publisher | MDPI AG | en_US |
| dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Wang, X., & Kee, T. (2025). Integrating Shapley Value and Least Core Attribution for Robust Explainable AI in Rent Prediction. Buildings, 15(17), 3133 is available at https://doi.org/10.3390/buildings15173133. | en_US |
| dc.subject | Least core | en_US |
| dc.subject | Rent prediction | en_US |
| dc.subject | Shapley value | en_US |
| dc.subject | XAI | en_US |
| dc.title | Integrating Shapley Value and Least Core Attribution for Robust Explainable AI in Rent Prediction | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 15 | - |
| dc.identifier.issue | 17 | - |
| dc.identifier.doi | 10.3390/buildings15173133 | - |
| dcterms.abstract | With the widespread application of artificial intelligence in real estate price prediction, model explainability has become a critical factor influencing its acceptability and trustworthiness. The Shapley value, as a classic cooperative game theory method, quantifies the average marginal contribution of each feature, ensuring global fairness in the allocation of explanations. However, its focus on average fairness lacks robustness under data perturbations, model changes, and adversarial attacks. To address this limitation, this paper proposes a hybrid explainability framework that integrates the Shapley value and Least Core attribution. The framework leverages the Least Core theory by formulating a linear programming problem to minimize the maximum dissatisfaction of feature subsets, providing bottom-line fairness. Furthermore, the attributions from the Shapley value and Least Core are combined through a weighted fusion approach, where the weight acts as a tunable hyperparameter to balance global fairness and worst-case robustness. The proposed framework is seamlessly integrated into mainstream machine learning models such as XGBoost. Empirical evaluations on real-world real estate rental data demonstrate that this hybrid attribution method not only preserves the global fairness of the Shapley value but also significantly enhances explanation consistency and trustworthiness under various data perturbations. This study provides a new perspective for robust explainable AI in high-risk decision-making scenarios and holds promising potential for practical applications. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Buildings, Sept 2025, v. 15, no. 17, 3133 | - |
| dcterms.isPartOf | Buildings | - |
| dcterms.issued | 2025-09 | - |
| dc.identifier.scopus | 2-s2.0-105015480420 | - |
| dc.identifier.eissn | 2075-5309 | - |
| dc.identifier.artn | 3133 | - |
| dc.description.validate | 202601 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Journal/Magazine Article | |
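The hybrid attribution described in the abstract — a weighted combination of Shapley values with a Least Core allocation obtained by linear programming — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the 3-feature toy game, its coalition values, and the fusion weight λ = 0.5 are all hypothetical, and SciPy's `linprog` stands in for whichever LP solver the paper uses.

```python
from itertools import combinations
from math import factorial

from scipy.optimize import linprog

# Toy 3-feature cooperative game (illustrative numbers, NOT the paper's data):
# v maps each feature coalition to the payoff it can secure on its own.
v = {
    frozenset(): 0.0,
    frozenset({0}): 2.0, frozenset({1}): 1.0, frozenset({2}): 1.0,
    frozenset({0, 1}): 5.0, frozenset({0, 2}): 4.0, frozenset({1, 2}): 3.0,
    frozenset({0, 1, 2}): 9.0,
}
n = 3
players = range(n)

def shapley_values():
    """Exact Shapley values: weighted average of marginal contributions."""
    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            for s in combinations(others, size):
                S = frozenset(s)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v[S | {i}] - v[S])
    return phi

def least_core():
    """Least-core allocation via an LP over variables [x_0..x_{n-1}, eps]:
    minimise eps subject to x(S) + eps >= v(S) for every proper coalition S
    (i.e. minimise the maximum coalition dissatisfaction), with the
    efficiency constraint x(N) = v(N)."""
    A_ub, b_ub = [], []
    for size in range(1, n):
        for s in combinations(players, size):
            # -x(S) - eps <= -v(S)
            A_ub.append([-1.0 if j in s else 0.0 for j in players] + [-1.0])
            b_ub.append(-v[frozenset(s)])
    A_eq = [[1.0] * n + [0.0]]
    b_eq = [v[frozenset(players)]]
    c = [0.0] * n + [1.0]  # objective: minimise eps only
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return list(res.x[:n])

lam = 0.5  # fusion weight: 1.0 -> pure Shapley, 0.0 -> pure Least Core
phi, x = shapley_values(), least_core()
hybrid = [lam * p + (1.0 - lam) * q for p, q in zip(phi, x)]
print("shapley  :", phi)
print("leastcore:", x)
print("hybrid   :", hybrid)
```

Because both the Shapley values and the Least Core allocation distribute exactly v(N) across the features (efficiency), any λ-weighted fusion of the two also sums to v(N); λ only shifts the balance between average fairness and worst-case robustness.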
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| buildings-15-03133-v3.pdf |  | 2.12 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.