Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116936
DC Field | Value | Language
dc.contributor | Department of Building and Real Estate | -
dc.creator | Wang, X | -
dc.creator | Kee, T | -
dc.date.accessioned | 2026-01-21T03:54:07Z | -
dc.date.available | 2026-01-21T03:54:07Z | -
dc.identifier.uri | http://hdl.handle.net/10397/116936 | -
dc.language.iso | en | en_US
dc.publisher | MDPI AG | en_US
dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Wang, X., & Kee, T. (2025). Integrating Shapley Value and Least Core Attribution for Robust Explainable AI in Rent Prediction. Buildings, 15(17), 3133 is available at https://doi.org/10.3390/buildings15173133. | en_US
dc.subject | Least core | en_US
dc.subject | Rent prediction | en_US
dc.subject | Shapley value | en_US
dc.subject | XAI | en_US
dc.title | Integrating Shapley value and Least Core attribution for robust explainable AI in rent prediction | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 15 | -
dc.identifier.issue | 17 | -
dc.identifier.doi | 10.3390/buildings15173133 | -
dcterms.abstract | With the widespread application of artificial intelligence in real estate price prediction, model explainability has become a critical factor influencing its acceptability and trustworthiness. The Shapley value, as a classic cooperative game theory method, quantifies the average marginal contribution of each feature, ensuring global fairness in the explanation allocation. However, its focus on average fairness lacks robustness under data perturbations, model changes, and adversarial attacks. To address this limitation, this paper proposes a hybrid explainability framework that integrates the Shapley value and Least Core attribution. The framework leverages the Least Core theory by formulating a linear programming problem to minimize the maximum dissatisfaction of feature subsets, providing bottom-line fairness. Furthermore, the attributions from the Shapley value and Least Core are combined through a weighted fusion approach, where the weight acts as a tunable hyperparameter to balance the global fairness and worst-case robustness. The proposed framework is seamlessly integrated into mainstream machine learning models such as XGBoost. Empirical evaluations on real-world real estate rental data demonstrate that this hybrid attribution method not only preserves the global fairness of the Shapley value but also significantly enhances the explanation consistency and trustworthiness under various data perturbations. This study provides a new perspective for robust explainable AI in high-risk decision-making scenarios and holds promising potential for practical applications. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Buildings, Sept 2025, v. 15, no. 17, 3133 | -
dcterms.isPartOf | Buildings | -
dcterms.issued | 2025-09 | -
dc.identifier.scopus | 2-s2.0-105015480420 | -
dc.identifier.eissn | 2075-5309 | -
dc.identifier.artn | 3133 | -
dc.description.validate | 202601 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
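The fusion scheme described in the abstract — exact Shapley values, a Least Core allocation obtained by a linear program that minimizes the maximum coalition dissatisfaction, and a weighted combination of the two — can be sketched on a toy cooperative game. This is an illustrative sketch only, not the authors' implementation: the feature names, the subset "worth" numbers, the fusion weight `alpha = 0.7`, and the use of SciPy's `linprog` as the LP solver are all assumptions made for the example.

```python
from itertools import combinations
from math import factorial

from scipy.optimize import linprog  # assumed LP solver; the paper does not name one


def shapley_values(players, v):
    """Exact Shapley values: probability-weighted marginal contributions."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi


def least_core(players, v):
    """Least Core via LP: minimize the largest coalition dissatisfaction eps,
    subject to sum_{i in S} x_i >= v(S) - eps for every proper coalition S,
    while allocating the grand-coalition value exactly."""
    n = len(players)
    idx = {p: j for j, p in enumerate(players)}
    A_ub, b_ub = [], []
    for k in range(1, n):                      # all proper, nonempty coalitions
        for S in combinations(players, k):
            row = [0.0] * (n + 1)
            for p in S:
                row[idx[p]] = -1.0
            row[-1] = -1.0                     # -sum_{i in S} x_i - eps <= -v(S)
            A_ub.append(row)
            b_ub.append(-v(frozenset(S)))
    c = [0.0] * n + [1.0]                      # objective: minimize eps
    A_eq = [[1.0] * n + [0.0]]                 # efficiency: sum_i x_i = v(N)
    b_eq = [v(frozenset(players))]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return {p: res.x[idx[p]] for p in players}, res.x[-1]


# Hypothetical characteristic function: the "worth" of each feature subset
# (e.g. a scaled validation score) -- the numbers are purely illustrative.
worth = {frozenset(): 0, frozenset({"size"}): 4, frozenset({"loc"}): 3,
         frozenset({"age"}): 1, frozenset({"size", "loc"}): 9,
         frozenset({"size", "age"}): 6, frozenset({"loc", "age"}): 5,
         frozenset({"size", "loc", "age"}): 11}
v = lambda S: worth[frozenset(S)]
players = ["size", "loc", "age"]

phi = shapley_values(players, v)               # global-fairness attribution
lc, eps = least_core(players, v)               # worst-case-robust attribution
alpha = 0.7                                    # tunable fusion hyperparameter
fused = {p: alpha * phi[p] + (1 - alpha) * lc[p] for p in players}
```

Both attribution vectors sum to the grand-coalition value, so the fused attribution does too for any `alpha` in [0, 1]; `alpha = 1` recovers the pure Shapley explanation and `alpha = 0` the pure Least Core one.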
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
buildings-15-03133-v3.pdf | | 2.12 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.