Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115607
DC Field | Value | Language
dc.contributor | Department of Logistics and Maritime Studies | en_US
dc.creator | Yang, Y | en_US
dc.creator | Wang, S | en_US
dc.creator | Laporte, G | en_US
dc.date.accessioned | 2025-10-08T01:17:00Z | -
dc.date.available | 2025-10-08T01:17:00Z | -
dc.identifier.issn | 0894-069X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/115607 | -
dc.language.iso | en | en_US
dc.publisher | John Wiley & Sons, Inc. | en_US
dc.rights | This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | en_US
dc.rights | © 2025 The Author(s). Naval Research Logistics published by Wiley Periodicals LLC. | en_US
dc.rights | The following publication Yang, Y., S. Wang, and G. Laporte. 2025. “Improved Regression Tree Models Using Generalization Error-Based Splitting Criteria.” Naval Research Logistics (NRL) 72, no. 8: 1097–1116 is available at https://doi.org/10.1002/nav.22270. | en_US
dc.subject | Generalization error | en_US
dc.subject | Leave-one-out cross-validation | en_US
dc.subject | Mean squared error | en_US
dc.subject | Regression tree | en_US
dc.title | Improved regression tree models using generalization error-based splitting criteria | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1097 | en_US
dc.identifier.epage | 1116 | en_US
dc.identifier.volume | 72 | en_US
dc.identifier.issue | 8 | en_US
dc.identifier.doi | 10.1002/nav.22270 | en_US
dcterms.abstract | Despite the widespread application of machine learning (ML) approaches such as the regression tree (RT) in the field of data-driven optimization, overfitting may impair the effectiveness of ML models and thus hinder the deployment of ML for decision-making. In particular, we address the overfitting issue of the traditional RT splitting criterion with a limited sample size, which considers only the training mean squared error, and we accurately specify the mathematical formula for the generalization error. We introduce two novel splitting criteria based on generalization error, which offer higher-quality approximations of the generalization error than the traditional training error does. One criterion is formulated through a mathematical derivation based on the RT model, and the second is established through leave-one-out cross-validation (LOOCV). We construct RT models using our proposed generalization error-based splitting criteria from extensive ML benchmark instances and report the experimental results, including the models' computational efficiency, prediction accuracy, and robustness. Our findings endorse the superior efficacy and robustness of the RT model based on the refined LOOCV-informed splitting criterion, marking substantial improvements over those of the traditional RT model. Additionally, our tree structure analysis provides insights into how our proposed LOOCV-informed splitting criterion guides the model in striking a balance between a complex tree structure and accurate predictions. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Naval research logistics, Dec. 2025, v. 72, no. 8, p. 1097-1116 | en_US
dcterms.isPartOf | Naval research logistics | en_US
dcterms.issued | 2025-12 | -
dc.identifier.scopus | 2-s2.0-105007764781 | -
dc.identifier.eissn | 1520-6750 | en_US
dc.description.validate | 202510 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Wiley (2025) | en_US
dc.description.oaCategory | TA | en_US
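The abstract above contrasts the traditional training-MSE splitting criterion with an LOOCV-informed one. The following minimal sketch (not the authors' exact criterion; the function name and data are illustrative assumptions) shows the underlying idea for a single candidate split of 1-D data: because each leaf of a regression tree predicts its training mean, the leave-one-out residual has the closed form r_loo = (y_i − ȳ)·n/(n − 1), so the LOOCV error of a split can be scored without refitting.

```python
import numpy as np

def split_scores(x, y, threshold):
    """Score one candidate split at `threshold` (illustrative sketch).

    Each leaf predicts its training mean. Returns (training MSE,
    leave-one-out CV MSE); the LOOCV residual of a leaf mean has the
    closed form r_loo = (y_i - ybar) * n / (n - 1), so no refitting
    is needed.
    """
    train_sse = loo_sse = 0.0
    for leaf in (y[x <= threshold], y[x > threshold]):
        n = len(leaf)
        if n < 2:                    # a singleton leaf cannot be cross-validated
            return None
        r = leaf - leaf.mean()       # training residuals in this leaf
        train_sse += np.sum(r ** 2)
        loo_sse += np.sum((r * n / (n - 1)) ** 2)
    m = len(y)
    return train_sse / m, loo_sse / m

# Synthetic step data (an assumption for illustration only).
rng = np.random.default_rng(0)
x = rng.uniform(size=40)
y = np.where(x > 0.5, 1.0, 0.0) + rng.normal(scale=0.1, size=40)
train_mse, loocv_mse = split_scores(x, y, 0.5)
# The LOOCV estimate is strictly more pessimistic than the training MSE;
# that optimism of the training error is what drives overfitting when
# splits are chosen by training MSE alone.
```

Selecting the split that minimizes the LOOCV score rather than the training score is the kind of generalization-error-based criterion the paper studies; the paper's derivation-based criterion is not reproduced here.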
Appears in Collections: Journal/Magazine Article

Files in This Item:
File | Description | Size | Format
Yang_Improved_Regression_Tree.pdf | | 1.48 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.