Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118294
DC Field | Value | Language
dc.contributor | Department of Civil and Environmental Engineering | en_US
dc.contributor | Research Centre for Resources Engineering towards Carbon Neutrality | en_US
dc.contributor | Department of Industrial and Systems Engineering | en_US
dc.creator | Jiang, Y | en_US
dc.creator | Wang, R | en_US
dc.creator | Xuan, D | en_US
dc.creator | Cheung, CF | en_US
dc.creator | Poon, CS | en_US
dc.date.accessioned | 2026-03-31T02:16:19Z | -
dc.date.available | 2026-03-31T02:16:19Z | -
dc.identifier.issn | 0956-053X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/118294 | -
dc.language.iso | en | en_US
dc.publisher | Pergamon Press | en_US
dc.subject | Composition recognition | en_US
dc.subject | Construction waste | en_US
dc.subject | Deep learning | en_US
dc.subject | Recycling | en_US
dc.subject | Sorting | en_US
dc.title | Comparative analysis of three methods for estimating the compositions of construction waste | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 210 | en_US
dc.identifier.epage | 220 | en_US
dc.identifier.volume | 194 | en_US
dc.identifier.doi | 10.1016/j.wasman.2025.01.009 | en_US
dcterms.abstract | Determining the relative compositions of mixed construction waste is a crucial step in enhancing resource management. This information influences the design of construction waste recycling and sorting facilities and aids in formulating effective management strategies for recycled and sorted waste products. However, different methods for waste sorting and composition recognition have distinct characteristics and apply only to specific practical scenarios. In this study, three methods are compared: (i) manual sorting as a reference method, (ii) a manual image recognition method using infrared thermal imaging, and (iii) a deep learning-based image recognition method based on the SegFormer semantic segmentation model. The comparison focuses on accuracy, preferences, prerequisites, socio-environmental impacts, costs, and improvement potential. Results show that both the manual and deep learning-based image recognition methods yield accuracy comparable to manual sorting for inert waste, with relative errors below 5.2%, but relatively higher recognition errors for non-inert waste. Overall, manual sorting remains the most cost-effective and fastest method, despite its high labor demand, spatial constraints, environmental impacts, and limited improvement potential. In comparison, manual image recognition requires approximately 9.2 times the processing time and 2.3 times the cost of manual sorting, while deep learning-based image recognition incurs about 9.9 times the time and 2.5 times the cost. Nevertheless, both image recognition methods offer potential environmental benefits and long-term efficiency gains. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Waste management, 15 Feb. 2025, v. 194, p. 210-220 | en_US
dcterms.isPartOf | Waste management | en_US
dcterms.issued | 2025-02-15 | -
dc.identifier.scopus | 2-s2.0-85214891647 | -
dc.identifier.pmid | 39823854 | -
dc.identifier.eissn | 1879-2456 | en_US
dc.description.validate | 202603 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G001373/2025-12 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The authors wish to thank the financial and technical support of the Civil Engineering and Development Department of the Hong Kong SAR Government and the Research Centre for Resources Engineering towards Carbon Neutrality of The Hong Kong Polytechnic University (Project code: 1-BBES). | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2027-02-15 | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections:Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2027-02-15

SCOPUS™ Citations: 4 (as of Apr 2, 2026)
Web of Science™ Citations: 4 (as of Apr 2, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.