Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109570
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | -
dc.creator | Zhou, Y | -
dc.creator | Mok, PY | -
dc.date.accessioned | 2024-11-08T06:09:47Z | -
dc.date.available | 2024-11-08T06:09:47Z | -
dc.identifier.issn | 1751-9632 | -
dc.identifier.uri | http://hdl.handle.net/10397/109570 | -
dc.language.iso | en | en_US
dc.publisher | The Institution of Engineering and Technology | en_US
dc.rights | © 2023 The Authors. IET Computer Vision published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. | en_US
dc.rights | This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made. | en_US
dc.rights | The following publication Zhou, Y., & Mok, P. Y. (2024). Enhancing human parsing with region-level learning. IET Computer Vision, 18(1), 60-71 is available at https://doi.org/10.1049/cvi2.12222. | en_US
dc.subject | Computer vision | en_US
dc.subject | Image processing | en_US
dc.subject | Image segmentation | en_US
dc.subject | Pose estimation | en_US
dc.title | Enhancing human parsing with region-level learning | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 60 | -
dc.identifier.epage | 71 | -
dc.identifier.volume | 18 | -
dc.identifier.issue | 1 | -
dc.identifier.doi | 10.1049/cvi2.12222 | -
dcterms.abstract | Human parsing is important in a diverse range of industrial applications. Despite considerable progress, the performance of existing methods is still less than satisfactory, because these methods learn the shared features of various parsing labels at the image level. This limits the representativeness of the learnt features, especially when the distribution of parsing labels is imbalanced or the scales of different labels differ substantially. To address this limitation, a Region-level Parsing Refiner (RPR) is proposed to enhance parsing performance by introducing region-level parsing learning. Region-level parsing focuses specifically on small regions of the body, for example, the head. The proposed RPR is an adaptive module that can be integrated with different existing human parsing models to improve their performance. Extensive experiments are conducted on two benchmark datasets, and the results demonstrate the effectiveness of the RPR model in improving overall parsing performance as well as in parsing rare labels. The method has been successfully applied in a commercial application for extracting human body measurements and is used on various online shopping platforms for clothing size recommendation. The code and dataset are released at https://github.com/applezhouyp/PRP. (An illustrative sketch of the region-level refinement idea follows this metadata table.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IET computer vision, Feb. 2024, v. 18, no. 1, p. 60-71 | -
dcterms.isPartOf | IET computer vision | -
dcterms.issued | 2024-02 | -
dc.identifier.scopus | 2-s2.0-85164533528 | -
dc.identifier.eissn | 1751-9640 | -
dc.description.validate | 202411 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Laboratory for Artificial Intelligence in Design; Innovation and Technology Fund, Hong Kong | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
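The abstract above describes the Region-level Parsing Refiner only at a high level. The following is a minimal PyTorch-style sketch of that idea, assuming a base parser that outputs per-pixel label logits; all names here (RegionLevelRefiner, region_parser, the 224x224 working resolution) are hypothetical illustrations and are not taken from the authors' released code at https://github.com/applezhouyp/PRP.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionLevelRefiner(nn.Module):
    # Hypothetical sketch: refine a base parser's prediction for one body
    # region (e.g. the head) by re-parsing a crop of that region and fusing
    # the refined logits back into the global parse. Not the authors' code.
    def __init__(self, base_parser, region_parser, region_classes):
        super().__init__()
        self.base_parser = base_parser        # any existing human parsing model
        self.region_parser = region_parser    # small model for the cropped region
        self.region_classes = region_classes  # label indices belonging to the region

    def forward(self, image):
        # Global parse: (1, C, H, W) logits; batch size 1 assumed for brevity.
        logits = self.base_parser(image)
        pred = logits.argmax(dim=1)[0]        # (H, W) hard labels
        mask = torch.zeros_like(pred, dtype=torch.bool)
        for c in self.region_classes:
            mask |= pred == c                 # pixels assigned to the region
        if not mask.any():
            return logits                     # region absent: keep the base parse
        ys, xs = torch.nonzero(mask, as_tuple=True)
        y0, y1 = int(ys.min()), int(ys.max()) + 1
        x0, x1 = int(xs.min()), int(xs.max()) + 1
        # Re-parse the region crop at a fixed working resolution.
        crop = image[:, :, y0:y1, x0:x1]
        refined = self.region_parser(
            F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False))
        refined = F.interpolate(
            refined, size=(y1 - y0, x1 - x0), mode="bilinear", align_corners=False)
        # Fuse: overwrite the region's label channels inside the bounding box.
        out = logits.clone()
        out[:, self.region_classes, y0:y1, x0:x1] = refined
        return out

The sketch fuses the refined logits only inside the region's bounding box, which is one plausible way an adaptive module of this kind could be attached to different existing parsing models, as the abstract describes, without retraining them end to end.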
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Zhou_Enhancing_Human_Parsing.pdf | | 1.12 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record