Please use this identifier to cite or link to this item:
DC Field: Value
dc.contributor: Institute of Textiles and Clothing
dc.contributor: Chinese Mainland Affairs Office
dc.creator: Zhou, YH
dc.creator: Mok, PY
dc.creator: Zhou, SJ
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see
dc.rights: The following publication Y. Zhou, P. Y. Mok and S. Zhou, "A Part-Based Deep Neural Network Cascade Model for Human Parsing," in IEEE Access, vol. 7, pp. 160101-160111, 2019 is available at
dc.subject: Image segmentation
dc.subject: Feature extraction
dc.subject: Neural networks
dc.subject: Human parsing
dc.subject: Deep learning
dc.subject: Fashion parsing
dc.subject: Image understanding
dc.subject: Convolutional neural networks
dc.title: A part-based deep neural network cascade model for human parsing
dc.type: Journal/Magazine Article
dcterms.abstract: Human parsing is important for image-based human-centric and clothing analyses. With the development of deep neural networks, several deep human parsing methods have recently been proposed that substantially improve parsing accuracy. However, these methods parse some small localized regions (such as sunglasses) poorly. In this paper, we propose a Part-based Human Parsing Cascade (PHPC) to segment human images, imitating how people observe a human image: at first glance, they quickly scan the whole photograph to locate the face, and then examine the body parts to see what clothing the person is wearing. This observational mechanism of human vision guides the cascade design of our network, in which a head-parsing sub-network and a body-parsing sub-network are integrated into a cascade of human parsing networks. The head- and body-parsing sub-networks focus on the head and body classes, respectively, adding attention to the head and body regions in the final network. Comprehensive evaluations on the ATR dataset demonstrate the effectiveness of our method.
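The cascade idea in the abstract, part-specific branches adding attention to the full parser's per-pixel class scores before a final per-pixel decision, can be sketched as follows. This is a minimal illustrative fusion step in plain Python; the function name `fuse_cascade`, the additive-attention form, and the weight `alpha` are all assumptions for illustration, not the paper's actual formulation.

```python
def fuse_cascade(full_scores, part_scores, part_classes, alpha=0.5):
    """Fuse a part branch's scores onto the full parser's scores.

    full_scores: list over C global classes of HxW score grids (nested lists)
    part_scores: list over len(part_classes) classes of HxW grids from a
                 part branch (e.g. the head-parsing sub-network)
    part_classes: indices mapping each part-branch class to a global class
    Returns the per-pixel argmax label map as an HxW nested list.
    """
    # Deep-copy the full parser's scores so fusion does not mutate the input.
    fused = [[row[:] for row in cls_map] for cls_map in full_scores]
    # Add weighted part-branch attention onto the matching global classes.
    for branch_idx, cls in enumerate(part_classes):
        for i, row in enumerate(part_scores[branch_idx]):
            for j, s in enumerate(row):
                fused[cls][i][j] += alpha * s
    # Per-pixel argmax over all global classes gives the final label map.
    H, W = len(full_scores[0]), len(full_scores[0][0])
    return [[max(range(len(fused)), key=lambda c: fused[c][i][j])
             for j in range(W)] for i in range(H)]

# Toy example: 3 global classes on a 2x2 image; a head branch that covers
# global class 1 boosts it everywhere, so class 1 wins every pixel.
zeros = [[0.0, 0.0], [0.0, 0.0]]
full = [zeros, zeros, zeros]            # uniform scores before fusion
head = [[[5.0, 5.0], [5.0, 5.0]]]       # one head class, high confidence
print(fuse_cascade(full, head, part_classes=[1]))  # [[1, 1], [1, 1]]
```

In the paper's design, an analogous role is played by the head- and body-parsing sub-networks, which emphasize their own classes within the full human-parsing cascade.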
dcterms.bibliographicCitation: IEEE Access, 4 Nov. 2019, v. 7, p. 160101-160111
dcterms.isPartOf: IEEE Access
dc.description.validate: 202002 bcrc
Appears in Collections:Journal/Magazine Article
Files in This Item:
File: Zhou_Part-Based_Deep_Neural.pdf (2.11 MB, Adobe PDF)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.