Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/102769
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Fashion and Textiles | en_US |
| dc.contributor | Mainland Development Office | en_US |
| dc.creator | Zhou, Y | en_US |
| dc.creator | Li, R | en_US |
| dc.creator | Zhou, Y | en_US |
| dc.creator | Mok, PY | en_US |
| dc.date.accessioned | 2023-11-15T02:54:38Z | - |
| dc.date.available | 2023-11-15T02:54:38Z | - |
| dc.identifier.isbn | 978-989-8533-79-1 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/102769 | - |
| dc.language.iso | en | en_US |
| dc.rights | Posted with permission of the publisher. | en_US |
| dc.rights | This is a reprint from a paper published in the Proceedings of the IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2018 (part of MCCSIS 2018), https://www.iadisportal.org/digital-library/describing-clothing-in-human-images-a-parsing-pose-integrated-approach. | en_US |
| dc.rights | IADIS is available at http://www.iadis.org. | en_US |
| dc.subject | Clothing retrieval | en_US |
| dc.subject | Clothing recognition | en_US |
| dc.subject | Fine-grained classification | en_US |
| dc.subject | Pose estimation | en_US |
| dc.subject | Human parsing | en_US |
| dc.subject | Deep learning | en_US |
| dc.title | Describing clothing in human images : a parsing-pose integrated approach | en_US |
| dc.type | Conference Paper | en_US |
| dc.description.otherinformation | Multi Conference on Computer Science and Information Systems (MCCSIS 2018), Madrid, Spain, 17-20 July 2018 | en_US |
| dc.identifier.spage | 205 | en_US |
| dc.identifier.epage | 213 | en_US |
| dcterms.abstract | With the advent of information technology, digital product information grows exponentially. People are exposed to far too much information, and information overload can slow down, rather than speed up, a simple decision-making process such as searching for suitable clothing online. Traditional semantic-based product retrieval may not be effective due to human subjectivity and cognitive differences. In this paper, we propose a method that integrates state-of-the-art deep neural models in pose estimation, human parsing and category classification to recognise all clothing items in human images together with their fine-grained product category information. The proposed fine-grained clothing classification model can facilitate a wide range of applications such as the automatic annotation of clothing images. The effectiveness of the proposed method is validated through experiments on a real-world dataset. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | In K Blashki & Y Xiao (Eds.), Proceedings of the IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2018 (part of MCCSIS 2018), p. 205-213. Madrid : IADIS Press, 2018 | en_US |
| dcterms.issued | 2018 | - |
| dc.relation.ispartofbook | Proceedings of the IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2018 (part of MCCSIS 2018) | en_US |
| dc.description.validate | 202311 bcwh | en_US |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | ITC-0597 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The support from the Guangdong Provincial Department of Science and Technology and the Shenzhen Science and Technology Innovation Commission | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 13246166 | - |
| dc.description.oaCategory | Publisher permission | en_US |
| Appears in Collections: | Conference Paper | |
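
The abstract above describes a pipeline that combines pose estimation, human parsing and fine-grained category classification. Below is a minimal, hypothetical sketch of how such a pipeline could be wired together from off-the-shelf torchvision models. It is not the authors' implementation: the model choices, the placeholder category list, and the `describe` function are illustrative assumptions only (in particular, a dedicated human-parsing network and a classifier fine-tuned on clothing labels would be used in practice).

```python
# Illustrative sketch only -- not the method from the paper.
import torch
from PIL import Image
from torchvision import models, transforms

# Placeholder fine-grained clothing labels (assumed, for illustration).
CATEGORIES = ["t-shirt", "blouse", "jeans", "skirt", "coat"]

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Pose estimation: Keypoint R-CNN predicts COCO body keypoints per person.
pose_net = models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT").to(device).eval()

# 2) Parsing stand-in: a generic semantic segmentation model; a dedicated
#    human-parsing network would replace this in a real system.
seg_net = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").to(device).eval()

# 3) Category classifier: an ImageNet backbone whose head would be
#    fine-tuned on clothing categories (not done here).
cls_net = models.resnet50(weights="DEFAULT").to(device).eval()

to_tensor = transforms.ToTensor()
cls_prep = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(image_path: str) -> dict:
    """Run pose, parsing and classification on one image and collect the outputs."""
    img = Image.open(image_path).convert("RGB")
    x = to_tensor(img).to(device)

    keypoints = pose_net([x])[0]["keypoints"]            # body joints per detected person
    parse_mask = seg_net(x.unsqueeze(0))["out"].argmax(1)  # per-pixel class map
    logits = cls_net(cls_prep(img).unsqueeze(0).to(device))
    # Placeholder mapping: a real system would use a clothing-specific head.
    category = CATEGORIES[int(logits.argmax()) % len(CATEGORIES)]

    return {"keypoints": keypoints, "parse_mask": parse_mask, "category": category}
```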
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Zhou_Describing_Clothing_Human.pdf | | 1.38 MB | Adobe PDF | View/Open |
Page views: 186 (as of Nov 10, 2025)
Downloads: 101 (as of Nov 10, 2025)