Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117711
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Fashion and Textiles | - |
| dc.creator | Zhu, Shumin | - |
| dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14170 | - |
| dc.language.iso | English | - |
| dc.title | Aesthetic-aware intelligent fashion avatar | - |
| dc.type | Thesis | - |
| dcterms.abstract | With the global digital fashion market expanding rapidly, integrating human creativity and aesthetics into intelligent design systems capable of generating novel, trend-aligned designs remains a key challenge, despite progress in machine learning and generative models. Existing works are limited by: (1) ignoring complex hidden relationships between fashion attributes in recognition; (2) unsuitable datasets (high complexity, low resolution, and limited accessible full-body data) for attribute editing; (3) GAN inversion techniques suffering from information loss for rare or fine-grained attributes; (4) latent space manipulation methods being either confined to predefined attributes or limited by imperfect attribute disentanglement; (5) no support for full-body sketch-to-real product image translation. | - |
| dcterms.abstract | To address these gaps, this study develops aesthetically perceptive intelligent systems for fashion design assistance, with key contributions: (1) sRA-Net: a structured relationship-aware network that leverages multiple hidden attribute relationships to enhance fashion attribute recognition. (2) AFED: a dataset of 830K high-quality sketch and product fashion images for arbitrary fashion attribute editing. (3) Twin-Net: a GAN inversion framework balancing inversion and editing for high-fidelity fashion image inversion and subsequent attribute editing. (4) PairPCA: a few-shot latent manipulation method built on a pretrained GAN inversion framework for accurate fashion attribute editing. (5) FSRI: a system that converts full-body fashion sketches into real product images. | - |
| dcterms.abstract | These solutions advance fine-grained recognition, high-fidelity reconstruction, accurate attribute editing, and fashion sketch-to-product-image translation; AFED provides a robust dataset foundation. The work aims to accelerate garment design, reduce designer workload, and expand creativity in digital fashion. | - |
| dcterms.accessRights | open access | - |
| dcterms.educationLevel | Ph.D. | - |
| dcterms.extent | xiv, 126 pages : color illustrations | - |
| dcterms.issued | 2025 | - |
| dcterms.LCSH | Fashion design | - |
| dcterms.LCSH | Fashion drawing -- Data processing | - |
| dcterms.LCSH | Artificial intelligence | - |
| dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | - |
| Appears in Collections: | Thesis | |
Access: View full-text via https://theses.lib.polyu.edu.hk/handle/200/14170
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.