Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117711
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | -
dc.creator | Zhu, Shumin | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/14170 | -
dc.language.iso | English | -
dc.title | Aesthetic-aware intelligent fashion avatar | -
dc.type | Thesis | -
dcterms.abstract | With the global digital fashion market expanding rapidly, integrating human creativity and aesthetics into intelligent design systems capable of generating novel, trend-aligned designs remains a key challenge, despite progress in machine learning and generative models. Existing works are limited by: (1) ignoring complex hidden relationships between fashion attributes in recognition; (2) unsuitable datasets (high complexity, low resolution, and limited accessible full-body data) for attribute editing; (3) GAN inversion techniques suffering from information loss for rare or fine-grained attributes; (4) latent space manipulation methods being either confined to predefined attributes or limited by imperfect attribute disentanglement; (5) no support for full-body sketch-to-real product image translation. | -
dcterms.abstract | To address these gaps, this study develops aesthetically perceptive intelligent systems for fashion design assistance, with key contributions: (1) sRA-Net: a structured relationship-aware network that leverages multiple hidden attribute relationships to enhance fashion attribute recognition. (2) AFED: a dataset of 830K high-quality sketch and product fashion images for any fashion attribute editing. (3) Twin-Net: a GAN inversion framework balancing inversion and editing for high-fidelity fashion image inversion and subsequent attribute editing. (4) PairPCA: a few-shot latent manipulation method, built on a pretrained GAN inversion framework, for accurate fashion attribute editing. (5) FSRI: a system that converts full-body fashion sketches into real product images. | -
dcterms.abstract | These solutions advance fine-grained recognition, high-fidelity reconstruction, accurate attribute editing, and fashion sketch-to-product-image translation; AFED provides a robust dataset foundation. The work aims to accelerate garment design, reduce designer workload, and expand creativity in digital fashion. | -
dcterms.accessRights | open access | -
dcterms.educationLevel | Ph.D. | -
dcterms.extent | xiv, 126 pages : color illustrations | -
dcterms.issued | 2025 | -
dcterms.LCSH | Fashion design | -
dcterms.LCSH | Fashion drawing -- Data processing | -
dcterms.LCSH | Artificial intelligence | -
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | -
Appears in Collections: Thesis