Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114774
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | en_US
dc.contributor | Laboratory for Artificial Intelligence in Design (AiDLab) | en_US
dc.creator | Zhu, S | en_US
dc.creator | Zou, X | en_US
dc.creator | Yang, W | en_US
dc.creator | Wong, WK | en_US
dc.date.accessioned | 2025-08-25T08:04:53Z | -
dc.date.available | 2025-08-25T08:04:53Z | -
dc.identifier.issn | 0162-8828 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/114774 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Attribute Editing in Latent Space | en_US
dc.subject | Encoder-based GAN Inversion | en_US
dc.subject | Fashion Attribute Editing Dataset | en_US
dc.title | Any fashion attribute editing : dataset and pretrained models | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/TPAMI.2025.3581793 | en_US
dcterms.abstract | Fashion attribute editing is essential for combining the expertise of fashion designers with the potential of generative artificial intelligence. In this work, we focus on ‘any’ fashion attribute editing: 1) the ability to edit 78 fine-grained design attributes commonly observed in daily life; 2) the capability to modify the desired attributes while keeping all remaining components unchanged; and 3) the flexibility to apply further edits to an already edited image. To this end, we present the Any Fashion Attribute Editing (AFED) dataset, which includes 830K high-quality fashion images from the sketch and product domains, filling the gap for a large-scale, openly accessible, fine-grained dataset. We also propose Twin-Net, a twin encoder-decoder GAN inversion method that offers diverse and precise information for high-fidelity image reconstruction. This inversion model, trained on the new dataset, serves as a robust foundation for attribute editing. Additionally, we introduce PairsPCA to identify semantic directions in latent space, enabling accurate editing without manual supervision. Comprehensive experiments, including comparisons with ten state-of-the-art image inversion methods and four editing algorithms, demonstrate the effectiveness of our Twin-Net and editing algorithm. [An illustrative latent-editing sketch follows this record.] | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE transactions on pattern analysis and machine intelligence, Date of Publication: 20 June 2025, Early Access, https://dx.doi.org/10.1109/TPAMI.2025.3581793 | en_US
dcterms.isPartOf | IEEE transactions on pattern analysis and machine intelligence | en_US
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105009036227 | -
dc.identifier.eissn | 1939-3539 | en_US
dc.identifier.artn | 0b000064941554bb | en_US
dc.description.validate | 202508 bcwc | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000085/2025-07 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work is partially supported by the Laboratory for Artificial Intelligence in Design (Project Code: RP3-1), Innovation and Technology Fund, Hong Kong SAR. This work is also partially supported by a grant from the Research Grants Council of Hong Kong SAR (Project No. PolyU 25211424). | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
dc.relation.rdata | https://github.com/ArtmeScienceLab/AnyFashionAttributeEditing | en_US
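The abstract names an encoder-based GAN inversion model (Twin-Net) and a PairsPCA procedure for identifying semantic directions in latent space, but this record carries no algorithmic detail. As a hedged illustration only, the minimal numpy sketch below shows one common way such a pairs-based direction search can be set up: PCA (via SVD) over the difference vectors of latent-code pairs that differ in a single attribute, taking the leading component as the editing direction. All names here (find_direction, the 512-dimensional latent, the alpha step size, generator) are assumptions for illustration, not the paper's API.

```python
# Illustrative sketch only: a pairs-based PCA for latent editing directions.
# The actual PairsPCA algorithm is not specified in this record; names and
# dimensions below are hypothetical.
import numpy as np

def find_direction(latents_without: np.ndarray, latents_with: np.ndarray) -> np.ndarray:
    """Estimate a unit editing direction from paired latent codes.

    Row i of `latents_with` is the counterpart of row i of
    `latents_without`, differing only in one target attribute.
    """
    diffs = latents_with - latents_without        # (n_pairs, dim)
    # SVD of the *uncentered* differences: the first right-singular
    # vector captures the displacement shared across the pairs.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    direction = vt[0]
    # Orient the direction along the mean change so a positive step
    # adds the attribute rather than removing it.
    if direction @ diffs.mean(axis=0) < 0:
        direction = -direction
    return direction / np.linalg.norm(direction)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(100, 512))            # codes without the attribute
    shift = np.zeros(512)
    shift[:8] = 1.0                               # synthetic "attribute" offset
    with_attr = base + shift + rng.normal(scale=0.05, size=(100, 512))
    d = find_direction(base, with_attr)
    # Editing a code: move along the direction, then decode with the
    # pretrained GAN generator (not shown here):
    #   x_edited = generator(base[0] + alpha * d)
    alpha = 1.0
    w_edited = base[0] + alpha * d
    print(w_edited.shape)                         # (512,)
```

Under this reading, an edit is w' = w + alpha * d followed by decoding, and a follow-up edit simply re-encodes the edited image, which is where the reconstruction fidelity of an inversion model such as Twin-Net becomes critical.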
Appears in Collections: Journal/Magazine Article