Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/114774
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Fashion and Textiles | en_US |
| dc.contributor | Laboratory for Artificial Intelligence in Design (AiDLab) | en_US |
| dc.creator | Zhu, S | en_US |
| dc.creator | Zou, X | en_US |
| dc.creator | Yang, W | en_US |
| dc.creator | Wong, WK | en_US |
| dc.date.accessioned | 2025-08-25T08:04:53Z | - |
| dc.date.available | 2025-08-25T08:04:53Z | - |
| dc.identifier.issn | 0162-8828 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/114774 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.subject | Attribute Editing in Latent Space | en_US |
| dc.subject | Encoder-based GAN Inversion | en_US |
| dc.subject | Fashion Attribute Editing Dataset | en_US |
| dc.title | Any fashion attribute editing: dataset and pretrained models | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.doi | 10.1109/TPAMI.2025.3581793 | en_US |
| dcterms.abstract | Fashion attribute editing is essential for combining the expertise of fashion designers with the potential of generative artificial intelligence. In this work, we focus on ‘any’ fashion attribute editing: 1) the ability to edit 78 fine-grained design attributes commonly observed in daily life; 2) the capability to modify desired attributes while keeping the remaining components unchanged; and 3) the flexibility to continue editing an already edited image. To this end, we present the Any Fashion Attribute Editing (AFED) dataset, which includes 830K high-quality fashion images from sketch and product domains, filling the gap for a large-scale, openly accessible fine-grained dataset. We also propose Twin-Net, a twin encoder-decoder GAN inversion method that offers diverse and precise information for high-fidelity image reconstruction. This inversion model, trained on the new dataset, serves as a robust foundation for attribute editing. Additionally, we introduce PairsPCA to identify semantic directions in latent space, enabling accurate editing without manual supervision. Comprehensive experiments, including comparisons with ten state-of-the-art image inversion methods and four editing algorithms, demonstrate the effectiveness of our Twin-Net and editing algorithm. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | IEEE transactions on pattern analysis and machine intelligence, Date of Publication: 20 June 2025, Early Access, https://dx.doi.org/10.1109/TPAMI.2025.3581793 | en_US |
| dcterms.isPartOf | IEEE transactions on pattern analysis and machine intelligence | en_US |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-105009036227 | - |
| dc.identifier.eissn | 1939-3539 | en_US |
| dc.identifier.artn | 0b000064941554bb | en_US |
| dc.description.validate | 202508 bcwc | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000085/2025-07 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work is partially supported by the Laboratory for Artificial Intelligence in Design (Project Code: RP3-1), Innovation and Technology Fund, Hong Kong SAR. This work is also partially supported by a grant from the Research Grants Council of the Hong Kong SAR (Project No. PolyU/RGC Project PolyU 25211424). | en_US |
| dc.description.pubStatus | Early release | en_US |
| dc.date.embargo | 0000-00-00 (to be updated) | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| dc.relation.rdata | https://github.com/ArtmeScienceLab/AnyFashionAttributeEditing | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
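The abstract describes PairsPCA as a way to find semantic editing directions in a GAN's latent space without manual supervision. The paper's exact formulation is not reproduced in this record, so the sketch below is only an assumption of the general idea: given pairs of latent codes that differ in one attribute, an (uncentered) PCA over their difference vectors yields a dominant shared direction that can serve as an edit direction. All names here (`principal_edit_direction`, the latent dimension of 512) are illustrative, not taken from the paper.

```python
import numpy as np

def principal_edit_direction(w_pos, w_neg):
    """Hypothetical sketch of a PairsPCA-style step (not the paper's method).

    w_pos, w_neg: (n_pairs, latent_dim) arrays of paired latent codes with
    and without a target attribute. Uncentered PCA on their differences:
    the leading right-singular vector captures the dominant shared change.
    """
    diffs = w_pos - w_neg
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]  # unit-norm candidate editing direction

# Toy check: plant a known direction in synthetic pairs and recover it.
rng = np.random.default_rng(0)
direction = rng.standard_normal(512)
direction /= np.linalg.norm(direction)
w_neg = rng.standard_normal((100, 512))
w_pos = w_neg + 2.0 * direction + 0.1 * rng.standard_normal((100, 512))
d = principal_edit_direction(w_pos, w_neg)
print(abs(float(d @ direction)))  # close to 1: planted direction recovered
```

An edit would then move a latent code along the recovered direction, `w + alpha * d`, with `alpha` controlling edit strength; the sign of `d` is arbitrary (SVD determines it only up to sign), hence the absolute value in the check.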