Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116823
DC Field | Value | Language
dc.contributor | School of Design | -
dc.creator | Wang, P | -
dc.creator | Zhang, X | -
dc.creator | Zhou, Z | -
dc.creator | Childs, P | -
dc.creator | Lee, K | -
dc.creator | Kleinsmann, M | -
dc.creator | Wang, SJ | -
dc.date.accessioned | 2026-01-21T03:52:59Z | -
dc.date.available | 2026-01-21T03:52:59Z | -
dc.identifier.isbn | 979-8-4007-1348-4 | -
dc.identifier.uri | http://hdl.handle.net/10397/116823 | -
dc.description | 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, Nanjing, Jiangsu Province, China, December 1-2, 2024 | en_US
dc.language.iso | en | en_US
dc.publisher | The Association for Computing Machinery | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode). | en_US
dc.rights | © 2024 Copyright held by the owner/author(s). | en_US
dc.rights | The following publication Wang, P., Zhang, X., Zhou, Z., Childs, P., Lee, K., Kleinsmann, M., & Wang, S. J. (2025). Typeface generation through style descriptions with generative models. In Proceedings of the 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, Nanjing, Jiangsu Province, China, is available at https://doi.org/10.1145/3703619.3706043. | en_US
dc.subject | Artificial intelligence | en_US
dc.subject | Computer vision | en_US
dc.subject | Generative adversarial networks | en_US
dc.subject | Typeface design | en_US
dc.subject | Typeface generation | en_US
dc.title | Typeface generation through style descriptions with generative models | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | 10.1145/3703619.3706043 | -
dcterms.abstract | Typeface design plays a vital role in graphic and communication design. Different typefaces suit different contexts and can convey different emotions and messages. Typeface design still relies on skilled designers to create unique styles for specific needs. Recently, generative adversarial networks (GANs) have been applied to typeface generation, but these methods face challenges because typeface generation datasets require extensive annotation and are difficult to obtain. Furthermore, machine-generated typefaces often fail to meet designers’ specific requirements, as dataset annotations limit the diversity of the generated typefaces. In response to these limitations in current typeface generation models, we propose an alternative approach to the task. Instead of relying on dataset-provided annotations to define the typeface style vector, we introduce a transformer-based language model to learn the mapping between a typeface style description and the corresponding style vector. We evaluated the proposed model using both existing and newly created style descriptions. Results indicate that the model can generate high-quality, patent-free typefaces from the style descriptions provided by designers. The code is available at https://github.com/tqxg2018/Description2Typeface | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings VRCAI 2024: The 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, December 1-2, 2024, Nanjing, China, 12. New York, New York: The Association for Computing Machinery, 2024 | -
dcterms.issued | 2024 | -
dc.identifier.scopus | 2-s2.0-85218001423 | -
dc.relation.ispartofbook | Proceedings VRCAI 2024: The 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, December 1-2, 2024, Nanjing, China, 12 | -
dc.publisher.place | New York, New York | en_US
dc.identifier.artn | 12 | -
dc.description.validate | 202601 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The project was funded by the University's Research Centre for Future (Caring) Mobility, the Hong Kong Polytechnic University (P-ID: P0042701); the Collaborative Research Project with World-leading Research Groups (P-ID: P0039528); and the Smart Traffic Fund (P-ID: P0045764). The research presented in this article was partially funded by grants from the Hong Kong Polytechnic University [Project No. P0042736]. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
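
As an editorial illustration of the approach summarized in the abstract, the sketch below shows how a free-text style description might be mapped to a style vector that then conditions a GAN glyph generator. This is a minimal sketch under stated assumptions, not the authors' implementation: the encoder choice (bert-base-uncased), the [CLS] pooling, the 128-dimensional style vector, and the generator/char_class names are all hypothetical; the actual code is in the repository linked in the abstract.

# Minimal sketch (assumptions noted above), not the authors' code:
# a pretrained transformer encodes a typeface style description, and a
# linear head projects the pooled embedding to a GAN-sized style vector.
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class DescriptionToStyle(nn.Module):
    def __init__(self, style_dim=128, encoder_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Project the encoder's hidden size to the style-vector dimension.
        self.proj = nn.Linear(self.encoder.config.hidden_size, style_dim)

    def forward(self, descriptions):
        # Tokenize a batch of free-text style descriptions.
        toks = self.tokenizer(descriptions, return_tensors="pt",
                              padding=True, truncation=True)
        # Pool with the [CLS] token (an assumption; other poolings work).
        pooled = self.encoder(**toks).last_hidden_state[:, 0]
        return self.proj(pooled)

# Hypothetical usage: the style vector conditions a glyph generator.
# mapper = DescriptionToStyle()
# z_style = mapper(["a bold geometric sans serif with rounded terminals"])
# glyph = generator(z_style, char_class)  # generator/char_class are placeholders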
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
3703619.3706043.pdf | - | 26.61 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record