Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/116823
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | School of Design | - |
| dc.creator | Wang, P | - |
| dc.creator | Zhang, X | - |
| dc.creator | Zhou, Z | - |
| dc.creator | Childs, P | - |
| dc.creator | Lee, K | - |
| dc.creator | Kleinsmann, M | - |
| dc.creator | Wang, SJ | - |
| dc.date.accessioned | 2026-01-21T03:52:59Z | - |
| dc.date.available | 2026-01-21T03:52:59Z | - |
| dc.identifier.isbn | 979-8-4007-1348-4 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/116823 | - |
| dc.description | 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, Nanjing, China, December 1-2, 2024 | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | The Association for Computing Machinery | en_US |
| dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode). | en_US |
| dc.rights | ©2024 Copyright held by the owner/author(s). | en_US |
| dc.rights | The following publication Wang, P., Zhang, X., Zhou, Z., Childs, P., Lee, K., Kleinsmann, M., & Wang, S. J. (2025). Typeface generation through style descriptions with generative models. In Proceedings of the 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, Nanjing, China, is available at https://doi.org/10.1145/3703619.3706043. | en_US |
| dc.subject | Artificial intelligence | en_US |
| dc.subject | Computer vision | en_US |
| dc.subject | Generative adversarial networks | en_US |
| dc.subject | Typeface design | en_US |
| dc.subject | Typeface generation | en_US |
| dc.title | Typeface generation through style descriptions with generative models | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.doi | 10.1145/3703619.3706043 | - |
| dcterms.abstract | Typeface design plays a vital role in graphic and communication design: different typefaces suit different contexts and convey different emotions and messages. However, typeface design still relies on skilled designers to create unique styles for specific needs. Recently, generative adversarial networks (GANs) have been applied to typeface generation, but these methods are constrained by the heavy annotation requirements of typeface datasets, which are difficult to obtain. Furthermore, machine-generated typefaces often fail to meet designers’ specific requirements, as dataset annotations limit the diversity of the generated typefaces. In response to these limitations, we propose an alternative approach: instead of relying on dataset-provided annotations to define the typeface style vector, we introduce a transformer-based language model that learns the mapping between a typeface style description and the corresponding style vector (a minimal illustrative sketch of this mapping appears after the record table below). We evaluated the proposed model using both existing and newly created style descriptions. Results indicate that the model can generate high-quality, patent-free typefaces from the style descriptions designers provide. The code is available at: https://github.com/tqxg2018/Description2Typeface | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | In Proceedings VRCAI 2024: The 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, December 1-2, 2024, Nanjing, China, 12. New York, New York: The Association for Computing Machinery, 2024 | - |
| dcterms.issued | 2024 | - |
| dc.identifier.scopus | 2-s2.0-85218001423 | - |
| dc.relation.ispartofbook | Proceedings VRCAI 2024: The 19th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, December 1-2, 2024, Nanjing, China, 12 | - |
| dc.publisher.place | New York, New York | en_US |
| dc.identifier.artn | 12 | - |
| dc.description.validate | 202601 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | The project was funded by the University's Research Centre for Future (Caring) Mobility, the Hong Kong Polytechnic University (P-ID: P0042701); the Collaborative Research Project with World-leading Research Groups (P-ID: P0039528); and the Smart Traffic Fund (P-ID: P0045764). The research presented in this article was partially funded by grants from the Hong Kong Polytechnic University [Project No. P0042736]. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Conference Paper
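For readers skimming the record, the core idea in the abstract (a transformer-based text encoder that maps a free-form style description to a style vector, which then conditions a GAN-style glyph generator in place of dataset annotations) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption: the module names, dimensions, stand-in tokenizer, and toy generator are not the authors' released implementation, which is linked in the abstract (https://github.com/tqxg2018/Description2Typeface).

```python
# Hypothetical sketch of description -> style vector -> glyph conditioning.
# All architecture choices here are assumptions for illustration only.
import torch
import torch.nn as nn

STYLE_DIM = 128       # assumed size of the typeface style vector
VOCAB_SIZE = 10_000   # assumed tokenizer vocabulary size
MAX_TOKENS = 32       # assumed maximum description length

class DescriptionEncoder(nn.Module):
    """Maps a tokenized style description to a fixed-size style vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 256)
        self.pos = nn.Parameter(torch.zeros(1, MAX_TOKENS, 256))
        layer = nn.TransformerEncoderLayer(d_model=256, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_style = nn.Linear(256, STYLE_DIM)

    def forward(self, token_ids):                  # (B, T) int64 token ids
        x = self.embed(token_ids) + self.pos[:, :token_ids.size(1)]
        x = self.encoder(x)                        # (B, T, 256)
        return self.to_style(x.mean(dim=1))        # mean-pool -> (B, STYLE_DIM)

class GlyphGenerator(nn.Module):
    """Toy conditional generator: style vector + noise -> 64x64 glyph image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STYLE_DIM + 64, 4 * 4 * 256), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),     # 64x64
        )

    def forward(self, style, noise):
        return self.net(torch.cat([style, noise], dim=1))

# Usage: "a rounded, friendly sans-serif" -> tokens -> style vector -> glyph.
encoder, generator = DescriptionEncoder(), GlyphGenerator()
tokens = torch.randint(0, VOCAB_SIZE, (1, 12))  # stand-in for a real tokenizer
style = encoder(tokens)
glyph = generator(style, torch.randn(1, 64))
print(glyph.shape)                              # torch.Size([1, 1, 64, 64])
```

Mean-pooling the encoder outputs into a single vector mirrors the abstract's notion of one style vector per description; the paper itself may use a different pooling or conditioning scheme.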
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 3703619.3706043.pdf | | 26.61 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.