Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116818
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Shen, J | -
dc.creator | Hu, K | -
dc.creator | Bao, W | -
dc.creator | Chen, CW | -
dc.creator | Wang, Z | -
dc.date.accessioned | 2026-01-21T03:52:54Z | -
dc.date.available | 2026-01-21T03:52:54Z | -
dc.identifier.isbn | 979-8-4007-0686-8 | -
dc.identifier.uri | http://hdl.handle.net/10397/116818 | -
dc.description | 32nd ACM International Conference on Multimedia, Melbourne VIC, Australia, 28 October 2024 - 1 November 2024 | en_US
dc.language.iso | en | en_US
dc.publisher | The Association for Computing Machinery | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | © 2024 Copyright held by the owner/author(s). | en_US
dc.rights | The following publication Shen, J., Hu, K., Bao, W., Chen, C. W., & Wang, Z. (2024). Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne VIC, Australia is available at https://doi.org/10.1145/3664647.3681146. | en_US
dc.subject | Dataset std-12k | en_US
dc.subject | Hand-drawn traditional animation | en_US
dc.subject | Multi-level correspondence | en_US
dc.subject | Multi-stream transformer | en_US
dc.subject | Sketch interpolation | en_US
dc.title | Bridging the gap : sketch-aware interpolation network for high-quality animation sketch inbetweening | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 10287 | -
dc.identifier.epage | 10295 | -
dc.identifier.doi | 10.1145/3664647.3681146 | -
dcterms.abstract | The hand-drawn 2D animation workflow typically begins with the creation of sketch keyframes. Inbetweens are then crafted manually for smoothness, a labor-intensive process that makes automatic animation sketch interpolation highly appealing. Yet common frame interpolation methods are generally hindered by two key issues: 1) limited texture and colour details in sketches, and 2) exaggerated alterations between two sketch keyframes. To overcome these issues, we propose a novel deep learning method, the Sketch-Aware Interpolation Network (SAIN). This approach incorporates multi-level guidance that formulates region-level correspondence, stroke-level correspondence and pixel-level dynamics. A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns using these multi-level guides through the integration of self-/cross-attention mechanisms. Additionally, to facilitate future research on animation sketch inbetweening, we constructed a large-scale dataset, STD-12K, comprising 30 sketch animation series in diverse artistic styles. Comprehensive experiments on this dataset convincingly show that the proposed SAIN surpasses state-of-the-art interpolation methods. Our code and dataset are available at https://github.com/none-master/SAIN. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In MM ’24: Proceedings of the 32nd ACM International Conference on Multimedia, p. 10287-10295. New York, NY: The Association for Computing Machinery, 2024 | -
dcterms.issued | 2024 | -
dc.identifier.scopus | 2-s2.0-85209823642 | -
dc.relation.ispartofbook | MM ’24: Proceedings of the 32nd ACM International Conference on Multimedia | -
dc.relation.conference | ACM International Conference on Multimedia [MM] | -
dc.publisher.place | New York, NY | en_US
dc.description.validate | 202601 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
3664647.3681146.pdf |  | 17.74 MB | Adobe PDF
Open Access Information
Status | open access
File Version | Version of Record
