Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113740
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Huang, Z | -
dc.creator | Chan, YL | -
dc.creator | Kwong, NW | -
dc.creator | Tsang, SH | -
dc.creator | Lam, KM | -
dc.creator | Ling, WK | -
dc.date.accessioned | 2025-06-19T06:25:03Z | -
dc.date.available | 2025-06-19T06:25:03Z | -
dc.identifier.issn | 1051-8215 | -
dc.identifier.uri | http://hdl.handle.net/10397/113740 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Deep learning | en_US
dc.subject | Quality enhancement | en_US
dc.subject | Screen content video | en_US
dc.title | Long short-term fusion by multi-scale distillation for screen content video quality enhancement | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/TCSVT.2025.3544314 | -
dcterms.abstract | Unlike natural videos, where artifacts are distributed evenly, the artifacts of compressed screen content videos occur mainly in edge areas. Moreover, these videos often exhibit abrupt scene switches, resulting in noticeable distortions in video reconstruction. Existing multiple-frame models that use a fixed range of neighbor frames struggle to enhance frames during scene switches and are inefficient at reconstructing high-frequency details. To address these limitations, we propose a novel method that effectively handles scene switches and reconstructs high-frequency information. In the feature extraction part, we develop long-term and short-term feature extraction streams: the long-term stream learns contextual information, while the short-term stream extracts more closely related information from a shorter input to assist the long-term stream in handling fast motion and scene switches. To further enhance frame quality during scene switches, we incorporate a similarity-based neighbor frame selector before feeding frames into the short-term stream. This selector identifies relevant neighbor frames, aiding in the efficient handling of scene switches. To dynamically fuse the short-term and long-term features, the multi-scale feature distillation adaptively recalibrates channel-wise feature responses to achieve effective feature distillation. In the reconstruction part, a high-frequency reconstruction block is proposed to guide the model in restoring high-frequency components. Experimental results demonstrate the significant advancements achieved by our proposed Long Short-term Fusion by Multi-Scale Distillation (LSFMD) method in enhancing the quality of compressed screen content videos, surpassing the current state-of-the-art methods. | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE transactions on circuits and systems for video technology, Date of Publication: 21 February 2025, Early Access, https://doi.org/10.1109/TCSVT.2025.3544314 | -
dcterms.isPartOf | IEEE transactions on circuits and systems for video technology | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-85218793260 | -
dc.identifier.eissn | 1558-2205 | -
dc.description.validate | 202506 bcch | -
dc.identifier.FolderNumber | a3728b | en_US
dc.identifier.SubFormID | 50888 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Innovation and Technology Fund - Partnership Research Programme (ITF-PRP) under PRP/036/21FX | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
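
The dcterms.abstract above describes a similarity-based neighbor frame selector that picks the most relevant neighbor frames to feed the short-term stream. The paper's actual selection criterion is not reproduced in this record, so the following PyTorch sketch is illustrative only: the function name select_neighbor_frames, the cosine-similarity score, the window size, and the top-k policy are assumptions of this sketch, not the authors' published implementation.

import torch
import torch.nn.functional as F

def select_neighbor_frames(frames: torch.Tensor, target_idx: int, k: int = 2) -> torch.Tensor:
    """Return the k neighbor frames most similar to the target frame.

    frames: (T, C, H, W) tensor of decoded frames in a temporal window.
    The similarity measure (cosine over flattened pixels) is an assumption;
    any scene-sensitive score would serve the same role.
    """
    target = frames[target_idx].flatten().float()
    scores = []
    for t in range(frames.shape[0]):
        if t == target_idx:
            continue  # the target frame itself is not a candidate
        sim = F.cosine_similarity(target, frames[t].flatten().float(), dim=0)
        scores.append((sim.item(), t))
    # Frames on the other side of a scene switch score low and are dropped,
    # which is the behaviour the abstract attributes to the selector.
    top = sorted(scores, reverse=True)[:k]
    return frames[[t for _, t in top]]

# Example: a hypothetical 7-frame window with the target in the middle.
frames = torch.rand(7, 3, 64, 64)
neighbors = select_neighbor_frames(frames, target_idx=3, k=2)  # shape (2, 3, 64, 64)
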
Appears in Collections: Journal/Magazine Article