Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113740
Title: Long short-term fusion by multi-scale distillation for screen content video quality enhancement
Authors: Huang, Z 
Chan, YL 
Kwong, NW 
Tsang, SH 
Lam, KM 
Ling, WK
Issue Date: 2025
Source: IEEE Transactions on Circuits and Systems for Video Technology, Date of Publication: 21 February 2025, Early Access, https://doi.org/10.1109/TCSVT.2025.3544314
Abstract: Unlike natural videos, where artifacts are distributed evenly, the artifacts of compressed screen content videos mainly occur in edge areas. These videos also often exhibit abrupt scene switches, which cause noticeable distortions in video reconstruction. Existing multi-frame models that use a fixed range of neighbor frames struggle to enhance frames around scene switches and are inefficient at reconstructing high-frequency details. To address these limitations, we propose a novel method that effectively handles scene switches and reconstructs high-frequency information. In the feature extraction part, we develop long-term and short-term feature extraction streams: the long-term stream learns contextual information, while the short-term stream extracts more relevant information from a shorter input to help the long-term stream handle fast motion and scene switches. To further enhance frame quality during scene switches, we place a similarity-based neighbor frame selector before the short-term stream; this selector identifies relevant neighbor frames, aiding the efficient handling of scene switches. To dynamically fuse the short-term and long-term features, multi-scale feature distillation adaptively recalibrates channel-wise feature responses to achieve effective feature distillation. In the reconstruction part, a high-frequency reconstruction block is proposed to guide the model in restoring high-frequency components. Experimental results demonstrate that our proposed Long Short-term Fusion by Multi-Scale Distillation (LSFMD) method significantly enhances the quality of compressed screen content videos, surpassing current state-of-the-art methods.
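The abstract's similarity-based neighbor frame selection can be illustrated with a minimal sketch. The function name, the use of negative mean squared error as the similarity metric, and the top-k selection rule below are all illustrative assumptions, not details taken from the paper; the authors' actual metric and selection criterion may differ.

```python
import numpy as np

def select_neighbor_frames(target, candidates, k=2):
    """Rank candidate neighbor frames by similarity to the target frame
    and return the indices of the k most similar ones, in index order.

    Similarity here is negative mean squared error (an assumption for
    illustration). Frames on the other side of a scene switch differ
    strongly from the target, score low, and are excluded.
    """
    sims = [-np.mean((target - c) ** 2) for c in candidates]
    order = np.argsort(sims)[::-1]  # most similar first
    return sorted(order[:k].tolist())

# Usage: a candidate from a different scene (all zeros) is rejected.
target = np.ones((4, 4))
cands = [np.ones((4, 4)), np.zeros((4, 4)), np.full((4, 4), 0.9)]
print(select_neighbor_frames(target, cands, k=2))  # → [0, 2]
```

Feeding only the selected frames to the short-term stream keeps its input relevant even when the fixed temporal window straddles a scene switch.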
Keywords: Deep learning
Quality enhancement
Screen content video
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2025.3544314
Appears in Collections: Conference Paper

Open Access Information
Status: Embargoed access
Embargo End Date: 0000-00-00 (to be updated)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.