Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113740
Title: Long short-term fusion by multi-scale distillation for screen content video quality enhancement
Authors: Huang, Z 
Chan, YL 
Kwong, NW 
Tsang, SH 
Lam, KM 
Ling, WK
Issue Date: Aug-2025
Source: IEEE transactions on circuits and systems for video technology, Aug. 2025, v. 35, no. 8, p. 7762-7777
Abstract: Different from natural videos, where artifacts are distributed evenly, the artifacts in compressed screen content videos occur mainly in edge areas. In addition, these videos often exhibit abrupt scene switches, resulting in noticeable distortions in video reconstruction. Existing multiple-frame models that use a fixed range of neighbor frames struggle to enhance frames during scene switches and are inefficient at reconstructing high-frequency details. To address these limitations, we propose a novel method that effectively handles scene switches and reconstructs high-frequency information. In the feature extraction part, we develop long-term and short-term feature extraction streams: the long-term stream learns contextual information, while the short-term stream extracts more relevant information from a shorter input to assist the long-term stream in handling fast motion and scene switches. To further enhance frame quality during scene switches, we incorporate a similarity-based neighbor frame selector before feeding frames into the short-term stream; this selector identifies relevant neighbor frames, aiding the efficient handling of scene switches. To dynamically fuse the short-term and long-term features, multi-scale feature distillation adaptively recalibrates channel-wise feature responses to achieve effective feature distillation. In the reconstruction part, a high-frequency reconstruction block is proposed to guide the model in restoring high-frequency components. Experimental results demonstrate that our proposed Long Short-term Fusion by Multi-Scale Distillation (LSFMD) method significantly enhances the quality of compressed screen content videos, surpassing current state-of-the-art methods.
Keywords: Deep learning
Quality enhancement
Screen content video
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on circuits and systems for video technology 
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2025.3544314
Rights: © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication Z. Huang, Y. -L. Chan, N. -W. Kwong, S. -H. Tsang, K. -M. Lam and W. -K. Ling, "Long Short-Term Fusion by Multi-Scale Distillation for Screen Content Video Quality Enhancement," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 8, pp. 7762-7777, Aug. 2025 is available at https://doi.org/10.1109/TCSVT.2025.3544314.
Appears in Collections: Conference Paper

Files in This Item:
File: Huang_Long_Short_Term.pdf
Description: Pre-Published version
Size: 6.3 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

SCOPUS Citations: 1 (as of Oct 17, 2025)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.