Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113739
Title: Deep learning approach for no-reference screen content video quality assessment
Authors: Kwong, NW 
Chan, YL 
Tsang, SH 
Huang, Z 
Lam, KM 
Issue Date: Jun-2024
Source: IEEE Transactions on Broadcasting, June 2024, v. 70, no. 2, p. 555-569
Abstract: Screen content video (SCV) has drawn unprecedented attention during the COVID-19 period and has evolved from a niche into a mainstream medium due to the recent proliferation of remote offices, online meetings, shared-screen collaboration, and live game streaming. Quality assessment for screen content media is therefore in high demand to maintain service quality. Although many practical natural scene video quality assessment methods have been proposed and achieve promising results, they cannot be applied directly to the screen content video quality assessment (SCVQA) task, since the content characteristics of SCV differ substantially from those of natural scene video. Moreover, only one no-reference SCVQA (NR-SCVQA) method, which requires handcrafted features, has been proposed in the literature. We therefore propose the first deep learning approach explicitly designed for NR-SCVQA. First, a multi-channel convolutional neural network (CNN) model extracts the spatial quality features of pictorial and textual regions separately. Since no human-annotated quality score is available for each screen content frame (SCF), the CNN model is pre-trained in a multi-task self-supervised fashion to learn the spatial quality feature representation of an SCF. Second, we propose a time-distributed CNN transformer model (TCNNT) that further processes the spatial quality feature representations of all SCFs in an SCV and learns spatial and temporal features simultaneously, so that high-level spatiotemporal features of the SCV can be extracted and used to assess the quality of the whole video. Experimental results demonstrate the robustness and validity of our model, whose predictions correlate closely with human perception.
Keywords: Human visual experience
Multi-channel convolutional neural network
Multi-task learning
No-reference video quality assessment
Screen content video quality assessment
Self-supervised learning
Spatiotemporal features
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Transactions on Broadcasting
ISSN: 0018-9316
EISSN: 1557-9611
DOI: 10.1109/TBC.2024.3374042
Rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication N. -W. Kwong, Y. -L. Chan, S. -H. Tsang, Z. Huang and K. -M. Lam, "Deep Learning Approach for No-Reference Screen Content Video Quality Assessment," in IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 555-569, June 2024 is available at https://doi.org/10.1109/TBC.2024.3374042.
Appears in Collections: Conference Paper

Files in This Item:
File: Kwong_Deep_Learning_Approach.pdf
Description: Pre-Published version
Size: 1.52 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Final Accepted Manuscript


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.