Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113739
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electrical and Electronic Engineering | - |
dc.creator | Kwong, NW | - |
dc.creator | Chan, YL | - |
dc.creator | Tsang, SH | - |
dc.creator | Huang, Z | - |
dc.creator | Lam, KM | - |
dc.date.accessioned | 2025-06-19T06:25:02Z | - |
dc.date.available | 2025-06-19T06:25:02Z | - |
dc.identifier.issn | 0018-9316 | - |
dc.identifier.uri | http://hdl.handle.net/10397/113739 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.rights | The following publication N. -W. Kwong, Y. -L. Chan, S. -H. Tsang, Z. Huang and K. -M. Lam, "Deep Learning Approach for No-Reference Screen Content Video Quality Assessment," in IEEE Transactions on Broadcasting, vol. 70, no. 2, pp. 555-569, June 2024 is available at https://doi.org/10.1109/TBC.2024.3374042. | en_US |
dc.subject | Human visual experience | en_US |
dc.subject | Multi-channel convolutional neural network | en_US |
dc.subject | Multi-task learning | en_US |
dc.subject | No reference video quality assessment | en_US |
dc.subject | Screen content video quality assessment | en_US |
dc.subject | Self-supervised learning | en_US
dc.subject | Spatiotemporal features | en_US |
dc.title | Deep learning approach for no-reference screen content video quality assessment | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 555 | - |
dc.identifier.epage | 569 | - |
dc.identifier.volume | 70 | - |
dc.identifier.issue | 2 | - |
dc.identifier.doi | 10.1109/TBC.2024.3374042 | - |
dcterms.abstract | Screen content video (SCV) has drawn more attention than ever during the COVID-19 period and has evolved from a niche into a mainstream medium due to the recent proliferation of remote offices, online meetings, shared-screen collaboration, and game live streaming. Consequently, quality assessment for screen content media is in high demand to maintain service quality. Although many practical natural scene video quality assessment methods have been proposed and have achieved promising results, these methods cannot be applied directly to the screen content video quality assessment (SCVQA) task, since the content characteristics of SCV differ substantially from those of natural scene video. Moreover, only one no-reference SCVQA (NR-SCVQA) method, which relies on handcrafted features, has been proposed in the literature. Therefore, we propose the first deep learning approach explicitly designed for NR-SCVQA. First, a multi-channel convolutional neural network (CNN) model is used to extract spatial quality features of pictorial and textual regions separately. Since no human-annotated quality score is available for each screen content frame (SCF), the CNN model is pre-trained in a multi-task self-supervised fashion to extract a spatial quality feature representation of each SCF. Second, we propose a time-distributed CNN-Transformer model (TCNNT) that further processes the spatial quality feature representations of all SCFs in an SCV and learns spatial and temporal features simultaneously, so that high-level spatiotemporal features of the SCV can be extracted and used to assess the quality of the whole SCV. Experimental results demonstrate the robustness and validity of our model, whose predictions correlate closely with human perception. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | IEEE transactions on broadcasting, June 2024, v. 70, no. 2, p. 555-569 | - |
dcterms.isPartOf | IEEE transactions on broadcasting | - |
dcterms.issued | 2024-06 | - |
dc.identifier.scopus | 2-s2.0-85189167228 | - |
dc.identifier.eissn | 1557-9611 | - |
dc.description.validate | 202506 bcch | - |
dc.description.oa | Accepted Manuscript | en_US |
dc.identifier.FolderNumber | a3728b | en_US |
dc.identifier.SubFormID | 50879 | en_US |
dc.description.fundingSource | RGC | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Innovation and Technology Fund - Partnership Research Programme (ITF-PRP) under PRP/036/21FX | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Kwong_Deep_Learning_Approach.pdf | Pre-Published version | 1.52 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
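The abstract above describes a two-stage pipeline: a per-frame CNN extracts spatial quality features, and a time-distributed CNN-Transformer (TCNNT) aggregates them over time into a single quality score for the video. The sketch below is a minimal, hypothetical PyTorch illustration of that general pattern only; the module names, layer sizes, pooling strategy, and regression head are assumptions made for illustration and do not reproduce the authors' model, its separate pictorial/textual channels, or its multi-task self-supervised pre-training.

```python
# Illustrative sketch (not the authors' code): a time-distributed per-frame CNN
# followed by a transformer encoder that regresses one quality score per video.
# All module names and layer sizes here are assumptions for illustration only.
import torch
import torch.nn as nn


class FrameFeatureCNN(nn.Module):
    """Small per-frame CNN standing in for a pre-trained spatial feature extractor."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                    # x: (B*T, 3, H, W)
        h = self.backbone(x).flatten(1)      # (B*T, 64)
        return self.proj(h)                  # (B*T, feat_dim)


class VideoQualityRegressor(nn.Module):
    """Applies the frame CNN time-distributed, then a transformer over frame features."""

    def __init__(self, feat_dim: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.frame_cnn = FrameFeatureCNN(feat_dim)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, video):                # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        # Time-distributed application: fold frames into the batch dimension.
        feats = self.frame_cnn(video.flatten(0, 1)).view(b, t, -1)  # (B, T, D)
        feats = self.temporal(feats)                                # (B, T, D)
        # Pool over time and regress a single quality score per video.
        return self.head(feats.mean(dim=1)).squeeze(-1)             # (B,)


if __name__ == "__main__":
    model = VideoQualityRegressor()
    clip = torch.randn(2, 8, 3, 64, 64)      # 2 videos, 8 frames each
    print(model(clip).shape)                  # torch.Size([2])
```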