Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99571
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Kwong, NW | en_US
dc.creator | Chan, YL | en_US
dc.creator | Tsang, SH | en_US
dc.creator | Lun, DPK | en_US
dc.date.accessioned | 2023-07-14T02:50:21Z | -
dc.date.available | 2023-07-14T02:50:21Z | -
dc.identifier.uri | http://hdl.handle.net/10397/99571 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Kwong, Ngai-Wing; Chan, Yui-Lam; Tsang, Sik-Ho; Lun, Daniel Pak-Kong (2023). Quality Feature Learning via Multi-Channel CNN and GRU for No-Reference Video Quality Assessment. IEEE Access, 11, 28060-28075 is available at https://doi.org/10.1109/ACCESS.2023.3259101. | en_US
dc.subject | Fine-tuning strategy | en_US
dc.subject | Gated recurrent unit | en_US
dc.subject | Human visual perception | en_US
dc.subject | Motion-aware information | en_US
dc.subject | Multi-channel convolutional neural network | en_US
dc.subject | No reference video quality assessment | en_US
dc.subject | Self-supervised learning | en_US
dc.subject | Semi-supervised learning | en_US
dc.title | Quality feature learning via multi-channel CNN and GRU for no-reference video quality assessment | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 28060 | en_US
dc.identifier.epage | 28075 | en_US
dc.identifier.volume | 11 | en_US
dc.identifier.doi | 10.1109/ACCESS.2023.3259101 | en_US
dcterms.abstract | Nowadays, video quality assessment (VQA) plays a vital role in video-related industries, predicting human-perceived video quality to maintain quality of service. Although many deep neural network-based VQA methods have been proposed, their robustness and performance are limited by the small scale of available human-labeled data. Recently, some transfer-learning-based methods and models pre-trained in other domains have been adopted in VQA to compensate for the lack of large training sets. However, they introduce a domain gap between the source and target domains, which yields sub-optimal feature representations for VQA tasks and degrades accuracy. Therefore, in this paper, we propose quality feature learning via a multi-channel convolutional neural network (CNN) with a gated recurrent unit (GRU), taking into account both motion-aware information and human visual perception (HVP) characteristics, to address this issue for no-reference VQA. First, inspired by self-supervised learning (SSL), the multi-channel CNN is pre-trained on the image quality assessment (IQA) domain without human annotation labels. Then, semi-supervised learning is applied on top of the pre-trained multi-channel CNN to fine-tune the model, transferring it from the IQA domain to VQA while incorporating motion-aware information for better frame-level quality feature representation. After that, several HVP features are extracted and, together with the frame-level quality feature representation, fed into the GRU model to obtain the final predicted video quality. Finally, experimental results demonstrate the robustness and validity of our model, which outperforms state-of-the-art approaches and correlates closely with human perception. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE access, 2023, v. 11, p. 28060-28075 | en_US
dcterms.isPartOf | IEEE access | en_US
dcterms.issued | 2023 | -
dc.identifier.scopus | 2-s2.0-85151317806 | -
dc.identifier.eissn | 2169-3536 | en_US
dc.description.validate | 202307 bcww | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a2262 | -
dc.identifier.SubFormID | 47261 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
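The abstract describes a pipeline in which per-frame quality features from a pre-trained multi-channel CNN are pooled over time by a GRU to produce a single video quality score. As a minimal illustrative sketch only (not the authors' implementation), the GRU recurrence over frame-feature vectors can be written with NumPy; all dimensions, random weights, and the linear readout below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: gate the previous state h with the current input x."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand

# Hypothetical sizes: 8-dim per-frame quality features, 4-dim hidden state.
d_feat, d_hid, n_frames = 8, 4, 5
Wz, Wr, Wh = (0.1 * rng.standard_normal((d_hid, d_feat)) for _ in range(3))
Uz, Ur, Uh = (0.1 * rng.standard_normal((d_hid, d_hid)) for _ in range(3))

# Stand-in for the frame-level features a pre-trained CNN would emit.
frame_feats = rng.standard_normal((n_frames, d_feat))

h = np.zeros(d_hid)
for x in frame_feats:          # temporal pooling across frames
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)

# A linear readout maps the final temporal state to one quality score.
w_out = 0.1 * rng.standard_normal(d_hid)
score = float(w_out @ h)
print(score)
```

The gating is what makes a GRU suitable here: the update gate decides how much of each frame's quality evidence to carry forward, which loosely mirrors how temporal memory effects weight human judgments of video quality.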
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Kwong_Quality_Feature_Learning.pdf | | 2.55 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 79 (as of May 11, 2025)
Downloads: 55 (as of May 11, 2025)
SCOPUS™ Citations: 6 (as of Jun 12, 2025)
Web of Science™ Citations: 6 (as of Jun 5, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.