Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99575
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Kwong, NW | en_US
dc.creator | Chan, YL | en_US
dc.creator | Tsang, SH | en_US
dc.creator | Lun, DPK | en_US
dc.date.accessioned | 2023-07-14T02:50:24Z | -
dc.date.available | 2023-07-14T02:50:24Z | -
dc.identifier.uri | http://hdl.handle.net/10397/99575 | -
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.rights | © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication N. -W. Kwong, Y. -L. Chan, S. -H. Tsang and D. P. -K. Lun, "Optimized Quality Feature Learning for Video Quality Assessment," ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5 is available at https://doi.org/10.1109/ICASSP49357.2023.10095975. | en_US
dc.subject | Multi-channel convolutional neural network | en_US
dc.subject | Quality feature learning | en_US
dc.subject | No reference video quality assessment | en_US
dc.subject | Self-supervised learning | en_US
dc.subject | Semi-supervised learning | en_US
dc.title | Optimized quality feature learning for video quality assessment | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 5 | en_US
dc.identifier.doi | 10.1109/ICASSP49357.2023.10095975 | en_US
dcterms.abstract | Recently, transfer learning-based methods have been adopted in video quality assessment (VQA) to compensate for the lack of large-scale training samples and human annotation labels. However, these methods introduce a domain gap between the source and target domains, resulting in a sub-optimal feature representation that degrades accuracy. This paper proposes optimized quality feature learning via a multi-channel convolutional neural network (CNN) with a gated recurrent unit (GRU) for no-reference (NR) VQA. First, the multi-channel CNN is pre-trained on the image quality assessment (IQA) domain using non-human annotation labels, an approach inspired by self-supervised learning. Then, semi-supervised learning is used to fine-tune the CNN and transfer the knowledge from IQA to VQA while considering motion information for optimized quality feature learning. Finally, all frame quality features are extracted and fed into the GRU to obtain the final video quality. Experimental results demonstrate that our model achieves better performance than state-of-the-art VQA approaches. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4-10 June 2023, p. 1-5 | en_US
dcterms.issued | 2023 | -
dc.relation.conference | IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP] | en_US
dc.description.validate | 202307 bcww | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a2262 | -
dc.identifier.SubFormID | 47263 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Centre for Advances in Reliability and Safety (CAiRS) admitted under AIR@InnoHK Research Cluster | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
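
The abstract above describes a pipeline in which a multi-channel CNN extracts per-frame quality features that a GRU then aggregates into a single video quality score. The following is a minimal PyTorch sketch of that frame-CNN-plus-GRU structure only; it is not the authors' implementation, all class names, layer sizes, and the dummy input are hypothetical, and the self-supervised IQA pre-training and semi-supervised fine-tuning stages described in the abstract are not shown.

import torch
import torch.nn as nn

class FrameQualityCNN(nn.Module):
    """Toy stand-in for the multi-channel CNN quality-feature extractor."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),          # global pooling to one vector per frame
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (B, C, H, W)
        f = self.backbone(x).flatten(1)        # (B, 64)
        return self.proj(f)                    # (B, feat_dim)

class NRVQASketch(nn.Module):
    """Per-frame quality features -> GRU over time -> scalar video quality score."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = FrameQualityCNN(feat_dim=feat_dim)
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, video):                  # video: (B, T, C, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)           # (B*T, C, H, W)
        feats = self.cnn(frames).view(b, t, -1)
        _, h = self.gru(feats)                 # final hidden state: (1, B, hidden)
        return self.head(h[-1]).squeeze(-1)    # (B,) predicted quality score

if __name__ == "__main__":
    model = NRVQASketch()
    dummy = torch.randn(2, 8, 3, 112, 112)     # 2 clips, 8 frames each
    print(model(dummy).shape)                   # torch.Size([2])

The dummy forward pass at the bottom illustrates the expected input layout (batch of clips, frames first within each clip) and the single score produced per video.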
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Kwong_Optimized_Quality_Feature.pdf | Pre-Published version | 1.43 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 131 (as of Apr 14, 2025)
Downloads: 62 (as of Apr 14, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.