Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90505
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | en_US
dc.creator | Kwong, NW | en_US
dc.creator | Tsang, SH | en_US
dc.creator | Chan, YL | en_US
dc.creator | Lun, DPK | en_US
dc.creator | Lee, TK | en_US
dc.date.accessioned | 2021-07-15T02:11:58Z | -
dc.date.available | 2021-07-15T02:11:58Z | -
dc.identifier.issn | 0277-786X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/90505 | -
dc.description | International Workshop on Advanced Imaging Technology 2021 (IWAIT 2021), 2021, Online Only | en_US
dc.language.iso | en | en_US
dc.publisher | SPIE-International Society for Optical Engineering | en_US
dc.rights | Copyright 2021 Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited. | en_US
dc.rights | The following publication Ngai-Wing Kwong, Sik-Ho Tsang, Yui-Lam Chan, Daniel Pak-Kong Lun, and Tsz-Kwan Lee "No-reference video quality assessment metric using spatiotemporal features through LSTM", Proc. SPIE 11766, International Workshop on Advanced Imaging Technology (IWAIT) 2021, 1176629 (13 March 2021) is available at https://dx.doi.org/10.1117/12.2590406 | en_US
dc.title | No-reference video quality assessment metric using spatiotemporal features through LSTM | en_US
dc.type | Conference Paper | en_US
dc.identifier.volume | 11766 | en_US
dc.identifier.doi | 10.1117/12.2590406 | en_US
dcterms.abstract | Nowadays, a precise video quality assessment (VQA) model is essential to maintain quality of service (QoS). However, most existing VQA metrics are designed for specific purposes and ignore the spatiotemporal features of natural video. This paper proposes a novel general-purpose no-reference (NR) VQA metric, namely VQA-LSTM, which adopts Long Short-Term Memory (LSTM) modules with a masking layer and a pre-padding strategy to address these issues. First, we divide the distorted video into frames and extract significant yet universal spatial and temporal features that effectively reflect the quality of each frame. Second, a data preprocessing stage and the pre-padding strategy are applied to ease the training of our VQA-LSTM. Finally, a three-layer LSTM model incorporating a masking layer is designed to learn the sequence of spatial features as spatiotemporal features, and the sequence of temporal features as the gradient of temporal features, to evaluate the quality of videos. Two widely used VQA databases, MCL-V and LIVE, are tested to demonstrate the robustness of our VQA-LSTM, and the experimental results show that our VQA-LSTM correlates better with human perception than some state-of-the-art approaches. © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of SPIE : the International Society for Optical Engineering, 2021, v. 11766, 1176629 | en_US
dcterms.isPartOf | Proceedings of SPIE : the International Society for Optical Engineering | en_US
dcterms.issued | 2021 | -
dc.identifier.scopus | 2-s2.0-85103279536 | -
dc.relation.conference | International Workshop on Advanced Imaging Technology [IWAIT] | en_US
dc.identifier.eissn | 1996-756X | en_US
dc.identifier.artn | 1176629 | en_US
dc.description.validate | 202107 bcvc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a0964-n01 | -
dc.identifier.SubFormID | 2235 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | CAiRS under project 5.2 | en_US
dc.description.pubStatus | Published | en_US
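The pre-padding and masking strategy described in the abstract can be sketched in a minimal, framework-free way (the sequences and feature values below are illustrative placeholders, not data from the paper): variable-length per-frame feature sequences are left-padded to a common length, and a mask marks which timesteps carry real data so a downstream LSTM can skip the padding.

```python
import numpy as np

# Illustrative per-frame feature sequences for three videos of different
# lengths (frames x features); values are placeholders, not real features.
seqs = [
    np.ones((4, 2)) * 1.0,  # 4-frame video, 2 features per frame
    np.ones((6, 2)) * 2.0,  # 6-frame video
    np.ones((3, 2)) * 3.0,  # 3-frame video
]

PAD = 0.0  # padding value a masking layer is configured to skip

def pre_pad(sequences, pad_value=PAD):
    """Left-pad every sequence to the longest length (pre-padding), so the
    most recent frames end up at the final timesteps, which dominate an
    LSTM's last hidden state."""
    max_len = max(s.shape[0] for s in sequences)
    n_feat = sequences[0].shape[1]
    batch = np.full((len(sequences), max_len, n_feat), pad_value)
    for i, s in enumerate(sequences):
        batch[i, max_len - s.shape[0]:, :] = s  # real data at the end
    return batch

batch = pre_pad(seqs)
# The mask a masking layer would derive: True where a timestep holds
# real data (any feature differs from the padding value).
mask = np.any(batch != PAD, axis=-1)

print(batch.shape)       # (3, 6, 2)
print(mask.sum(axis=1))  # [4 6 3] -> original lengths recovered
```

In Keras terms this corresponds to `pad_sequences(..., padding='pre')` followed by a `Masking(mask_value=0.0)` layer ahead of the LSTM stack; the exact layer sizes and feature extractors of VQA-LSTM are not specified in this record.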
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2235_Kwong_No_Reference_Video.pdf | Pre-Published version | 977.28 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 117 (as of May 5, 2024)
Downloads: 47 (as of May 5, 2024)

Google ScholarTM

Check

Altmetric


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.