Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/288
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Hui, KC | -
dc.creator | Siu, WC | -
dc.date.accessioned | 2014-12-11T08:28:24Z | -
dc.date.available | 2014-12-11T08:28:24Z | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10397/288 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. | en_US
dc.rights | This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder. | en_US
dc.subject | Autocorrelation model | en_US
dc.subject | Compound covariance | en_US
dc.subject | Hybrid video coding | en_US
dc.subject | Motion compensation | en_US
dc.subject | Motion model | en_US
dc.title | Extended analysis of motion-compensated frame difference for block-based motion prediction error | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1232 | -
dc.identifier.epage | 1245 | -
dc.identifier.volume | 16 | -
dc.identifier.issue | 5 | -
dc.identifier.doi | 10.1109/TIP.2007.894263 | -
dcterms.abstract | In the past, most design and optimization work on hybrid video codecs relied mainly on experimental evidence. A proper theoretical model is always desirable, since it allows us to explain the phenomena of existing codecs and to design better ones. In this paper, we make use of the first-order Markov model to derive an approximate separable autocorrelation model for the block-based motion-compensated frame difference (MCFD) signal. A major assumption of our derivation is that the net deformation of pixels within a block is, in general, directional rather than uniformly distributed. We also show that imperfect block-based motion compensation is significant to both the theoretical analysis and the behavior of motion-compensated codecs. Our experimental results show that the derived model describes the statistical characteristics of MCFD signals accurately. The model also shows that imperfectly formulated block-based motion compensation can result in an incorrect MCFD autocorrelation function and, conversely, that accounting for this imperfection can lead to a better block-based motion compensation scheme. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on image processing, May 2007, v. 16, no. 5, p. 1232-1245 | -
dcterms.isPartOf | IEEE transactions on image processing | -
dcterms.issued | 2007-05 | -
dc.identifier.isi | WOS:000245838400005 | -
dc.identifier.scopus | 2-s2.0-34247394919 | -
dc.identifier.eissn | 1941-0042 | -
dc.identifier.rosgroupid | r33417 | -
dc.description.ros | 2006-2007 > Academic research: refereed > Publication in refereed journal | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_IR/PIRA | en_US
dc.description.pubStatus | Published | en_US
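
A note on the model named in the abstract above: a separable first-order Markov autocorrelation has the classical form R(m, n) = sigma^2 * rho_h^|m| * rho_v^|n|, where rho_h and rho_v are the horizontal and vertical correlation coefficients. The Python sketch below illustrates only this general form, not the paper's actual derivation; the residual array mcfd (filled with random data so the example runs), the function names, and the lag range are all assumptions made for illustration.

import numpy as np

def separable_markov_acf(rho_h, rho_v, var, max_lag):
    # Separable first-order Markov model: R(m, n) = var * rho_v**|m| * rho_h**|n|
    lags = np.arange(-max_lag, max_lag + 1)
    return var * np.outer(rho_v ** np.abs(lags), rho_h ** np.abs(lags))

def lag1_coeff(x, axis):
    # Empirical lag-1 (one-pixel shift) correlation coefficient along one axis
    x = x - x.mean()
    a = x[:-1, :] if axis == 0 else x[:, :-1]
    b = x[1:, :] if axis == 0 else x[:, 1:]
    return float((a * b).sum() / (x * x).sum())

# 'mcfd' stands in for a real motion-compensated frame difference block;
# random data is used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
mcfd = rng.standard_normal((64, 64))

rho_h = lag1_coeff(mcfd, axis=1)   # horizontal correlation coefficient
rho_v = lag1_coeff(mcfd, axis=0)   # vertical correlation coefficient
model_acf = separable_markov_acf(rho_h, rho_v, mcfd.var(), max_lag=8)
print(f"rho_h = {rho_h:+.3f}, rho_v = {rho_v:+.3f}")

On a real MCFD block, the modelled autocorrelation returned by separable_markov_acf could be compared against the empirical autocorrelation over the same lags to judge how well the separable form fits.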
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
motion-compensated_07.pdf |  | 1.16 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query

Page views: 119 (as of Apr 21, 2024; last week: 1)
Downloads: 220 (as of Apr 21, 2024)
SCOPUS citations: 32 (as of Apr 26, 2024; last week: 0, last month: 0)
Web of Science citations: 23 (as of Apr 25, 2024; last week: 0, last month: 0)

