Title: Motion estimation techniques by exploiting motion history and depth maps in video coding
Authors: Lee, Tsz Kwan
Advisors: Chan, Yui-lam (EIE); Siu, Wan-chi (EIE)
Subjects: Digital video -- Standards.
Issue Date: 2016
Publisher: The Hong Kong Polytechnic University
Abstract: Video coding with a low-delay hierarchical prediction structure was introduced primarily for real-time video applications. This structure is currently adopted in various emerging video coding standards, including MPEG-4 Part 10 (H.264/AVC), High Efficiency Video Coding (HEVC), and multi-view video coding (MVC). Its main disadvantage is that it requires motion estimation over distant reference frames. To maintain high coding efficiency, a large search range must be employed for distant reference pictures, which increases computational complexity dramatically. In addition to the hierarchical prediction structure, the latest HEVC standard provides a more flexible framework for confronting the tradeoff between coding efficiency and computational complexity. It achieves bitrate reductions of up to 50% compared with H.264. With this advantage, HEVC is the emerging industry standard for video streaming applications and online TV. HEVC's gains come from a new quadtree coding structure for block partitioning in motion estimation. However, this flexible recursive block partitioning, adopted for the sake of coded video quality, imposes heavy computations on an HEVC encoder. Therefore, this work investigates computational-complexity-reduction algorithms for emerging video coding standards.
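To illustrate why recursive quadtree partitioning is computationally heavy, the following minimal sketch (not part of the thesis; the function name and recursion are illustrative assumptions) counts how many distinct partition trees an HEVC-style 64x64 coding unit admits when it may be quad-split down to 8x8:

```python
def count_partitions(size, min_size=8):
    """Count the distinct ways a size x size coding unit can be
    recursively quad-split down to min_size (HEVC-style coding tree)."""
    if size == min_size:
        return 1  # smallest CU: no further split possible
    # either keep the CU whole (1 way), or split it into four quadrants,
    # each of which can be partitioned independently
    sub = count_partitions(size // 2, min_size)
    return 1 + sub ** 4
```

For a 64x64 CU this yields 83,522 candidate partition trees, which conveys why an exhaustive rate-distortion search over all partitions is expensive.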
This thesis then develops a number of fast motion estimation algorithms. First, a fast motion estimation scheme based on motion vector composition (MV composition) is proposed for the low-delay hierarchical P-frame structure; it expedites motion estimation for distant reference frames in the hierarchical P structure. In addition, a vector selection algorithm is tailor-made for the proposed hierarchical P coding scheme to further improve coding efficiency. Simulation results show that the proposed scheme delivers remarkable complexity savings and improved coding efficiency when coding frames in the low temporal layers of the hierarchical P structure. The rest of this work proposes performing motion locus prediction before motion estimation, so that a suitable search range can be adapted for each block. Thanks to the rapid development of MVC and 3D video, the state-of-the-art 3D coding framework provides multi-view plus depth video (MVD), in which depth maps are encoded in the bitstream as additional information. Depth maps record the distances of objects in the scene from a viewpoint. Using the depth maps of MVD sequences, we exploit the depth variation, the spatial correlation between blocks, and the temporal correlation between depth maps and texture motion, so that motion locus prediction can be achieved to speed up texture coding in an HEVC encoder. The depth information opens new room for designing an efficient adaptive search range (ASR) algorithm in HEVC. Simulation results show that the proposed ASR algorithms offer a significant complexity reduction with negligible loss of coded video quality.
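The two ideas above can be sketched in a few lines. This is a hedged illustration of the general techniques, not the thesis's actual algorithms: the function names, the simple vector summation, and the linear depth-to-range mapping are all assumptions made for clarity. MV composition approximates the motion vector to a distant reference by chaining the per-frame vectors of intermediate frames; a depth-based ASR scales the search range by block depth, on the reasoning that nearer objects (larger values in an 8-bit MVD depth map) tend to exhibit larger apparent motion:

```python
def compose_motion_vector(mv_per_step):
    """Approximate the MV from the current frame to a distant reference
    by summing the MVs of each intermediate one-frame step."""
    dx = sum(v[0] for v in mv_per_step)
    dy = sum(v[1] for v in mv_per_step)
    return dx, dy

def adaptive_search_range(depth_block, base_range=64, depth_max=255):
    """Scale the motion-estimation search range by the block's mean depth.

    In 8-bit MVD depth maps, larger values mean nearer objects, which
    tend to move more in the image plane; a floor of 8 pixels keeps a
    minimal search window for far (near-zero depth) blocks.
    """
    mean_depth = sum(depth_block) / len(depth_block)
    return max(8, round(base_range * mean_depth / depth_max))
```

For example, a block whose chained per-frame vectors are (2, 1), (3, -1), and (1, 0) composes to (6, 0), while a far-background block with near-zero depth keeps only the minimal 8-pixel search window instead of the full 64-pixel range.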
Description: PolyU Library Call No.: [THS] LG51 .H577P EIE 2016 Lee; xxiv, 143 pages : illustrations
URI: http://hdl.handle.net/10397/67222
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
- b29350426_link.htm (For PolyU Users, 208 B, HTML)
- b29350426_ira.pdf (For All Users, non-printable, 11.2 MB, Adobe PDF)
Citations as of Feb 11, 2019