Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/82194
Title: Deep motion-appearance convolutions for robust visual tracking
Authors: Li, HJ
Wu, SH
Huang, SP
Lam, KM 
Xing, XF
Issue Date: 2019
Source: IEEE Access, 9 Dec. 2019, v. 7, p. 180451-180466
Abstract: Visual tracking is a challenging task due to unconstrained appearance variations and dynamic surrounding backgrounds, which largely arise from the complex motion of the target object. Therefore, the target's motion and its resulting appearance, as well as the correlation between them, should be considered comprehensively to achieve robust tracking performance. In this paper, we propose a deep neural network for visual tracking, namely the Motion-Appearance Dual (MADual) network, which employs a dual-branch architecture using deep two-dimensional (2D) and deep three-dimensional (3D) convolutions to integrate the local and global information of the target object's motion and appearance synchronously. For each frame of a tracking video, the 2D convolutional kernels of the deep 2D branch slide over the frame to extract its global spatial-appearance features. Meanwhile, the 3D convolutional kernels of the deep 3D branch collaboratively extract the appearance and the associated motion features of the visual target from successive frames. By sliding the 3D convolutional kernels along a video sequence, the model learns temporal features from previous frames and thereby generates the local patch-based motion patterns of the target. Sliding the 2D kernels on a frame and the 3D kernels on a frame cube synchronously enables a better hierarchical motion-appearance integration and boosts performance on the visual tracking task. To further improve the tracking precision, an extra ridge-regression model is trained for the tracking process, based not only on the bounding box given in the first frame, but also on its synchro-frame-cube, using our proposed Inverse Temporal Training (ITT) method. Extensive experiments on popular benchmark datasets, OTB2013, OTB50, OTB2015, UAV123, TC128, VOT2015 and VOT2016, demonstrate that the proposed MADual tracker performs favorably against many state-of-the-art methods.
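For illustration only, the following minimal PyTorch sketch mirrors the dual-branch idea described in the abstract: a 2D-convolution branch extracts appearance features from the current frame, a 3D-convolution branch extracts joint motion-appearance features from a cube of successive frames, and the two feature maps are fused. The layer sizes, channel counts, the concatenation-based fusion, and the ridge_weights helper are illustrative assumptions, not the authors' published MADual implementation.

# Minimal sketch of a dual-branch motion-appearance feature extractor.
# All layer sizes, channel counts, and the fusion step are hypothetical
# assumptions for illustration, not the authors' published architecture.
import torch
import torch.nn as nn

class DualBranchFeatures(nn.Module):
    def __init__(self, in_channels=3, feat_channels=64, clip_len=4):
        super().__init__()
        # Deep 2D branch: spatial kernels slide over a single frame to
        # extract global appearance features.
        self.branch2d = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Deep 3D branch: spatio-temporal kernels slide over a cube of
        # successive frames, capturing motion jointly with appearance.
        self.branch3d = nn.Sequential(
            nn.Conv3d(in_channels, feat_channels,
                      kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            # Collapse the temporal axis (T -> 1) with a full-length kernel.
            nn.Conv3d(feat_channels, feat_channels,
                      kernel_size=(clip_len, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
        )

    def forward(self, frame, frame_cube):
        # frame:      (B, C, H, W)    -- the current frame
        # frame_cube: (B, C, T, H, W) -- current frame plus T-1 previous frames
        app = self.branch2d(frame)                  # (B, F, H, W)
        mot = self.branch3d(frame_cube).squeeze(2)  # (B, F, H, W)
        # Hypothetical fusion: concatenate appearance and motion features.
        return torch.cat([app, mot], dim=1)

def ridge_weights(X, y, lam=1e-3):
    # Closed-form ridge regression, w = (X^T X + lam*I)^{-1} X^T y, shown
    # only to illustrate the kind of extra regression model the abstract
    # mentions; the authors' ITT training scheme is not reproduced here.
    d = X.shape[1]
    return torch.linalg.solve(X.T @ X + lam * torch.eye(d, dtype=X.dtype),
                              X.T @ y)

# Usage on random data:
#   model = DualBranchFeatures()
#   frame = torch.randn(1, 3, 128, 128)
#   cube  = torch.randn(1, 3, 4, 128, 128)
#   feats = model(frame, cube)  # (1, 128, 128, 128)

Concatenation is used here simply because it keeps both feature streams intact; the paper describes a tighter hierarchical motion-appearance integration, for which readers should consult the full text via the DOI below.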
Keywords: Visual tracking
3D convolutional kernels
Motion-appearance
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE Access
EISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2958405
Rights: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/
The following publication H. Li, S. Wu, S. Huang, K. Lam and X. Xing, "Deep Motion-Appearance Convolutions for Robust Visual Tracking," in IEEE Access, vol. 7, pp. 180451-180466, 2019 is available at https://dx.doi.org/10.1109/ACCESS.2019.2958405
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Li_Motion-Appearance_Convolutions_Visual.pdf (2.14 MB, Adobe PDF)
Open Access Information:
Status: Open access
File Version: Version of Record

Page views: 69 (as of Mar 24, 2024)
Downloads: 64 (as of Mar 24, 2024)
Scopus citations: 2 (as of Mar 28, 2024)
Web of Science citations: 1 (as of Mar 28, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.