
CLC number: TP391

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2017-07-13


Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.7 P.989-1001

http://doi.org/10.1631/FITEE.1601338


Robust object tracking with RGBD-based sparse learning


Author(s):  Zi-ang Ma, Zhi-yu Xiang

Affiliation(s):  College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   kobebean@zju.edu.cn, xiangzy@zju.edu.cn

Key Words:  Object tracking, Sparse learning, Depth view, Occlusion templates, Occlusion detection


Zi-ang Ma, Zhi-yu Xiang. Robust object tracking with RGBD-based sparse learning[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(7): 989-1001.

@article{title="Robust object tracking with RGBD-based sparse learning",
author="Zi-ang Ma, Zhi-yu Xiang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="18",
number="7",
pages="989-1001",
year="2017",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1601338"
}



Abstract: 
Robust object tracking has been an important and challenging research area in computer vision for decades. With the increasing popularity of affordable depth sensors, range data are widely used in visual tracking because they provide robustness to varying illumination and occlusion. In this paper, a novel tracker based on RGBD data and sparse learning is proposed. The range data are integrated into the sparse learning framework in three respects. First, an extra depth view is added to the color-image-based visual features as an independent view for robust appearance modeling. Second, a special occlusion template set is designed to replenish the existing dictionary for handling various occlusion conditions. Finally, a depth-based occlusion detection method is proposed to efficiently determine an accurate time for the template update. Extensive experiments on both the KITTI and Princeton datasets demonstrate that the proposed tracker outperforms state-of-the-art tracking algorithms, including both sparse-learning-based and RGBD-based methods.
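The sparse-learning core described in the abstract can be illustrated with a minimal sketch: a candidate patch is coded over a dictionary of target templates augmented with occlusion (identity) templates, and the ℓ1-regularized coefficients separate target appearance from occluded pixels. All names, sizes, the toy data, and the ISTA solver below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_sparse_code(y, D, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA (illustrative solver)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

# Toy dictionary: a few target (appearance) templates plus identity columns
# acting as occlusion templates, mirroring the augmented dictionary the
# abstract describes (sizes here are made up for the example).
rng = np.random.default_rng(0)
d = 16                                  # patch dimension
T = rng.random((d, 3))                  # target templates
T /= np.linalg.norm(T, axis=0)          # normalize template columns
D = np.hstack([T, np.eye(d)])           # augmented dictionary [T, I]

y = 0.7 * T[:, 0] + 0.3 * T[:, 1]       # candidate patch: mix of templates
y[:4] += 1.0                            # simulate a partial occlusion

x = l1_sparse_code(y, D)
target_coef, occ_coef = x[:3], x[3:]
# The target-part reconstruction error scores the candidate; large
# occlusion coefficients flag which pixels are occluded.
recon_err = np.linalg.norm(y - T @ target_coef)
```

In a tracker of this type, the candidate with the lowest target-part reconstruction error is selected, and the occlusion coefficients indicate when template updates should be suppressed, which is the role the abstract assigns to depth-based occlusion detection.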




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE