CLC number: TP391.4

On-line Access: 2017-05-24

Received: 2015-11-10

Revision Accepted: 2016-06-06

Crosschecked: 2017-04-13

 ORCID:

Lei Luo

http://orcid.org/0000-0002-9329-1411

Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.5 P.667-679

http://doi.org/10.1631/FITEE.1500389


Exploiting a depth context model in visual tracking with correlation filter


Author(s):  Zhao-yun Chen, Lei Luo, Da-fei Huang, Mei Wen, Chun-yuan Zhang

Affiliation(s):  College of Computer, National University of Defense Technology, Changsha 410073, China

Corresponding email(s):   chenzhaoyun@nudt.edu.cn, l.luo@nudt.edu.cn

Key Words:  Visual tracking, Depth context model, Correlation filter, Region growing


Zhao-yun Chen, Lei Luo, Da-fei Huang, Mei Wen, Chun-yuan Zhang. Exploiting a depth context model in visual tracking with correlation filter[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(5): 667-679.

@article{Chen2017,
title="Exploiting a depth context model in visual tracking with correlation filter",
author="Zhao-yun Chen, Lei Luo, Da-fei Huang, Mei Wen, Chun-yuan Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="18",
number="5",
pages="667-679",
year="2017",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1500389"
}

%0 Journal Article
%T Exploiting a depth context model in visual tracking with correlation filter
%A Zhao-yun Chen
%A Lei Luo
%A Da-fei Huang
%A Mei Wen
%A Chun-yuan Zhang
%J Frontiers of Information Technology & Electronic Engineering
%V 18
%N 5
%P 667-679
%@ 2095-9184
%D 2017
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1500389

TY - JOUR
T1 - Exploiting a depth context model in visual tracking with correlation filter
A1 - Zhao-yun Chen
A1 - Lei Luo
A1 - Da-fei Huang
A1 - Mei Wen
A1 - Chun-yuan Zhang
JO - Frontiers of Information Technology & Electronic Engineering
VL - 18
IS - 5
SP - 667
EP - 679
SN - 2095-9184
Y1 - 2017
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1500389
ER -


Abstract: 
Recently, correlation filter based trackers have attracted considerable attention for their high computational efficiency. However, they cannot handle occlusion and scale variation well. This paper aims to prevent tracker failure in these two situations by integrating depth information into a correlation filter based tracker. Using RGB-D data, we construct a depth context model that reveals the spatial correlation between the target and its surrounding regions. Furthermore, we adopt a region growing method to make our tracker robust to occlusion and scale variation. Additional optimizations, such as a model updating scheme, are applied to improve performance on longer video sequences. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracker performs favourably against state-of-the-art algorithms.

Research on a depth context model in correlation filter based visual tracking

Summary: Recently, trackers based on correlation filters have received considerable attention for their high computational efficiency, but this approach does not handle occlusion and scale variation well. This paper aims to integrate depth information into a correlation filter based tracker to resolve tracking failures in these two situations. A depth context model is constructed from RGB-D data to describe the spatial correlation between the target and its surrounding regions. In addition, a region growing method is adopted to make the tracker more robust to occlusion and scale variation, and optimizations such as model updating are applied to improve performance on longer video sequences. Qualitative and quantitative evaluations on a challenging set of benchmark image sequences show that the proposed tracker outperforms state-of-the-art algorithms.

Key words: visual tracking; depth context model; correlation filter; region growing
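
To make the core technique concrete, below is a minimal sketch of correlation filter tracking in the classic MOSSE style, written in Python with NumPy. It illustrates only the general principle the abstract refers to, not the authors' implementation (which additionally builds a depth context model from RGB-D data); the function names, single-channel grayscale input, and parameter values are assumptions made for this example.

import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peaked at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2.0 * sigma ** 2))

def train_filter(patch, sigma=2.0, eps=1e-2):
    """Closed-form filter in the Fourier domain: H* = (G . conj(F)) / (F . conj(F) + eps)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_response(patch.shape, sigma))
    return (G * np.conj(F)) / (F * np.conj(F) + eps)

def detect(H_conj, patch):
    """Correlate the learned filter with a new patch; the response peak gives the shift."""
    resp = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    h, w = patch.shape
    # Convert the peak position to a displacement from the patch centre
    # (FFT wrap-around is ignored here for simplicity).
    return (dy - h // 2, dx - w // 2), resp.max()

In practice the patch is usually preprocessed (e.g., log transform and cosine window), and the filter is refreshed over time with a running average such as H = (1 - eta) * H + eta * H_new; model updating schemes in correlation filter trackers are commonly of this form, though the paper's exact scheme is not reproduced here.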
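
The region growing step can likewise be pictured as a depth-driven flood fill. The following sketch applies classic seeded region growing to a depth map; the seed choice, 4-connectivity, and the depth tolerance are illustrative assumptions, and the paper's exact growing criteria may differ.

from collections import deque
import numpy as np

def grow_region(depth, seed, tol=50.0):
    """Return a boolean mask of pixels depth-connected to the seed.

    depth : 2-D array of depth values (assumed millimetres).
    seed  : (row, col) believed to lie on the target.
    tol   : maximum allowed deviation from the running region mean.
    """
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    mean, count = float(depth[seed]), 1
    while queue:
        r, c = queue.popleft()
        # Examine the 4-connected neighbours of the current pixel.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(depth[nr, nc]) - mean) <= tol):
                mask[nr, nc] = True
                # Fold the newly accepted pixel into the running mean.
                count += 1
                mean += (float(depth[nr, nc]) - mean) / count
                queue.append((nr, nc))
    return mask

Tracking the extent of the grown mask from frame to frame gives a natural cue for scale change, and a sudden loss of depth-consistent pixels around the predicted position can flag occlusion; this is one plausible way such a mask supports the robustness described in the abstract.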
