
CLC number: TP391

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2020-09-02


 ORCID:

Shui-wang Li

https://orcid.org/0000-0002-4587-513X

Li Lu

https://orcid.org/0000-0001-7904-8821


Frontiers of Information Technology & Electronic Engineering  2020 Vol.21 No.10 P.1467-1484

http://doi.org/10.1631/FITEE.1900507


Asymmetric discriminative correlation filters for visual tracking


Author(s):  Shui-wang Li, Qian-bo Jiang, Qi-jun Zhao, Li Lu, Zi-liang Feng

Affiliation(s):  National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu 610065, China

Corresponding email(s):   lishuiwang0721@163.com, jqianbo@163.com, qjzhao@scu.edu.cn, luli@scu.edu.cn, fengziliang@scu.edu.cn

Key Words:  Visual tracking, Discriminative correlation filter (DCF), Asymmetric DCF (ADCF)


Shui-wang Li, Qian-bo Jiang, Qi-jun Zhao, Li Lu, Zi-liang Feng. Asymmetric discriminative correlation filters for visual tracking[J]. Frontiers of Information Technology & Electronic Engineering, 2020, 21(10): 1467-1484.

@article{Li2020ADCF,
title="Asymmetric discriminative correlation filters for visual tracking",
author="Shui-wang Li, Qian-bo Jiang, Qi-jun Zhao, Li Lu, Zi-liang Feng",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="21",
number="10",
pages="1467-1484",
year="2020",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1900507"
}

%0 Journal Article
%T Asymmetric discriminative correlation filters for visual tracking
%A Shui-wang Li
%A Qian-bo Jiang
%A Qi-jun Zhao
%A Li Lu
%A Zi-liang Feng
%J Frontiers of Information Technology & Electronic Engineering
%V 21
%N 10
%P 1467-1484
%@ 2095-9184
%D 2020
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1900507

TY - JOUR
T1 - Asymmetric discriminative correlation filters for visual tracking
A1 - Shui-wang Li
A1 - Qian-bo Jiang
A1 - Qi-jun Zhao
A1 - Li Lu
A1 - Zi-liang Feng
JO - Frontiers of Information Technology & Electronic Engineering
VL - 21
IS - 10
SP - 1467
EP - 1484
SN - 2095-9184
Y1 - 2020
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1900507
ER -


Abstract: 
Discriminative correlation filters (DCFs) are efficient in visual tracking and have advanced the field significantly. However, the symmetry of the correlation (or convolution) operator leads to computational problems and harms generalized translation equivariance. The former problem has been approached in many ways, whereas the latter has not been well recognized. In this paper, we analyze the problems caused by the symmetry of circular convolution and propose an asymmetric operator, which, as a generalization of the former, has a weak generalized translation-equivariance property. With this operator, we propose a tracker called the asymmetric discriminative correlation filter (ADCF), which is more sensitive to translations of targets. Its asymmetry allows the filter and the samples to have different sizes; this flexibility makes the computational complexity of ADCF more controllable in the sense that the number of filter parameters does not grow with the sample size. Moreover, the normal matrix of ADCF is a block matrix whose blocks are two-level block Toeplitz matrices. Exploiting this structure, we design an algorithm that multiplies an N×N two-level block Toeplitz matrix by a vector with time complexity O(N log N) and space complexity O(N), instead of O(N²). Unlike DCF-based trackers, introducing spatial or temporal regularization does not increase the essential computational complexity of ADCF. Comparative experiments are performed on a synthetic dataset and four benchmarks (OTB-2013, OTB-2015, VOT-2016, and Temple-Color); the results show that our method achieves state-of-the-art visual tracking performance.
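The translation equivariance discussed in the abstract can be illustrated directly: circular cross-correlation, the core DCF operation, commutes with cyclic shifts of the sample. A minimal NumPy sketch (the function name circ_corr is ours, not from the paper):

```python
import numpy as np

def circ_corr(f, g):
    """Circular cross-correlation via the FFT (the core DCF operation):
    result[k] = sum_n f[n] * g[(n + k) mod N], for real-valued f and g."""
    return np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)).real

# Cyclically shifting the sample g by t shifts the response map by t:
rng = np.random.default_rng(0)
f = rng.standard_normal(8)          # filter
g = rng.standard_normal(8)          # sample
base = circ_corr(f, g)
shifted = circ_corr(f, np.roll(g, 3))
print(np.allclose(shifted, np.roll(base, 3)))  # equivariance holds cyclically
```

Note that this equivariance is only cyclic: a real translation of the target is not a cyclic shift at the boundaries, which is one motivation the paper gives for replacing the symmetric circular operator with an asymmetric one.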

Asymmetric discriminative correlation filters for visual tracking (Chinese-language version, translated)

Shui-wang Li, Qian-bo Jiang, Qi-jun Zhao, Li Lu, Zi-liang Feng
National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China

Abstract (translated from Chinese): The discriminative correlation filter (DCF) is an effective method in visual tracking and has significantly advanced the field. However, the symmetry of the convolution operator causes computational problems and breaks generalized translation equivariance. Many solutions have been proposed for the former problem, but the latter has received insufficient attention. This paper analyzes the problems caused by the symmetry of circular convolution and proposes an asymmetric convolution operation, which is proved to possess weak generalized translation equivariance. Using the proposed operation, we construct an asymmetric discriminative correlation filter tracker (ADCF). It is more sensitive to target translations, and its asymmetry allows the filter and the input samples to have different spatial sizes, making the computational complexity of ADCF more controllable in the sense that the number of filter parameters does not grow with the sample size. Moreover, the normal matrix of ADCF has a two-level block Toeplitz structure, which can be exploited to design a matrix-vector multiplication with O(N log N) time complexity and O(N) space complexity. In addition, unlike DCF-based trackers, introducing spatial and temporal regularization terms into ADCF does not essentially increase the computational complexity. Comparative experiments on four public benchmarks (OTB-2013, OTB-2015, VOT-2016, and Temple-Color) and a synthetic dataset show that the proposed method achieves state-of-the-art visual tracking performance.

Keywords: Visual tracking; Discriminative correlation filter (DCF); Asymmetric discriminative correlation filter (ADCF)
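The O(N log N) matrix-vector product mentioned in the abstract rests on a classical idea: a Toeplitz matrix can be embedded in a circulant matrix of twice the size, and circulant matrices are diagonalized by the FFT. The paper's algorithm handles the two-level block case; as a simpler illustration of the same idea, here is a one-level sketch in NumPy (the function name toeplitz_matvec and the example values are ours, not from the paper):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an N x N Toeplitz matrix by a vector in O(N log N) time
    and O(N) space, instead of the O(N^2) of a dense product.

    c: first column (length N); r: first row (length N, with r[0] == c[0]).
    The Toeplitz matrix is embedded in a 2N x 2N circulant matrix, whose
    action reduces to elementwise multiplication in the Fourier domain.
    """
    n = len(c)
    # First column of the embedding circulant: c, one zero, then the
    # reversed tail of the first row (r[N-1], ..., r[1]).
    col = np.concatenate([c, [0.0], r[:0:-1]])      # length 2N
    x_pad = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2N
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x_pad))
    return y[:n].real                               # top half is T @ x

# Check against a dense Toeplitz product on a small example:
c = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([1.0, 5.0, 6.0, 7.0])
x = np.array([1.0, -1.0, 2.0, 0.5])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(4)]
              for i in range(4)])
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))
```

A two-level block Toeplitz matrix (Toeplitz blocks arranged in a Toeplitz pattern) admits the same trick with a two-dimensional FFT, which is presumably how the paper reaches O(N log N) for its N×N normal matrix; the Lee (1986) reference it builds on treats this recursive case.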



