Exploiting a depth context model in visual tracking with correlation filter
Zhao-yun CHEN, Lei LUO, Da-fei HUANG, Mei WEN, Chun-yuan ZHANG
Recently, correlation filter based trackers have attracted considerable attention for their high computational efficiency. However, they cannot handle occlusion and scale variation well. This paper aims to prevent tracking failure in these two situations by integrating depth information into a correlation filter based tracker. Using RGB-D data, we construct a depth context model that reveals the spatial correlation between the target and its surrounding regions. Furthermore, we adopt a region growing method to make the tracker robust to occlusion and scale variation. Additional optimizations, such as a model updating scheme, improve performance on longer video sequences. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracker performs favourably against state-of-the-art algorithms.
Visual tracking / Depth context model / Correlation filter / Region growing
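The correlation filter tracking that the abstract builds on follows a standard train/detect cycle: learn a filter in the Fourier domain whose response to the target patch is a Gaussian peak, then locate the target in the next frame at the maximum of the correlation response. The sketch below is a minimal MOSSE-style illustration of that cycle only; it is not the authors' depth-aware variant, and the function names and the regularizer `lam` are illustrative assumptions.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-3):
    """Closed-form correlation filter in the Fourier domain:
    H* = (G . conj(F)) / (F . conj(F) + lam),
    where F, G are the FFTs of the image patch and the desired
    (Gaussian-peaked) response, and lam regularizes the division."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_star, patch):
    """Correlate the learned filter with a new patch; the peak of the
    real-valued response map is the estimated target position."""
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_star))
    return np.unravel_index(np.argmax(response), response.shape)

# Usage: train on a patch with a Gaussian label centered on the target,
# then the detected peak on the same patch recovers that center.
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
ys, xs = np.mgrid[:32, :32]
label = np.exp(-((ys - 10) ** 2 + (xs - 20) ** 2) / (2 * 2.0 ** 2))
H_star = train_filter(patch, label)
print(detect(H_star, patch))  # peak at the labeled center (10, 20)
```

In a full tracker this cycle runs per frame, with the filter updated by a running average; the paper's contribution is to gate such updates and the search region with a depth context model and region growing, which this sketch does not attempt to reproduce.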