Fast Visual Tracking Using Spatial Temporal Background Context Learning
Keywords: Visual Tracking, Context Learning, Spatial Temporal, Confidence Map, Fast Fourier Transform
Visual tracking has gained considerable prominence among researchers in recent years owing to its wide range of everyday applications, such as counting cars on a highway, analyzing crowd density at a concert or a football ground, or following a single person's movements through a surveillance camera. Numerous techniques have been proposed and evaluated in this research domain, yet the area still has much to offer. Two approaches are commonly deployed in visual tracking: discriminative tracking and generative tracking. Discriminative tracking relies on a pre-trained model that learns from data and treats object recognition as a binary classification problem. Generative tracking, on the other hand, uses previous states to predict the next state. In this paper, a novel tracker based on the generative approach is proposed, called the Illumination Invariant Spatio-Temporal Tracker (IISTC). The proposed technique takes the nearby surrounding regions into account and performs context learning so that the state of the object under consideration and its surrounding regions can be estimated in the next frame. The learning model operates in both the spatial and the temporal domain: the spatial part of the tracker considers nearby pixels within a frame, while the temporal part accounts for possible changes in object location across frames. The proposed tracker was tested on a set of 50 image sequences against four other state-of-the-art trackers. Experimental results reveal that it performs reasonably well in comparison, and that it is efficient in terms of both computational cost and accuracy.
The proposed tracker requires only four fast Fourier transform computations per frame, making it considerably faster. It also performs exceptionally well when there is a sudden change in background illumination.
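The FFT-based context-learning scheme the abstract describes can be illustrated with a rough sketch. The following is a hypothetical simplification, not the authors' implementation: the function name, window size, Gaussian context weighting, and learning rate `rho` are all assumptions, and this minimal version uses two FFT evaluations per frame rather than the four the paper reports.

```python
import numpy as np

def track_step(frame, prev_pos, model, size=64, rho=0.075):
    """One simplified spatio-temporal context tracking step (illustrative sketch).

    frame    : 2-D grayscale image (float array)
    prev_pos : (row, col) of the target centre in the previous frame
    model    : context model in the Fourier domain, or None on the first frame
    rho      : temporal learning rate for blending the model
    """
    r, c = prev_pos
    half = size // 2
    # Crop the local context region around the previous target position.
    patch = frame[r - half:r + half, c - half:c + half].astype(float)
    # Spatial weighting: a Gaussian makes nearby context pixels matter more.
    y, x = np.mgrid[-half:half, -half:half]
    weight = np.exp(-(x ** 2 + y ** 2) / (2.0 * (size / 4.0) ** 2))
    context = np.fft.fft2(patch * weight)                 # forward FFT
    if model is None:
        model = context
    # Confidence map: circular cross-correlation of the learned model with the
    # current context, via one element-wise product and one inverse FFT.
    conf = np.real(np.fft.ifft2(np.conj(model) * context))
    dy, dx = np.unravel_index(np.argmax(conf), conf.shape)
    # Shifts beyond half the window wrap around to negative offsets.
    if dy > half:
        dy -= size
    if dx > half:
        dx -= size
    new_pos = (r + dy, c + dx)
    # Temporal update: blend the new context into the model.
    model = (1.0 - rho) * model + rho * context
    return new_pos, model
```

Evaluating the confidence map in the Fourier domain is what makes this family of trackers fast: each frame costs O(n log n) in the context-window size instead of the O(n²) of a sliding-window search.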
Copyright (c) 2020 Asif Mukhtar, Arslan Majid, Kashif Fahim
The articles published in the International Journal of Computer and Information Technology (IJCIT) are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.