(Special Paper) JBE Vol. 23, No. 2, March 2018
https://doi.org/10.5909/jbe.2018.23.2.186
ISSN 2287-9137 (Online)  ISSN 1226-7953 (Print)

Robust Online Object Tracking via Convolutional Neural Network

Jong In Gil a) and Manbae Kim a)

Abstract

In this paper, we propose an online tracking method that uses a convolutional neural network (CNN) to track an object. It is well known that a large number of training samples are needed to train a model offline. To avoid this, we use an untrained model and update it with training samples collected online, directly from the test sequences. While conventional methods learn their models from training samples offline, we demonstrate that a small set of samples is sufficient for online object tracking. In addition, we define a loss function containing color information, which prevents the model from being trained on wrong training samples. Experiments validate that the tracking performance is equivalent to that of four comparative methods or outperforms them.

Keyword: visual tracking, convolutional neural network, on-line tracking, probability map, color histogram

a) Department of Computer and Communications Eng., Kangwon National University
Corresponding Author: Manbae Kim, E-mail: manbae@kangwon.ac.kr, Tel: +82-33-250-6395, ORCID: http://orcid.org/0000-0002-4702-8276
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2017R1D1A3B03028806).
Manuscript received November 28, 2017; Revised January 24, 2018; Accepted January 24, 2018.
Copyright 2016 Korean Institute of Broadcast and Media Engineers. All rights reserved.
This is an Open-Access article distributed under the terms of the Creative Commons BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and not altered.
I. Introduction

Object tracking has long been studied through approaches such as color-based probabilistic tracking [1], feature-point tracking [2,3], kernel-based tracking [4], and motion-based tracking with a moving camera [5]; a comprehensive benchmark of modern trackers is given in [6]. Many recent trackers follow the tracking-by-detection paradigm, in which a classifier separating the object from the background is trained and then updated while tracking proceeds. Because the classifier is updated with samples collected during tracking, incorrectly labeled samples can cause overfitting and tracking failure. Kalal et al. proposed TLD [7,8], which combines tracking, learning, and detection and classifies samples with a random forest. Babenko et al. observed that ambiguously labeled samples used with AdaBoost cause drift, and addressed this with multiple instance learning (MIL) [9]. More recently, convolutional neural networks (CNN) have been applied to visual tracking [10-12]. In this paper, we propose an online CNN tracker that requires no offline training and learns from a small number of samples collected from the test sequence itself.

Fig. 1. Architecture of CNN for visual tracking
II. Proposed Method

1. Network Architecture

Fig. 1 shows the CNN used for tracking. The network consists of convolutional layers followed by fully connected layers. The input is a 32x32x3 color image patch. The first convolutional layer applies eight 3x3 filters, producing a 30x30x8 feature map, which pooling reduces to 15x15x8; a further reduction yields a 7x7x8 map, which is flattened into a 392-dimensional vector. Two hidden fully connected layers of 100 and 80 units follow, and the output layer decides whether the patch belongs to the object or to the background.

Fig. 2. Positive and negative bounding boxes obtained from a tracked object (red: positive image patch, blue: negative image patch)
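The layer sizes above (32x32x3 input, 30x30x8, 15x15x8, 7x7x8, a 392-dimensional vector, hidden layers of 100 and 80 units, and a two-class output) can be checked with a minimal numpy forward pass. This is a sketch, not the authors' implementation: the paper does not state how the 15x15x8 map is reduced to 7x7x8, so a second 2x2 pooling is assumed here, and all weights are random placeholders.

```python
import numpy as np

def conv2d_valid(x, w):
    """Valid 2-D convolution: x is (H, W, Cin), w is (k, k, Cin, Cout)."""
    k = w.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd borders are cropped)."""
    H, W, C = x.shape
    H2, W2 = H // 2, W // 2
    x = x[:H2 * 2, :W2 * 2, :].reshape(H2, 2, W2, 2, C)
    return x.max(axis=(1, 3))

def forward(patch, params):
    """Forward pass matching the layer sizes described in the paper."""
    h = np.maximum(conv2d_valid(patch, params["conv"]), 0)  # 32x32x3 -> 30x30x8
    h = maxpool2(h)                                         # -> 15x15x8
    h = maxpool2(h)                                         # -> 7x7x8 (assumed second pooling)
    v = h.reshape(-1)                                       # -> 392
    v = np.maximum(params["W1"] @ v + params["b1"], 0)      # hidden layer, 100 units
    v = np.maximum(params["W2"] @ v + params["b2"], 0)      # hidden layer, 80 units
    z = params["W3"] @ v + params["b3"]                     # 2-class output (object / background)
    e = np.exp(z - z.max())
    return e / e.sum()                                      # softmax probabilities

rng = np.random.default_rng(0)
params = {
    "conv": rng.normal(0, 0.1, (3, 3, 3, 8)),
    "W1": rng.normal(0, 0.1, (100, 392)), "b1": np.zeros(100),
    "W2": rng.normal(0, 0.1, (80, 100)),  "b2": np.zeros(80),
    "W3": rng.normal(0, 0.1, (2, 80)),    "b3": np.zeros(2),
}
p = forward(rng.normal(0, 1, (32, 32, 3)), params)
print(p.shape, round(float(p.sum()), 6))  # prints "(2,) 1.0"
```

The shape progression confirms that 7x7x8 flattens exactly to the 392-dimensional vector fed to the fully connected layers.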
2. Online Training Samples

Positive samples are image patches cropped around the tracked object position, obtained by shifting the bounding box by small offsets in x and y. Negative samples are patches of the same size taken farther from the object, as illustrated in Fig. 2. All patches are normalized by a whitening transformation, Eq. (1), before being fed to the CNN; Fig. 3 shows patches before and after whitening. Unlike conventional trackers that train a CNN offline on large datasets, the proposed method starts from an untrained CNN and updates it online with only these samples; related online-learning trackers include multiple instance learning [9] and MeanShift-based tracking [13]. From the CNN outputs a probability map BP is computed for each frame, and from it an integral probability map P is constructed (Fig. 4): BP_p and P_p are built from positive responses, and BP_n and P_n from negative responses.

Fig. 3. Whitening transformation (top row: original images, bottom row: transformed images)
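The online sample collection and whitening steps can be sketched as follows. This is an illustration under stated assumptions, not the paper's code: Eq. (1) is read as zero-mean, unit-variance normalization per patch, and the shift distances, negative-sample distance, and sample counts (`pos_shift`, `neg_dist`, `n_neg`) are hypothetical parameters, since the exact values did not survive in the text.

```python
import numpy as np

def whiten(patch, eps=1e-8):
    """Per-patch whitening, reading Eq. (1) as zero-mean, unit-variance scaling."""
    return (patch - patch.mean()) / (patch.std() + eps)

def crop(frame, cx, cy, size=32):
    """Crop a size x size patch centered at (cx, cy)."""
    h = size // 2
    return frame[cy - h:cy + h, cx - h:cx + h]

def collect_samples(frame, cx, cy, pos_shift=2, neg_dist=12, n_neg=4, size=32):
    """Positive patches: small shifts around the tracked center.
    Negative patches: same-size crops taken farther from the object."""
    pos = [whiten(crop(frame, cx + dx, cy + dy, size))
           for dx in (-pos_shift, 0, pos_shift)
           for dy in (-pos_shift, 0, pos_shift)]
    angles = np.linspace(0, 2 * np.pi, n_neg, endpoint=False)
    neg = [whiten(crop(frame, int(cx + neg_dist * np.cos(a)),
                       int(cy + neg_dist * np.sin(a)), size))
           for a in angles]
    return np.stack(pos), np.stack(neg)

frame = np.random.default_rng(1).random((120, 160))
pos, neg = collect_samples(frame, cx=80, cy=60)
print(pos.shape, neg.shape)  # prints "(9, 32, 32) (4, 32, 32)"
```

Whitening each patch independently makes the CNN input invariant to local brightness and contrast, which is what Fig. 3 illustrates.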
Fig. 4. Procedure of generating Integral Probability Map

3. Loss Function

The CNN is trained by minimizing a cross-entropy loss, Eq. (3):

E = - \sum_n [ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) ]

where t_n is the label of the n-th training sample and y_n is the CNN output for that sample. Because tracking is a two-class problem (object vs. background), the labels come from the positive and negative patches collected online rather than from ground truth. To prevent the CNN from being updated with wrong samples, the loss is extended with a term based on color information, which compares the color statistics of a training sample with those of the tracked object so that inconsistent samples contribute less to the update. Training proceeds for a small number of epochs per frame, allowing the model to adapt to appearance changes without overfitting to any single frame.
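Two of the components above are standard enough to sketch directly: the two-class cross-entropy of Eq. (3), and the integral probability map of Fig. 4, which (like an integral image) lets the probability mass inside any rectangle be read off in constant time. This is a minimal sketch assuming the conventional integral-image construction; the paper's exact equations did not survive extraction.

```python
import numpy as np

def cross_entropy(y, t, eps=1e-12):
    """Binary cross entropy, Eq. (3): E = -sum_n [ t_n ln y_n + (1-t_n) ln(1-y_n) ]."""
    y = np.clip(y, eps, 1 - eps)  # guard against log(0)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

def integral_map(P):
    """Integral probability map: S[i, j] = sum of P over rows 0..i, cols 0..j."""
    return P.cumsum(axis=0).cumsum(axis=1)

def box_sum(S, r0, c0, r1, c1):
    """Probability mass over rows r0..r1, cols c0..c1 (inclusive) in O(1) via S."""
    total = S[r1, c1]
    if r0 > 0:
        total -= S[r0 - 1, c1]
    if c0 > 0:
        total -= S[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += S[r0 - 1, c0 - 1]
    return total

# Uniform 4x4 map: any 2x2 box should hold 4 * 0.25 = 1.0 of probability mass.
P = np.full((4, 4), 0.25)
S = integral_map(P)
mass = box_sum(S, 1, 1, 2, 2)
print(mass)  # prints "1.0"

loss = cross_entropy(np.array([0.9, 0.1]), np.array([1.0, 0.0]))
```

The same `box_sum` lookup works for both the positive map P_p and the negative map P_n, so comparing object-likelihood against background-likelihood over a candidate box costs only a few array reads.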
4. Object Localization

In a new frame, candidate patches are sampled within a search range centered on the previous object position (Fig. 5). Each candidate is evaluated by the CNN, and the outputs form a score map (Fig. 6). The scores are normalized so that the maximum value becomes 1.0, and the new object position is estimated as the center of gravity of the high-score region. Restricting the search to a neighborhood of the previous position keeps the computation low and suppresses responses from background regions that resemble the object.

Fig. 5. Object search range (yellow) and candidate image patch (blue)
Fig. 6. Input images and score maps
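The center-of-gravity localization step can be sketched as below. The normalization to a maximum of 1.0 follows the text; the threshold ratio used to select the high-score region is an assumed parameter, as its value did not survive extraction.

```python
import numpy as np

def localize(score_map, thresh_ratio=0.8):
    """Estimate the object center as the center of gravity of high scores.
    Scores are normalized so the maximum becomes 1.0; values below
    thresh_ratio (an assumed parameter) are discarded."""
    s = score_map / score_map.max()
    s = np.where(s >= thresh_ratio, s, 0.0)
    ys, xs = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    total = s.sum()
    return (xs * s).sum() / total, (ys * s).sum() / total  # (cx, cy)

# Synthetic score map with a single Gaussian peak at (x=12, y=7)
yy, xx = np.mgrid[0:20, 0:20]
score = np.exp(-((xx - 12) ** 2 + (yy - 7) ** 2) / 8.0)
cx, cy = localize(score)
print(round(cx, 2), round(cy, 2))  # prints "12.0 7.0"
```

Using the center of gravity rather than the single maximum makes the estimate robust to small fluctuations in individual CNN scores.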
III. Experimental Results

Whitening has a visible effect on the CNN responses. Fig. 7 compares results obtained with and without the whitening transformation: Fig. 7(a) shows the score maps and Fig. 7(b) the corresponding tracking results. The whitened input yields a more concentrated score map and a more accurate track. The proposed tracker was evaluated on sequences from the CVPR2013 Visual Tracker Benchmark [14]; Fig. 8 shows tracking results of the proposed method, and Fig. 9 plots the distance error over time.

(a) Score map (b) Tracking result
Fig. 7. Difference in score map and tracking result by whitening
Fig. 8. Experimental results of proposed method
Fig. 9. Tracking distance error

The distance error is measured as the L2-norm between the tracked position and the ground-truth object center. Fig. 9 shows the error over time for the test sequences (Boy, Shaking, Mhyang, car4, Couple, David2); the proposed tracker follows the object stably throughout. For comparison, four trackers were evaluated: CFP (Color-based Probabilistic Tracking) [1], MIL (Multiple Instance Learning) [9], OAB (On-line AdaBoost) [15], and TLD (Tracking-Learning-Detection) [8]. Table 1 reports the mean distance error of each method on each sequence.
Table 1. Mean value of distance error

Sequence | Proposed method | CFP     | MIL     | OAB      | TLD
Boy      | 2.4725          | 2.0587  | 2.3752  | 1.0126   | 2.6546
Shaking  | 7.0906          | 60.9270 | 13.0536 | 113.7921 | 1.7704
Mhyang   | 3.3691          | 8.4343  | 2.5483  | 1.4461   | 1.6057
car4     | 13.6391         | 20.5605 | 23.9314 | 67.9511  | 11.6711
Couple   | 7.2579          | 5.1237  | 9.6042  | 33.6817  | 2.1000
David2   | 4.6815          | 3.6372  | 5.1816  | 18.3104  | 2.2713

On Shaking and car4 the proposed method achieves a much lower error than CFP, MIL, and OAB, which lose the object. On Mhyang it outperforms CFP, and on Couple and David2 its error remains within a few pixels of the best competitors. Tracking precision, the fraction of frames whose distance to the ground truth falls below a threshold, was also measured for thresholds from 0 to 50 pixels; the resulting curves are shown in Fig. 10.

Fig. 10. Tracking precision
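Both evaluation measures used above, the mean L2 center distance of Table 1 and the precision curve of Fig. 10, are simple to compute. The following sketch uses made-up center coordinates purely for illustration.

```python
import numpy as np

def center_error(pred, gt):
    """Per-frame L2 distance between predicted and ground-truth centers."""
    return np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)

def precision_curve(errors, max_thresh=50):
    """Fraction of frames whose center error is within each threshold 0..max_thresh px."""
    thresholds = np.arange(max_thresh + 1)
    return np.array([(errors <= t).mean() for t in thresholds])

# Hypothetical three-frame example (not data from the paper)
pred = [(10, 10), (14, 10), (30, 30)]
gt   = [(10, 10), (10, 10), (10, 10)]
err = center_error(pred, gt)
prec = precision_curve(err)
print(err.round(2), round(float(err.mean()), 2))  # errors 0, 4, 28.28; mean 10.76
```

Averaging `err` over all frames of a sequence gives one cell of Table 1, and `prec` evaluated over thresholds 0 to 50 gives one curve of Fig. 10.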
In the precision curves of Fig. 10, the proposed method is among the best trackers on Boy, and clearly exceeds CFP, MIL, and OAB on Shaking and car4. On Mhyang it outperforms CFP and is comparable to MIL, OAB, and TLD.

IV. Conclusion

In this paper, we proposed an online object tracking method based on a CNN that requires no offline training. Training samples are collected directly from the test sequence, and a loss function containing color information prevents the model from being updated with wrong samples. Experiments on benchmark sequences showed that a small number of online samples is sufficient, and that the tracking performance is equivalent to or better than that of four comparative trackers.

References

[1] P. Perez, C. Hue, J. Vermaak, and M. Gangnet, "Color-Based Probabilistic Tracking," European Conf. on Computer Vision (ECCV), pp. 661-675, 2002.
[2] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Int. Joint Conf. on Artificial Intelligence, pp. 674-679, Aug. 1981.
[3] C. Tomasi and T. Kanade, "Detection and Tracking of Point Features," Technical Report CMU-CS-91-132, 1991.
[4] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-Based Object Tracking," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 5, pp. 564-577, May 2003.
[5] K. Lee, S. Ryu, S. Lee, and K. Park, "Motion based object tracking with mobile camera," Electronics Letters, Vol. 34, No. 3, pp. 256-258, 1998.
[6] Y. Wu, J. Lim, and M. Yang, "Object Tracking Benchmark," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 37, No. 9, pp. 1834-1848, Sep. 2015.
[7] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N Learning: Bootstrapping Binary Classifiers by Structural Constraints," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 49-56, 2010.
[8] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-Learning-Detection," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 34, No. 7, pp. 1409-1422, July 2012.
[9] B. Babenko, M. Yang, and S. Belongie, "Robust Object Tracking with Online Multiple Instance Learning," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 33, No. 8, pp. 1619-1632, Aug. 2011.
[10] H. Li, Y. Li, and F. Porikli, "DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking," IEEE Trans. on Image Processing, Vol. 25, No. 4, pp. 1834-1848, April 2016.
[11] K. Zhang, Q. Liu, and M. Yang, "Robust Visual Tracking via Convolutional Networks Without Training," IEEE Trans. on Image Processing, Vol. 25, No. 4, pp. 1779-1792, April 2016.
[12] X. Zhou, L. Xie, P. Zhang, and Y. Zhang, "An Ensemble of Deep Neural Networks for Object Tracking," IEEE Conf. on Image Processing, pp. 843-847, 2014.
[13] D. Comaniciu, V. Ramesh, and P. Meer, "Real-Time Tracking of Non-Rigid Objects using Mean Shift," IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 2, pp. 142-149, 2000.
[14] Y. Wu, J. Lim, and M. H. Yang, "Online object tracking: A benchmark," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2411-2418, 2013.
[15] H. Grabner, C. Leistner, and H. Bischof, "Semi-supervised On-Line Boosting for Robust Tracking," British Machine Vision Conf., Vol. 1, No. 5, p. 6, 2006.
Jong In Gil
- Aug. 2010: B.S. degree
- Aug. 2012: M.S. degree
- Sep. 2012 ~ present: Department of Computer and Communications Engineering, IT College, Kangwon National University

Manbae Kim
- 1983: B.S. degree
- 1986: M.S. degree, University of Washington, Seattle
- 1992: Ph.D. degree, University of Washington, Seattle
- 1992 ~ 1998: …
- 1998 ~ present: Department of Computer and Communications Engineering, IT College, Kangwon National University
- 2016 ~ present: …
- ORCID: http://orcid.org/0000-0002-4702-8276
- Research interests: 3D, …