(Special Paper) JBE Vol. 23, No. 2, March 2018
https://doi.org/10.5909/jbe.2018.23.2.186
ISSN 2287-9137 (Online), ISSN 1226-7953 (Print)

Robust Online Object Tracking via Convolutional Neural Network

Jong In Gil a) and Manbae Kim a)

Abstract
In this paper, we propose an online tracking method that uses a convolutional neural network (CNN) for object tracking. It is well known that a large number of training samples is needed to train a model offline. To avoid this requirement, we start from an untrained model and update it online with training samples collected directly from the test sequence. While conventional methods learn their models from offline training samples, we demonstrate that a small set of samples is sufficient for online object tracking. In addition, we define a loss function containing color information that prevents the model from being trained on wrong training samples. Experiments validate that the tracking performance is equivalent to or better than that of four comparative methods.

Keywords: visual tracking, convolutional neural network, on-line tracking, probability map, color histogram

a) Department of Computer and Communications Eng., Kangwon National University
Corresponding Author: Manbae Kim, E-mail: manbae@kangwon.ac.kr, Tel: +82-33-250-6395, ORCID: http://orcid.org/0000-0002-4702-8276

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2017R1D1A3B03028806).
Manuscript received November 28, 2017; Revised January 24, 2018; Accepted January 24, 2018.
Copyright © 2016 Korean Institute of Broadcast and Media Engineers. All rights reserved. This is an Open-Access article distributed under the terms of the Creative Commons BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and not altered.

Visual object tracking has been studied extensively, with approaches including color-based probabilistic tracking [1], feature-point tracking [2,3], kernel-based tracking [4], and motion-based tracking with a moving camera [5]; a comprehensive benchmark of recent trackers is given in [6]. Many modern trackers follow the tracking-by-detection framework, in which a classifier that separates the target from the surrounding background is trained and continually updated while tracking proceeds. Because only a small number of training samples can be collected online, such classifiers are prone to overfitting and can be corrupted by mislabeled samples. Kalal et al. proposed Tracking-Learning-Detection (TLD) [7,8], which trains a Random Forest classifier with structural (P-N) constraints so that the target can be re-detected after a tracking failure. Babenko et al. addressed the drift caused by imprecise positive samples by combining AdaBoost-style boosting with Multiple Instance Learning (MIL) [9]. More recently, convolutional neural networks (CNNs) have been applied to visual tracking [10-12], but they typically rely on large numbers of training samples and offline learning. In this paper, an initially untrained CNN is trained online using only samples collected from the test sequence, and a color-based loss term keeps incorrectly labeled samples from corrupting the model. The network used for tracking is shown in Fig. 1.

Fig. 1. Architecture of CNN for visual tracking

The proposed tracker is built on the CNN shown in Fig. 1, which consists of convolutional layers followed by fully connected layers. Each input image patch is resized to 32x32x3. The first convolutional layer applies eight 3x3 filters, producing a 30x30x8 feature map; this map is reduced to 15x15x8 and then to 7x7x8, which is flattened into a 392-dimensional feature vector. The vector passes through hidden layers of 100 and 80 nodes, and the output layer decides whether the patch belongs to the target or to the background.

Training samples are collected directly from the sequence being tracked. Bounding boxes that closely overlap the tracked object are labeled as positive patches, and boxes drawn from the surrounding background are labeled as negative patches, as illustrated in Fig. 2.

Fig. 2. Positive and negative bounding boxes obtained from a tracked object (red: positive image patch, blue: negative image patch)
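The following PyTorch sketch instantiates the layer sizes listed above; it is an illustration, not the authors' implementation. The text only gives the intermediate sizes (30x30x8, 15x15x8, 7x7x8), so the step that reduces the 15x15 map to 7x7 is assumed here to be a second 2x2 max-pool, and ReLU activations are assumed throughout.

```python
# Minimal sketch of the CNN described above (assumptions noted in comments).
import torch
import torch.nn as nn

class TrackingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3),  # 32x32x3 -> 30x30x8
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 30x30x8 -> 15x15x8
            nn.MaxPool2d(2),                 # 15x15x8 -> 7x7x8 (assumed; the text only lists the sizes)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                    # 7 * 7 * 8 = 392
            nn.Linear(392, 100), nn.ReLU(),  # hidden layer with 100 nodes
            nn.Linear(100, 80), nn.ReLU(),   # hidden layer with 80 nodes
            nn.Linear(80, 2),                # 2-class output: target / background
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    net = TrackingCNN()
    out = net(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 2])
```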

Positive samples are generated by shifting the current target bounding box by small horizontal and vertical offsets, so that each positive patch still covers most of the object, while negative samples are taken from boxes placed on the background around the target. A fixed number of positive and negative patches is collected in every frame and used to update the CNN.

Before being fed to the CNN, every patch is whitened (Eq. (1)); Fig. 3 shows patches before and after the transformation. Whitening removes brightness and contrast differences between patches, which makes online training with few samples more stable.

Because the classifier is updated with samples that it labeled itself, a wrong update can cause drift, a problem also addressed by multiple instance learning [9] and by MeanShift-based tracking [13]. To make the update robust, a probability map BP based on the color distributions of the target and the background is computed: a positive map BP_p and a negative map BP_n are built, and from them integral probability maps P_p and P_n are obtained by accumulation, which allows the probability mass inside any rectangular region to be evaluated efficiently. Fig. 4 illustrates the procedure.

Fig. 3. Whitening transformation (top row: original images, bottom row: transformed images)
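The exact form of the whitening transform in Eq. (1) is not preserved in this transcription. The sketch below assumes the common zero-mean, unit-variance per-patch normalization as a stand-in; whiten_patch is an illustrative helper name.

```python
# Hypothetical per-patch whitening, assuming the common zero-mean / unit-variance form;
# the exact Eq. (1) of the paper is not preserved here.
import numpy as np

def whiten_patch(patch: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize an HxWxC image patch to zero mean and unit variance."""
    patch = patch.astype(np.float32)
    return (patch - patch.mean()) / (patch.std() + eps)

if __name__ == "__main__":
    p = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
    w = whiten_patch(p)
    print(w.mean(), w.std())  # approximately 0 and 1
```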

Fig. 4. Procedure of generating the Integral Probability Map

The CNN is trained by minimizing a loss defined over the collected samples (Eqs. (2)-(6)). The classification part is a cross-entropy term over the 2-class output; for $n$ training patches it takes the standard form $L_{ce} = -\frac{1}{n}\sum_{i=1}^{n}\left[\, t_i \ln y_i + (1 - t_i)\ln(1 - y_i) \,\right]$, where $t_i$ is the label of the i-th patch and $y_i$ is the target probability predicted by the CNN. A second term based on the color probability of each sample is added so that patches whose color distribution does not match the tracked object contribute less to the update; this keeps wrongly labeled samples from corrupting the model. Starting from the ground-truth bounding box given in the first frame, the network is updated online for a small number of epochs whenever new samples are collected.
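Since Eqs. (2)-(6) are not preserved, the snippet below is only a hypothetical sketch of a color-weighted update: each positive sample is down-weighted when its color histogram differs from the target's. The Bhattacharyya coefficient and all function names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical color-weighted cross-entropy update (assumed form, not the paper's Eqs. (2)-(6)).
import numpy as np
import torch
import torch.nn.functional as F

def color_histogram(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized joint RGB histogram of an 8-bit HxWx3 patch."""
    h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def bhattacharyya(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

def weighted_loss(logits: torch.Tensor, labels: torch.Tensor,
                  patches: list, target_hist: np.ndarray) -> torch.Tensor:
    """Cross-entropy where each positive sample is weighted by its color similarity
    to the target histogram; negative samples keep weight 1."""
    sims = torch.tensor([bhattacharyya(color_histogram(p), target_hist) for p in patches],
                        dtype=torch.float32)
    weights = torch.where(labels == 1, sims, torch.ones_like(sims))
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()
```

In use, labels would be a LongTensor of 0/1 patch labels, patches the corresponding 8-bit color patches, and target_hist the histogram of the ground-truth patch from the first frame.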

To track the object in a new frame, candidate image patches are sampled within a search range centered on the previous target position, as shown in Fig. 5. Each candidate is classified by the CNN, and the resulting target probabilities form a 2D score map over the search range. Candidates whose score is close to 1.0 are treated as likely target locations, and the new target position is taken as the center of gravity of these high-scoring locations. Fig. 6 shows example input images together with their score maps.

Fig. 5. Object search range (yellow) and candidate image patch (blue)
Fig. 6. Input images and score maps
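A minimal sketch of this localization step is given below, assuming a dense sliding-window scan; the search radius, stride, and 0.9 score threshold are illustrative values, and net can be any 32x32-input two-class classifier such as the TrackingCNN sketch above.

```python
# Hypothetical localization sketch: scan candidate patches over a search window,
# build a score map from the CNN's target probabilities, and take the center of
# gravity of the high-scoring positions.
import cv2                      # used only to resize patches to the CNN input size
import numpy as np
import torch

def preprocess(patch: np.ndarray) -> torch.Tensor:
    """Resize to 32x32 and normalize to zero mean / unit variance (as in training)."""
    p = cv2.resize(patch, (32, 32)).astype(np.float32)
    p = (p - p.mean()) / (p.std() + 1e-6)
    return torch.from_numpy(p).permute(2, 0, 1)        # HWC -> CHW

def localize(frame, net, prev_box, search_radius=30, stride=2, threshold=0.9):
    """frame: HxWx3 uint8 image; prev_box: (x, y, w, h) of the previous target."""
    x0, y0, w, h = prev_box
    positions, patches = [], []
    for dy in range(-search_radius, search_radius + 1, stride):
        for dx in range(-search_radius, search_radius + 1, stride):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or x + w > frame.shape[1] or y + h > frame.shape[0]:
                continue
            patches.append(preprocess(frame[y:y + h, x:x + w]))
            positions.append((x, y))
    with torch.no_grad():
        probs = torch.softmax(net(torch.stack(patches)), dim=1)[:, 1].numpy()
    good = probs >= threshold                          # keep high-scoring candidates
    if not good.any():
        good = probs == probs.max()                    # fall back to the single best score
    pts = np.asarray(positions, dtype=np.float32)[good]
    wts = probs[good]
    cx, cy = (pts * wts[:, None]).sum(axis=0) / wts.sum()   # center of gravity
    return int(round(cx)), int(round(cy)), w, h
```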

Fig. 7 compares the score map and the tracking result obtained with and without the whitening step.

Fig. 7. Difference in score map and tracking result by whitening: (a) score map, (b) tracking result

The proposed tracker was evaluated on sequences from the CVPR2013 Visual Tracker Benchmark [14]. Fig. 8 shows tracking results of the proposed method on the test sequences.

Fig. 8. Experimental results of proposed method

Fig. 9. Tracking distance error

Fig. 9 plots, for each test sequence, the distance between the tracked position and the ground-truth position over time. The test sequences are Boy, Shaking, Mhyang, Car4, Couple, and David2. The proposed method is compared with four trackers: CFP (Color-based Probabilistic Tracking [1]), MIL (Multiple Instance Learning [9]), OAB (On-line AdaBoost [15]), and TLD (Tracking-Learning-Detection [8]). The distance error is measured as the L2-norm between the tracked center and the ground-truth center, and Table 1 summarizes its mean value for each sequence.

Table 1. Mean value of distance error (pixels)

Sequence   Proposed method   CFP        MIL        OAB         TLD
Boy        2.4725            2.0587     2.3752     1.0126      2.6546
Shaking    7.0906            60.9270    13.0536    113.7921    1.7704
Mhyang     3.3691            8.4343     2.5483     1.4461      1.6057
Car4       13.6391           20.5605    23.9314    67.9511     11.6711
Couple     7.2579            5.1237     9.6042     33.6817     2.1000
David2     4.6815            3.6372     5.1816     18.3104     2.2713

On Shaking and Car4 the proposed method keeps a small error while CFP and OAB drift severely; only TLD achieves a lower error on these sequences. On Mhyang the error of CFP is the largest, and the proposed method remains close to MIL, OAB, and TLD. On Couple and David2 the proposed method again yields moderate errors, whereas OAB drifts. In addition to the distance error, tracking precision is measured as the fraction of frames whose center error is below a threshold, with the threshold varied from 0 to 50 pixels; the resulting precision curves are shown in Fig. 10.

Fig. 10. Tracking precision
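For reference, the two evaluation measures used here (mean center distance error and precision over thresholds from 0 to 50 pixels) can be computed as in the short sketch below; variable names are illustrative.

```python
# Evaluation metrics: per-frame center-location error (L2-norm to the ground truth)
# and the precision curve over thresholds 0..50 pixels.
import numpy as np

def center_errors(tracked: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Per-frame Euclidean distance between tracked and ground-truth centers (Nx2 arrays)."""
    return np.linalg.norm(tracked - ground_truth, axis=1)

def precision_curve(errors: np.ndarray, max_threshold: int = 50) -> np.ndarray:
    """Fraction of frames with center error <= t, for t = 0..max_threshold pixels."""
    return np.array([(errors <= t).mean() for t in np.arange(max_threshold + 1)])

if __name__ == "__main__":
    errs = center_errors(np.random.rand(100, 2) * 5, np.zeros((100, 2)))
    print(errs.mean())            # mean distance error, as reported in Table 1
    print(precision_curve(errs))  # precision values of the kind plotted in Fig. 10
```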

The precision curves show the same tendency. On Boy every method performs well. On Shaking and Car4 the proposed method maintains high precision where CFP and OAB fail, and on Mhyang CFP falls behind MIL, OAB, and TLD. OAB shows the lowest precision on most of the sequences.

In summary, we proposed an online object tracking method based on a CNN that requires no offline pre-training: the network is trained only with samples collected from the test sequence itself, and a loss term based on color information prevents wrongly labeled samples from corrupting the model. Experiments on benchmark sequences show that the tracking performance is equivalent to or better than that of the four compared methods.

References
[1] P. Perez, C. Hue, J. Vermaak, and M. Gangnet, "Color-Based Probabilistic Tracking," Computer Vision - ECCV, pp. 661-675, 2002.
[2] B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," Int. Joint Conf. on Artificial Intelligence, pp. 674-679, Aug. 1981.
[3] C. Tomasi and T. Kanade, "Detection and Tracking of Point Features," Technical Report CMU-CS-91-132, 1991.
[4] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-Based Object Tracking," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 5, pp. 564-577, May 2003.
[5] K. Lee, S. Ryu, S. Lee, and K. Park, "Motion based object tracking with mobile camera," Electronics Letters, Vol. 34, No. 3, pp. 256-258, 1998.
[6] Y. Wu, J. Lim, and M. H. Yang, "Object Tracking Benchmark," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 37, No. 9, pp. 1834-1848, Sep. 2015.
[7] Z. Kalal, J. Matas, and K. Mikolajczyk, "P-N Learning: Bootstrapping Binary Classifiers by Structural Constraints," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 49-56, 2010.
[8] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-Learning-Detection," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 34, No. 7, pp. 1409-1422, July 2012.
[9] B. Babenko, M. Yang, and S. Belongie, "Robust Object Tracking with Online Multiple Instance Learning," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 33, No. 8, pp. 1619-1632, Aug. 2011.
[10] H. Li, Y. Li, and F. Porikli, "DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking," IEEE Trans. on Image Processing, Vol. 25, No. 4, pp. 1834-1848, April 2016.
[11] K. Zhang, Q. Liu, and M. Yang, "Robust Visual Tracking via Convolutional Networks Without Training," IEEE Trans. on Image Processing, Vol. 25, No. 4, pp. 1779-1792, April 2016.
[12] X. Zhou, L. Xie, P. Zhang, and Y. Zhang, "An Ensemble of Deep Neural Networks for Object Tracking," IEEE Conf. on Image Processing, pp. 843-847, 2014.
[13] D. Comaniciu, V. Ramesh, and P. Meer, "Real-Time Tracking of Non-Rigid Objects using Mean Shift," IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 2, pp. 142-149, 2000.
[14] Y. Wu, J. Lim, and M. H. Yang, "Online object tracking: A benchmark," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2411-2418, 2013.
[15] H. Grabner, C. Leistner, and H. Bischof, "Semi-supervised On-Line Boosting for Robust Tracking," British Machine Vision Conf., Vol. 1, No. 5, pp. 6, 2006.

Jong In Gil
- Aug. 2010
- Aug. 2012
- Sep. 2012 ~ present: Department of Computer and Communications Engineering, Kangwon National University

Manbae Kim
- 1983
- 1986: University of Washington, Seattle
- 1992: University of Washington, Seattle
- 1992 ~ 1998
- 1998 ~ present: Department of Computer and Communications Engineering, Kangwon National University
- 2016 ~ present
- ORCID: http://orcid.org/0000-0002-4702-8276
- Research interests: 3D