
Real-Time Camera Tracking for Markerless Augmented Reality

Juhyun Oh a),b) and Kwanghoon Sohn b)

a) KBS Technical Research Institute, KBS
b) School of Electrical and Electronic Engineering, Yonsei University
Corresponding author: Kwanghoon Sohn (khsohn@yonsei.ac.kr)

Abstract

We propose a real-time tracking algorithm for an augmented reality (AR) system for TV broadcasting. The tracking is initialized by detecting the object with the SURF algorithm. A multi-scale approach is used for stable real-time camera tracking. Normalized cross correlation (NCC) is used to find the patch correspondences, to cope with the unknown and changing lighting conditions. Since a zooming camera is used, the focal length has to be estimated online. Experimental results show that the focal length of the camera is properly estimated with the proposed online calibration procedure.

Keywords: Machine vision, real time systems, tracking, TV cameras

I. Introduction

Augmented reality (AR) overlays virtual objects on images of the real world and is widely used in human-computer interaction (HCI) and visual servoing applications. Model-based 3D tracking of rigid objects is surveyed by Lepetit [1]. Park [2] estimates the six-degree-of-freedom camera pose by tracking by detection. The existing K-vision system, shown in Fig. 1, runs on a PC and relies on binarization of artificial markers and estimation of their orientation.

Fig. 1. K-vision camera tracking screen

Drummond [3] tracks complex 3D objects in real time by fitting the lines and edges of a 3D model to the image of the target. When the target is planar, it can be tracked with a homography (perspective transform) [4], but chaining frame-to-frame homographies suffers from drifting, as illustrated in Fig. 2. Vacchetti [5] combines online and offline information to suppress the jitter and drift of homography chaining and to stabilize the estimated pose. Wang [6] formulates 3D pose tracking in a Bayesian framework. Comport [7] tracks moving edges within a virtual visual servoing framework, and Pressigout [8] proposes a hybrid tracker that combines edge and texture information. D'Fusion [9] is a commercial markerless AR solution.

Fig. 2. Drift in a homography chaining experiment. (a) The object image. (b) One frame of the input video. (c) The object image transformed by the chained homography.

Another line of work treats keypoint matching as a classification problem: keypoints are trained on an affine-warped training set and classified with randomized trees or ferns, as proposed by Lepetit [10] and extended in [11,12]. These methods require an offline training phase. Wagner [13,14] adapted SIFT (scale invariant feature transform) [15] and ferns [11] to run on mobile phones, and Yu [16] tracks the six-degree-of-freedom camera pose in real time on a PC CPU. Unlike Wagner [13,14] and most existing approaches, which assume fixed camera intrinsics, the proposed method targets a broadcast zoom camera and therefore estimates the focal length online.

The remainder of this paper is organized as follows. Section II describes the detection stage that initializes tracking, and Section III describes the frame-to-frame tracking and the online calibration. Experimental results are given in Section IV, and Section V concludes the paper.

II. Object Detection

Among the widely used local feature methods such as SIFT [15], ferns [11], and SURF [17], we use SURF to detect the target object and initialize the tracking. In the example of Fig. 3, 664 SURF features are detected in the input image and 34 of them are matched to the object. Features are detected at local maxima of the Hessian determinant [17], and only the strongest features, 50 per object pattern, are kept for matching.

Fig. 3. The object detected by SURF. (a) The feature correspondences and the obtained object. (b) The object features. (c) The object pattern warped by the estimated homography.
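As a rough sketch of this detection stage (an OpenCV illustration, not the authors' implementation), the following code detects SURF features on a reference object image and on a camera frame, matches them with a ratio test, and estimates the object-to-image homography with RANSAC. The file names, the Hessian threshold of 400 and the 0.7 ratio value are placeholder assumptions, and SURF requires an OpenCV build that includes the xfeatures2d contrib module.

    # Detection stage sketch: SURF features, ratio-test matching, RANSAC homography.
    import cv2
    import numpy as np

    obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)     # reference pattern
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # current camera frame

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_obj, des_obj = surf.detectAndCompute(obj, None)
    kp_frm, des_frm = surf.detectAndCompute(frame, None)

    # Nearest-neighbour matching with a ratio test to discard ambiguous matches.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_obj, des_frm, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    # Homography from the object plane to the image, estimated robustly with RANSAC [23].
    src = np.float32([kp_obj[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    print("inliers:", int(inlier_mask.sum()))

The homography obtained in this way is the H that the online calibration described next decomposes into the intrinsic and extrinsic camera parameters.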

Since a zoom lens is used, the focal length changes during shooting. Zoom lens calibration is normally performed offline and stored as a lookup table [18,19], which is impractical for live broadcast production, so the focal length is estimated online while tracking. The estimation is based on Zhang's plane-based calibration [20] and uses the 3x3 homography H obtained from the SURF correspondences. Since the object is planar, its points satisfy Z = 0 and are mapped by H as

    s [x, y, 1]^T = H [X, Y, 1]^T,

while the projection equation gives, for the same points,

    s [x, y, 1]^T = K [R t] [X, Y, Z, 1]^T = K [r_1 r_2 r_3 t] [X, Y, 0, 1]^T = K [r_1 r_2 t] [X, Y, 1]^T.

The skew of K is assumed to be zero (s = 0), the aspect ratio to be one (a = 1), and the principal point to be known, so the focal length f is the only unknown intrinsic parameter. Combining the two equations,

    H = [h_1 h_2 h_3] = λ K [r_1 r_2 t],

where λ is a scale factor. Because r_1 and r_2 are orthonormal columns of a rotation matrix (r_1^T r_2 = 0 and r_1^T r_1 = r_2^T r_2), the columns of H satisfy

    (K^{-1} h_1)^T (K^{-1} h_2) = h_1^T ω h_2 = 0,            (5)
    h_1^T ω h_1 = h_2^T ω h_2,                                (6)

where ω = K^{-T} K^{-1} is the image of the absolute conic [21]. Since ω is symmetric, it is represented by the 6-vector b = [ω_11, ω_12, ω_22, ω_13, ω_23, ω_33]^T, and stacking (5) and (6) gives the homogeneous linear system

    V b = 0,

where each row of the 6-column matrix V is built from the entries of H. Solving for b yields ω and therefore K.
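To make this step concrete, here is a minimal numerical sketch, under the same assumptions as above (zero skew, unit aspect ratio, known principal point), in which constraints (5) and (6) become linear in 1/f^2 and the focal length is recovered from a single homography. The function name, the synthetic homography and the 640x480 principal point are illustrative assumptions rather than values from the paper.

    import numpy as np

    def focal_from_homography(H, cx, cy):
        """Recover f from one plane-to-image homography; K = diag(f, f, 1) after
        shifting by the known principal point (cx, cy)."""
        T = np.array([[1.0, 0.0, -cx],
                      [0.0, 1.0, -cy],
                      [0.0, 0.0, 1.0]])
        Hn = T @ H
        h1, h2 = Hn[:, 0], Hn[:, 1]
        # With omega = diag(1/f^2, 1/f^2, 1), (5) and (6) are linear in 1/f^2:
        #   (h11*h12 + h21*h22) / f^2 + h31*h32 = 0
        #   (h11^2 + h21^2 - h12^2 - h22^2) / f^2 + (h31^2 - h32^2) = 0
        A = np.array([h1[0] * h2[0] + h1[1] * h2[1],
                      h1[0]**2 + h1[1]**2 - h2[0]**2 - h2[1]**2])
        b = -np.array([h1[2] * h2[2],
                       h1[2]**2 - h2[2]**2])
        inv_f2 = (A @ b) / (A @ A)          # least-squares solution for 1/f^2
        return 1.0 / np.sqrt(inv_f2)

    # Self-check with a synthetic H = K [r1 r2 t] built from a known focal length.
    f_true, cx, cy = 1200.0, 320.0, 240.0
    K = np.array([[f_true, 0.0, cx], [0.0, f_true, cy], [0.0, 0.0, 1.0]])
    ang = np.deg2rad(30.0)                           # a tilted plane; a fronto-parallel
    R = np.array([[np.cos(ang), 0.0, np.sin(ang)],   # plane would be the degenerate
                  [0.0,         1.0, 0.0],           # case noted in Section V
                  [-np.sin(ang), 0.0, np.cos(ang)]])
    t = np.array([0.1, -0.2, 3.0])
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    print(focal_from_homography(H, cx, cy))          # prints ~1200.0

The paper stacks the same two constraints into the linear system V b = 0; the sketch above merely specialises that system to the single unknown 1/f^2.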

Once K is known, the extrinsic parameters are recovered from the columns of H:

    r_1 = λ K^{-1} h_1,   r_2 = λ K^{-1} h_2,   r_3 = r_1 × r_2,   t = λ K^{-1} h_3,

which gives the initial position and orientation of the camera. The patch correspondences needed for this estimation are found with normalized cross correlation (NCC).

III. Camera Tracking

For every frame, the patches of the object pattern are warped into the predicted view and matched against the input image. Unlike [13,14], which approximate the patch deformation with an affine transform, we warp the patches with the full perspective transformation. As the matching measure, the SAD (sum of absolute differences) or a cross correlation could be used; we use NCC because it is insensitive to the unknown and changing lighting.

The camera parameters are then refined with Levenberg-Marquardt nonlinear least squares optimization [22] over the rotation r, the translation t and the focal length f:

    {r, t, f} = arg min_θ Σ_{i=1}^{N} ρ( Proj(m_i, θ) - x_i ),   θ = {r, t, f},

where the m_i are the N model points, the x_i are their image positions found by NCC, and ρ is the Geman-McClure M-estimator (Fig. 4), which suppresses the influence of outliers. Because the matching relies on image patches rather than feature descriptors, it remains fast enough for real-time operation.
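The following sketch shows one way (an assumption, not the authors' code) to implement this refinement with SciPy: the rotation (as a Rodrigues vector), the translation and the focal length are refined by minimising Geman-McClure-weighted reprojection residuals. SciPy applies a robust loss only with its 'trf' solver, so the sketch deviates from the Levenberg-Marquardt solver [22] used in the paper; the model points M, the matched image points x_img and the initial parameters theta0 are assumed to come from the detection and NCC matching stages.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(theta, M, cx, cy):
        """Project planar model points M (N x 3, Z = 0) with theta = (rvec, t, f)."""
        rvec, t, f = theta[:3], theta[3:6], theta[6]
        Xc = Rotation.from_rotvec(rvec).apply(M) + t       # camera coordinates
        return np.column_stack((f * Xc[:, 0] / Xc[:, 2] + cx,
                                f * Xc[:, 1] / Xc[:, 2] + cy))

    def geman_mcclure(z):
        """Robust loss for SciPy: rho(z), rho'(z), rho''(z) for z = residual**2."""
        rho = z / (1.0 + z)
        drho = 1.0 / (1.0 + z) ** 2
        ddrho = -2.0 / (1.0 + z) ** 3
        return np.vstack((rho, drho, ddrho))

    def refine(theta0, M, x_img, cx, cy):
        resid = lambda th: (project(th, M, cx, cy) - x_img).ravel()
        return least_squares(resid, theta0, loss=geman_mcclure, method="trf").x

The robust loss keeps individual mismatched patches from pulling the pose away, playing the same role as the M-estimator of Fig. 4; the residuals are assumed to be pre-scaled so that a unit Geman-McClure scale is appropriate.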

Fig. 4. The Geman-McClure function used as an M-estimator

Fig. 5. The algorithm outline

Fig. 5 outlines the whole algorithm. At each frame the object pattern is warped with the predicted homography and downsampled, the patches are matched by NCC, the homography H is estimated robustly with RANSAC [23], and finally the camera parameters are refined by the nonlinear optimization described above.
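As an illustration of the NCC matching step in this outline (a sketch under assumed patch and search-window sizes, not the authors' implementation), the snippet below warps the reference image with the predicted homography, cuts out one patch around a model point, and localises it in a small search window of the current frame with OpenCV's normalized cross correlation.

    import cv2
    import numpy as np

    def ncc_match(obj_img, frame, H_pred, centre, patch=15, search=31):
        """Refine the image position of one model patch centre (x, y) by NCC.
        Border handling is omitted for brevity."""
        # Full perspective warp of the reference image into the predicted view.
        warped = cv2.warpPerspective(obj_img, H_pred,
                                     (frame.shape[1], frame.shape[0]))
        cx, cy = cv2.perspectiveTransform(
            np.float32([[centre]]), H_pred)[0, 0].astype(int)
        r, s = patch // 2, search // 2
        template = warped[cy - r:cy + r + 1, cx - r:cx + r + 1]
        window = frame[cy - s:cy + s + 1, cx - s:cx + s + 1]
        # Normalized cross correlation is insensitive to brightness and contrast
        # changes, which is why it copes with unknown studio lighting.
        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(score)
        return (cx - s + loc[0] + r, cy - s + loc[1] + r), best

Matches whose score falls below the rejection threshold are discarded before the homography is re-estimated with RANSAC, as described below.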

The camera parameters r, t and f obtained from the optimization above are used to predict the patch positions in the next frame, where the patches are matched again by NCC. Matches whose NCC score falls below a threshold are discarded (threshold = 90).

IV. Experimental Results

The proposed method is compared with the approach of Wagner [13,14]. Fig. 6 shows the camera tracking results together with the RANSAC inliers. With the method of Wagner et al. (Fig. 6(a)) a noticeable residual error remains, whereas with the proposed online calibration (Fig. 6(b)) the pattern is registered correctly. Fig. 7 plots the residual error of both methods over the test sequence. On the test PC the whole pipeline runs at 38.7 Hz for 640x480 input.

Fig. 6. Camera tracking results. (a) Wagner et al. (b) The proposed online camera calibration

Fig. 7. Residual error

As Table 1 shows, the projective warping takes most of the time in the current implementation; a considerable improvement is expected by moving it to the GPU, for example with OpenGL.

Table 1. Computation time for the tracking module

    Procedure                                          Time (ms)
    Projective warping                                 9.05
    Downsampling                                       0.99
    NCC matching                                       6.9
    Homography estimation using RANSAC                 4.0
    Nonlinear optimization of the camera parameters    5.50
    Total                                              25.84

V. Conclusion

We proposed a camera tracking algorithm that does not rely on artificial markers and is therefore suitable for broadcast augmented reality production. Camera tracking is initialized by detecting the object with SURF, and a multi-scale structure enables stable real-time tracking. Normalized cross correlation is used to match features under unknown lighting conditions. Unlike most existing AR camera tracking studies, which assume a desktop webcam environment, this paper proposes an online camera calibration method so that a zoom camera can be used. The experimental results show that the focal length of the camera is accurately estimated by the proposed online calibration method and that the whole process runs in real time. However, when the pattern is perpendicular to the optical axis of the camera, a change of the focal length and a translation of the pattern
cannot be distinguished from each other, which is a degenerate case of the online calibration. Future work includes handling such degenerate configurations, combining the patch matching with optical flow, and extending the system to HD (high definition) video.

References

[1] V. Lepetit and P. Fua, Monocular Model-based 3D Tracking of Rigid Objects, Now Publishers Inc., 2005.
[2] Park et al. (in Korean).
[3] T. Drummond and R. Cipolla, "Real-time visual tracking of complex structures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 932-946, Jul. 2002.
[4] G. Simon, A. Fitzgibbon, and A. Zisserman, "Markerless tracking using planar structures in the scene," Int. Symposium on Augmented Reality (ISAR), pp. 120-128, Oct. 2000.
[5] L. Vacchetti, V. Lepetit, and P. Fua, "Stable real-time 3D tracking using online and offline information," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1385-1391, 2004.
[6] Q. Wang, W. W. Zhang, X. O. Tang, and H. Y. Shum, "Real-time Bayesian 3-D pose tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 12, pp. 1533-1541, Dec. 2006.
[7] A. I. Comport, E. Marchand, M. Pressigout, and F. Chaumette, "Real-time markerless tracking for augmented reality: The virtual visual servoing framework," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 4, pp. 615-628, 2006.
[8] M. Pressigout and E. Marchand, "Real-time hybrid tracking using edge and texture information," International Journal of Robotics Research, vol. 26, no. 7, pp. 689-713, Jul. 2007.
[9] "Total Immersion: Augmented reality software solutions with D'Fusion," http://www.t-immersion.com/, 2010.
[10] V. Lepetit and P. Fua, "Keypoint recognition using randomized trees," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 9, pp. 1465-1479, 2006.
[11] M. Özuysal, P. Fua, and V. Lepetit, "Fast keypoint recognition in ten lines of code," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2007.
[12] Z. Chen and X. Li, "Markerless tracking based on natural feature for augmented reality," Int. Conf. on Educational and Information Technology (ICEIT), Sep. 2010.
[13] D. Wagner, G. Reitmayr, A. Mulloni, T. Drummond, and D. Schmalstieg, "Pose tracking from natural features on mobile phones," Int. Symposium on Mixed and Augmented Reality, pp. 125-134, 2008.
[14] D. Wagner, G. Reitmayr, A. Mulloni, T. Drummond, and D. Schmalstieg, "Real-time detection and tracking for augmented reality on mobile phones," IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 3, pp. 355-368, May-Jun. 2010.
[15] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[16] J. Yu, J. Kim, H. Kim, I. Choi, and I. Jeong, "Real-time camera tracking for augmented reality," Int. Conf. on Advanced Communication Technology (ICACT), Feb. 2009.
[17] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, Jun. 2008.
[18] J. Oh and K. Sohn, "Semiautomatic zoom lens calibration based on the camera's rotation," SPIE Journal of Electronic Imaging, vol. 20, no. 2, Apr.-Jun. 2011.
[19] W. Liu, Y. Wang, J. Chen, and J. Guo, "An efficient zoom tracking method for pan-tilt-zoom camera," IEEE Int. Conf. on Computer Science and Information Technology (ICCSIT), pp. 536-540, Jul. 2010.
[20] Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[21] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
[22] M. Lourakis, "A brief description of the Levenberg-Marquardt algorithm implemented by levmar," 2005.
[23] M. Fischler and R. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.

Juhyun Oh
- 1997: B.S. degree
- 1999: M.S. degree
- 1999 - present: KBS Technical Research Institute

Kwanghoon Sohn
- 1983: B.S. degree
- 1985: M.S.S.E., University of Minnesota
- 1992: Ph.D., North Carolina State University
- 1994: Post-Doctoral Fellow, Georgetown University
- 1995 - present: Professor, School of Electrical and Electronic Engineering, Yonsei University
- Sep. 2002 - Aug. 2003: Visiting Professor, Nanyang Technological University