ARToolKit as Starting Point

Augmented Reality AR Library - ARToolKit Tutorials
박종승, Dept. of CSE, Univ. of Incheon
jong@incheon.ac.kr
http://ecl.incheon.ac.kr/

ARToolKit - Free tracking library
- Library for vision-based AR applications
- Open source (C language), multi-platform (SGI IRIX, PC Linux, PC Windows)
- Overlays 3D virtual objects on real markers
- Uses a single tracking marker
- Determines camera pose information (6 DOF)
- Includes utilities for marker-based interaction
- ARToolKit website: http://www.hitl.washington.edu/artoolkit/

Limitations
- Uses a monocular camera setup, so 3D measurement is not possible
- Can be sensitive to lighting conditions and to the marker material

Installation

Requirements
- QuickCam (a Video For Windows driver must be supported)
- GLUT OpenGL interface library: GLUT 3.6 or later; free download from http://reality.sgi.com/opengl/glut3/

Installing
- Unpack the archive into the desired ARToolKit directory
- Subdirectories: bin, examples, include, lib, patterns, util

Preparation
- Print the patterns/pattXXX.pdf files and attach them to thin, rigid cards

Running
- Run bin/simpleTest.exe or bin/simpleTest2.exe
- Detection can be sensitive to lighting: adjust the threshold value (default 100) within the range 0~255
- Press the ESC key to quit (frame rate statistics are printed)

Introduction

Stages: input video -> thresholded video -> virtual overlay
1. Convert the live video image into a binary image (using a lighting threshold value)
2. Search the binary image for all square regions
3. Capture the pattern inside each square region and match it against the pre-trained pattern templates to decide whether it is a real marker
4. Once a marker is found, compute the position of the real video camera (relative to the marker) using the known square size and pattern orientation
5. Once the real camera position is computed, virtual objects can be drawn on the image
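As a preview of how these five stages map onto the library, here is a hedged per-frame sketch using the ARToolKit calls covered later in these slides; thresholding, square detection, and template matching all happen inside arDetectMarker. The globals thresh, patt_center, patt_width, and the draw() function are the ones defined later in the sample application, and error handling is kept minimal.

#include <AR/ar.h>
#include <AR/video.h>

/* One tracking iteration, mirroring the five stages above (a sketch;
   thresh, patt_center, patt_width, and draw() are defined later in
   these slides). */
extern int    thresh;
extern double patt_center[2], patt_width;
extern void   draw( double trans[3][4] );

void track_one_frame(void)
{
    ARUint8      *dataPtr;
    ARMarkerInfo *marker_info;
    int           marker_num;
    double        patt_trans[3][4];

    /* input stage: grab the live video image */
    if ((dataPtr = (ARUint8 *)arVideoGetImage()) == NULL) return;

    /* stages 1-3: binarize, find square regions, match templates */
    if (arDetectMarker(dataPtr, thresh, &marker_info, &marker_num) < 0) return;

    /* stage 4: camera position relative to the first detected marker */
    if (marker_num > 0) {
        arGetTransMat(&marker_info[0], patt_center, patt_width, patt_trans);
        /* stage 5: draw the virtual objects using patt_trans */
        draw(patt_trans);
    }
}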

System Flow

ARToolKit Applications

Tangible Interaction
- ARToolKit supports physically based interaction
- Face-to-face collaboration, remote conferencing

ARToolKit Outline

1. Mathematical & Algorithm Background
   - Pose & Position Estimation
   - Rectangle Extraction
2. Implementation
   - Camera Calibration
   - Pose Estimation
   - Background Video Stream Display
   - Coordinate Systems

Pose & Position Estimation

Coordinate Transformations (1/3)

1. Relation between marker and camera (rotation & translation):

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R_{11} & R_{12} & R_{13} & T_1 \\ R_{21} & R_{22} & R_{23} & T_2 \\ R_{31} & R_{32} & R_{33} & T_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_M \\ Y_M \\ Z_M \\ 1 \end{bmatrix} = T_{CM} \begin{bmatrix} X_M \\ Y_M \\ Z_M \\ 1 \end{bmatrix}$$

2. Relation between camera and ideal screen coordinates (perspective projection):

$$\begin{bmatrix} h x_I \\ h y_I \\ h \end{bmatrix} = \begin{bmatrix} s f_x & 0 & x_{c0} & 0 \\ 0 & s f_y & y_{c0} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = C \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$

C: camera parameters

Coordinate Transformations (2/3)

3. Relation between ideal and observed screen coordinates (image distortion parameters):

$$d^2 = (x_I - x_0)^2 + (y_I - y_0)^2, \qquad p = 1 - f d^2$$
$$x_O = p (x_I - x_0) + x_0, \qquad y_O = p (y_I - y_0) + y_0$$

(x_0, y_0): center coordinates of distortion; f: distortion factor

Coordinate Transformations (3/3)

4. Scaling parameters for size adjustment
- Implementation of the image distortion parameters

Pose & Position Estimation (1/2)

What is pose & position estimation?
1. Marker coordinates: (X_M, Y_M, Z_M)
2. Camera coordinates (known if T_CM is given)
3. Ideal screen coordinates (known)
4. Observed screen coordinates: (x_O, y_O) (known)
How to get T_CM?
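To make the distortion model concrete, here is a minimal C sketch of the ideal-to-observed mapping defined by the formulas above. The function name and the numeric values in main() are illustrative, not part of ARToolKit.

#include <stdio.h>

/* Ideal -> observed screen coordinates, following the model above:
   d^2 = (x_I - x_0)^2 + (y_I - y_0)^2,  p = 1 - f*d^2,
   x_O = p*(x_I - x_0) + x_0,  y_O = p*(y_I - y_0) + y_0.
   (x0, y0): center of distortion, f: distortion factor. */
static void ideal2observed(double xI, double yI,
                           double x0, double y0, double f,
                           double *xO, double *yO)
{
    double d2 = (xI - x0) * (xI - x0) + (yI - y0) * (yI - y0);
    double p  = 1.0 - f * d2;
    *xO = p * (xI - x0) + x0;
    *yO = p * (yI - y0) + y0;
}

int main(void)
{
    double xO, yO;
    /* illustrative values: a point near the corner of a 640x480 image */
    ideal2observed(600.0, 400.0, 320.0, 240.0, 1.0e-7, &xO, &yO);
    printf("observed = (%.2f, %.2f)\n", xO, yO); /* pulled toward the center */
    return 0;
}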

Pose & Position Estimation (2/2)

How to get T_CM? Search for T_CM by minimizing the reprojection error (iterative optimization):

$$\begin{bmatrix} h \hat{x}_i \\ h \hat{y}_i \\ h \end{bmatrix} = C\, T_{CM} \begin{bmatrix} X_{Mi} \\ Y_{Mi} \\ Z_{Mi} \\ 1 \end{bmatrix}, \quad i = 1,2,3,4$$

$$err = \frac{1}{4} \sum_{i=1,2,3,4} \left\{ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right\}$$

How to set the initial condition for the optimization process:
- Geometrical calculation based on the 4 vertex coordinates
  - Independent in each image frame: good feature
  - Unstable result (jitter occurs): bad feature
- Use of information from the previous image frame
  - Needs previous frame information; cannot be used for the first frame
  - Stable results (this does not mean accurate results)
[ARToolKit supports both. See simpleTest2.]

Rectangle Extraction

Steps:
1. Thresholding, labeling, feature extraction (area, position)
2. Contour extraction
3. Fitting of four straight lines; little fitting error -> rectangle
[This method is very simple, so it works very fast.]

Camera Calibration

Camera parameters:
- Perspective projection matrix
- Image distortion parameters
Calibration uses a dot pattern and a grid pattern.
ARToolKit has two methods for camera calibration:
- Accurate two-step method
- Easy one-step method

Accurate two-step method (1/3)

Two-step method:
- Step 1: Getting the distortion parameters
- Step 2: Getting the perspective projection parameters
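As a concrete reading of the error function, the sketch below projects the four marker vertices with the combined 3x4 matrix P = C * T_CM and accumulates the squared reprojection error. This is only an illustration of the formula; the function name and array layouts are assumptions, not ARToolKit code.

/* Reprojection error for one marker, per the formula above:
   err = (1/4) * sum_i [ (x_i - x^_i)^2 + (y_i - y^_i)^2 ],
   where (x^_i, y^_i) comes from projecting the marker vertex
   (X_Mi, Y_Mi, Z_Mi) with the 3x4 matrix P = C * T_CM. */
double reprojection_error(const double P[3][4],
                          const double vertex3d[4][3], /* marker coords */
                          const double vertex2d[4][2]) /* ideal screen coords */
{
    double err = 0.0;
    for (int i = 0; i < 4; i++) {
        double X = vertex3d[i][0], Y = vertex3d[i][1], Z = vertex3d[i][2];
        double hx = P[0][0]*X + P[0][1]*Y + P[0][2]*Z + P[0][3];
        double hy = P[1][0]*X + P[1][1]*Y + P[1][2]*Z + P[1][3];
        double h  = P[2][0]*X + P[2][1]*Y + P[2][2]*Z + P[2][3];
        double dx = vertex2d[i][0] - hx / h;
        double dy = vertex2d[i][1] - hy / h;
        err += dx*dx + dy*dy;
    }
    return err / 4.0;
}

An iterative optimizer perturbs T_CM to reduce this value; as slide "Pose and Position Estimation" notes later, arGetTransMat() performs this minimization internally and returns the minimized err.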

Accurate two-step method (2/3)

Step 1: Getting the distortion parameters: calib_dist
- Select dots with the mouse
- Distortion parameters are obtained by automatic line-fitting
- Take the pattern pictures as large as possible
- Slant the pattern in various directions with a big angle
- Repeat 4 times or more

Accurate two-step method (3/3)

Step 2: Getting the perspective projection matrix: calib_cparam
- Manual line-fitting

Easy one-step method: calib_camera2

- Same operation as calib_dist
- Obtains all camera parameters, including the distortion parameters and the perspective projection matrix
- Does not require a careful setup
- Accuracy is good enough for image overlay [but not good enough for 3D measurement]

Camera Parameter Implementation

Camera parameter structure:

typedef struct {
    int    xsize, ysize;
    double mat[3][4];
    double dist_factor[4];
} ARParam;

Adjust the camera parameters for the input image size:

int arParamChangeSize( ARParam *source, int xsize, int ysize, ARParam *newparam );

Read camera parameters from a file:

int arParamLoad( char *filename, int num, ARParam *param, ... );
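A minimal usage sketch of the camera-parameter API above, assuming the parameter file Data/camera_para.dat from the sample application and a 640x480 input; this mirrors what init() does later in these slides.

#include <stdio.h>
#include <stdlib.h>
#include <AR/param.h>
#include <AR/ar.h>

int main(void)
{
    ARParam wparam, cparam;

    /* read camera parameters from the file */
    if (arParamLoad("Data/camera_para.dat", 1, &wparam) < 0) {
        printf("Camera parameter load error!!\n");
        exit(0);
    }
    /* adjust them for the actual input image size, then install them */
    arParamChangeSize(&wparam, 640, 480, &cparam);
    arInitCparam(&cparam);
    arParamDisp(&cparam);   /* print mat[3][4] and dist_factor[4] */
    return 0;
}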

Notes on Image Processing (1/2)

Image size for marker detection:
- AR_IMAGE_PROC_IN_FULL
  - Full-size images are used for marker detection
  - Takes more time, but accuracy is better
  - Not good for interlaced images
- AR_IMAGE_PROC_IN_HALF
  - Re-sampled half-size images are used for marker detection
  - Takes less time, but accuracy is worse
  - Good for interlaced images
- External variable: arImageProcMode in ar.h
- Default value: DEFAULT_IMAGE_PROC_MODE in config.h

Notes on Image Processing (2/2)

Use of tracking history:
- Marker detection with tracking history: arDetectMarker()
- Marker detection without tracking history: arDetectMarkerLite()
- How the tracking history is used: error correction of pattern identification, lost-marker insertion

Accuracy vs. speed of pattern identification:
- Pattern normalization takes much time; this is a problem when many markers are used
- Normalization parameters in config.h:
  #define AR_PATT_SAMPLE_NUM 64
  #define AR_PATT_SIZE_X 16
  #define AR_PATT_SIZE_Y 16

  resolution | identification accuracy | speed
  -----------|-------------------------|------
  large size | good                    | slow
  small size | bad                     | fast

Pose and Position Estimation

Two types of initial condition:
1. Geometrical calculation based on the 4 vertices in screen coordinates:
   double arGetTransMat( ARMarkerInfo *marker_info, double center[2], double width, double conv[3][4] );
2. Use of information from the previous image frame:
   double arGetTransMatCont( ARMarkerInfo *marker_info, double prev_conv[3][4], double center[2], double width, double conv[3][4] );
[See simpleTest2.c]

Use of the estimation accuracy:
- arGetTransMat() minimizes the reprojection error err defined earlier, and returns this minimized err
- If err is still big, the marker was probably mis-detected
- Another cause: use of camera parameters from a bad calibration

Background Video Display

Texture mapping vs. glDrawPixels():
- Performance depends on the hardware and the OpenGL driver
- Mode external variable: argDrawMode in gsub.h
- #define DEFAULT_DRAW_MODE in config.h
  - AR_DRAW_BY_GL_DRAW_PIXELS
  - AR_DRAW_BY_TEXTURE_MAPPING
- Note: glDrawPixels() does not compensate for image distortion
[See examples/test/graphicsTest.c and modeTest]
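A hedged sketch of how these switches might be applied inside an application: arImageProcMode, the two constants, and the two detector variants come from the slides above, while the surrounding variables and function are illustrative.

#include <AR/ar.h>

extern ARUint8 *dataPtr;   /* current video frame (illustrative) */
extern int      thresh;    /* binarization threshold (illustrative) */

void detect_example(int use_history)
{
    ARMarkerInfo *marker_info;
    int marker_num;

    /* use re-sampled half-size images: faster, better for interlaced video */
    arImageProcMode = AR_IMAGE_PROC_IN_HALF;

    if (use_history) {
        /* with tracking history: error correction + lost-marker insertion */
        arDetectMarker(dataPtr, thresh, &marker_info, &marker_num);
    } else {
        /* without tracking history */
        arDetectMarkerLite(dataPtr, thresh, &marker_info, &marker_num);
    }
}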

ARToolKit Library Functions (1/2)

ARToolKit libraries:
- libAR: the main functions: marker tracking, calibration, parameter computation, etc.
- libARmulti: multi-pattern functions; extends libAR
- libARvideo: video functions for capturing video frames; uses the video capture functions of the Microsoft Vision SDK
- libARgsub: OpenGL utilities; graphics functions based on OpenGL and the GLUT library
- libARgsubUtil: additions to libARgsub

Library hierarchy:
- libARmulti builds on libAR
- libARgsubUtil builds on libARgsub
- libARvideo sits at the lowest layer

ARToolKit Library Functions (2/2)

libAR: arMalloc, arInitCparam, arLoadPatt, arDetectMarker, arDetectMarkerLite, arGetTransMat, arGetTransMatCont, arGetTransMat2, arGetTransMat3, arGetTransMat4, arGetTransMat5, arFreePatt, arActivatePatt, arDeactivatePatt, arSavePatt, arUtilMatInv, arUtilMatMul, arUtilMat2QuatPos, arUtilQuatPos2Mat, arUtilTimer, arUtilTimerReset, arUtilSleep, arLabeling, arGetImgFeature, arDetectMarker2, arGetMarkerInfo, arGetCode, arGetPatt, arGetLine, arGetContour, arModifyMatrix, arGetAngle, arGetRot, arGetNewMatrix, arGetInitRot

libARmulti: arMultiReadConfigFile, arMultiGetTransMat, arMultiActivate, arMultiDeactivate, arMultiFreeConfig

libARvideo: arVideoDispOption, arVideoOpen, arVideoClose, arVideoCapStart, arVideoCapStop, arVideoCapNext, arVideoGetImage, arVideoInqSize, ar2VideoDispOption, ar2VideoOpen, ar2VideoClose, ar2VideoCapStart, ar2VideoCapNext, ar2VideoCapStop, ar2VideoGetImage, ar2VideoInqSize

libARgsub: argInit, argLoadHMDparam, argCleanup, argSwapBuffers, argMainLoop, argDrawMode2D, argDraw2dLeft, argDraw2dRight, argDrawMode3D, argDraw3dLeft, argDraw3dRight, argDraw3dCamera, argConvGlpara, argConvGLcpara, argDispImage, argDispHalfImage, argDrawSquare, argLineSeg, argLineSegHMD, argInqSetting

libARgsubUtil: argUtilCalibHMD

Basic Structures

Information about detected markers is kept in the ARMarkerInfo structure, defined in ar.h (see ar.h!):

typedef struct {
    int    area;
    int    id;            /* marker identity number */
    int    dir;
    double cf;            /* confidence value (0.0~1.0) that the marker
                             has been correctly identified */
    double pos[2];        /* center of the marker in ideal screen coords */
    double line[4][3];    /* line equations for the 4 sides of the marker
                             in ideal screen coords: line[x][0..2] are
                             the a, b, c in ax + by + c = 0 */
    double vertex[4][2];  /* positions of the 4 marker vertices
                             in ideal screen coords */
} ARMarkerInfo;

Basic Functions in ar.h

Load initial parameters and trained patterns:

int arInitCparam( ARParam *param );
int arLoadPatt( char *filename );

Detect markers and the camera position:

int arDetectMarker( ARUint8 *dataPtr, int thresh, ARMarkerInfo **marker_info, int *marker_num );
int arDetectMarkerLite( ARUint8 *dataPtr, int thresh, ARMarkerInfo **marker_info, int *marker_num );
int arGetTransMat( ARMarkerInfo *marker_info, double pos3d[4][2], double trans[3][4] );
int arSavePatt( ARUint8 *image, ARMarkerInfo *marker_info, char *filename );
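To show how the ARMarkerInfo fields are typically consumed, here is a small sketch that scans the detection results for a given patt_id and keeps the candidate with the highest confidence value, as the sample mainLoop later in these slides does. The helper name pick_marker is hypothetical.

#include <stdio.h>
#include <AR/ar.h>

/* Pick the best detection of pattern patt_id from arDetectMarker()'s
   output; returns its index, or -1 if the marker was not seen. */
int pick_marker(ARMarkerInfo *marker_info, int marker_num, int patt_id)
{
    int j, k = -1;
    for (j = 0; j < marker_num; j++) {
        if (marker_info[j].id != patt_id) continue;
        if (k == -1 || marker_info[k].cf < marker_info[j].cf) k = j;
    }
    if (k >= 0) {
        printf("marker %d: cf=%.2f center=(%.1f, %.1f)\n",
               marker_info[k].id, marker_info[k].cf,
               marker_info[k].pos[0], marker_info[k].pos[1]);
    }
    return k;
}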

Basic Functions in video.h

See video.h! Commonly used functions:

int arVideoOpen( void );
int arVideoClose( void );
int arVideoInqSize( int *x, int *y );
unsigned char *arVideoGetImage( void );

Sample Patterns

sampPatt1, sampPatt2, hiroPatt, kanjiPatt

Developing an AR Application

Steps in writing an application:
1. Initialize the video path and read in the marker pattern file and camera parameters
   (steps 2-5 are repeated)
2. Grab a video input frame
3. Detect the markers in the video input frame and recognize the patterns
4. Calculate the camera transformation relative to the detected pattern
5. Draw the virtual objects on the detected pattern
6. Close the video path down

Steps and corresponding functions:
1. Initialize the application: init
2. Grab a video input frame: arVideoGetImage
3. Detect the markers: arDetectMarker
4. Calculate camera transformation: arGetTransMat
5. Draw the virtual objects: draw
6. Close the video path down: cleanup

AR Application Code: main

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glut.h>
#include <AR/gsub.h>
#include <AR/video.h>
#include <AR/param.h>
#include <AR/ar.h>

int main(int argc, char **argv)
{
    init();              /* initialize video path, read marker and
                            camera parameters, set up graphics window */
    arVideoCapStart();   /* start video image capture */

    /* start the loop:
       keyEvent: keyboard event function
       mainLoop: main graphics rendering function
       (prototype defined in lib/SRC/Gl/gsub.c) */
    argMainLoop( NULL, keyEvent, mainLoop );

    return (0);
}
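Before wiring up the full application, the video path can be exercised on its own. Below is a minimal capture-only sketch using the video.h functions listed above, with no marker detection or rendering; the vconf string follows the init() example on the next slide, and error handling is kept minimal.

#include <stdio.h>
#include <AR/ar.h>
#include <AR/video.h>

int main(void)
{
    int xsize, ysize;
    unsigned char *image;
    char *vconf = "flipv,showdlg";          /* as in init() below */

    if (arVideoOpen(vconf) < 0) return 1;   /* open the video path */
    if (arVideoInqSize(&xsize, &ysize) < 0) return 1;
    printf("image size (x,y) = (%d,%d)\n", xsize, ysize);

    arVideoCapStart();                      /* start capture */
    while ((image = arVideoGetImage()) == NULL) {
        arUtilSleep(2);                     /* wait for the first frame */
    }
    printf("grabbed one frame\n");
    arVideoCapStop();                       /* stop the video processing */
    arVideoClose();                         /* close down the video path */
    return 0;
}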

AR Application Code: init

int     xsize, ysize;
char   *vconf = "flipv,showdlg";   /* video configuration;
                                      see video.h for supported params */
char   *cparam_name = "Data/camera_para.dat";
ARParam cparam;
char   *patt_name = "Data/patt.hiro";
int     thresh = 100, count = 0, mode = 1, patt_id;
double  patt_width = 80.0;
double  patt_center[2] = {0.0, 0.0};
double  patt_trans[3][4];

static void init( void )
{
    ARParam wparam;

    /* open the video path */
    if( arVideoOpen( vconf ) < 0 ) exit(0);
    /* find the size of the window */
    if( arVideoInqSize(&xsize, &ysize) < 0 ) exit(0);
    printf("Image size (x,y) = (%d,%d)\n", xsize, ysize);

    /* set the initial camera parameters */
    if( arParamLoad(cparam_name, 1, &wparam) < 0 ) {
        printf("Camera parameter load error!!\n");
        exit(0);
    }
    arParamChangeSize( &wparam, xsize, ysize, &cparam );
    arInitCparam( &cparam );
    printf("*** Camera Parameter ***\n");
    arParamDisp( &cparam );

    /* load the trained marker */
    if( (patt_id = arLoadPatt(patt_name)) < 0 ) {
        printf("Pattern load error!!\n");
        exit(0);
    }

    /* open the graphics window */
    argInit( &cparam, 1.0, 0, 0, 0, 0 );
}

AR Application Code: mainLoop (1/2)

static void mainLoop(void)
{
    static int    contF = 0;
    ARUint8      *dataPtr;
    ARMarkerInfo *marker_info;
    int           marker_num;
    int           j, k;

    /* grab a video frame */
    if( (dataPtr = (ARUint8 *)arVideoGetImage()) == NULL ) {
        arUtilSleep(2);
        return;
    }
    if( count == 0 ) arUtilTimerReset();
    count++;

    argDrawMode2D();
    argDispImage( dataPtr, 0, 0 );

    /* detect the markers in the video frame */
    if( arDetectMarker(dataPtr, thresh,
                       &marker_info,  /* a list of marker structures */
                       &marker_num    /* number of detected markers  */
                       ) < 0 ) {
        cleanup();
        exit(0);
    }
    arVideoCapNext();
    /* continued on the next slide */

AR Application Code: mainLoop (2/2)

    /* pick the marker with the highest confidence value */
    k = -1;
    for( j = 0; j < marker_num; j++ ) {
        if( patt_id == marker_info[j].id ) {
            if( k == -1 ) k = j;
            else if( marker_info[k].cf < marker_info[j].cf ) k = j;
        }
    }
    if( k == -1 ) {
        contF = 0;
        argSwapBuffers();
        return;
    }

    /* get the transformation between the marker and the real camera */
    if( mode == 0 || contF == 0 ) {
        arGetTransMat(&marker_info[k], patt_center, patt_width, patt_trans);
    }
    else {
        arGetTransMatCont(&marker_info[k], patt_trans,
                          patt_center, patt_width, patt_trans);
    }
    /* the real camera position & orientation relative to marker k
       are now in the 3x4 matrix patt_trans */
    contF = 1;

    draw( patt_trans );
    argSwapBuffers();
}

AR Application Code: draw

static void draw( double trans[3][4] )
{
    double  gl_para[16];
    GLfloat mat_ambient[]     = {0.0, 0.0, 1.0, 1.0};
    GLfloat mat_flash[]       = {0.0, 0.0, 1.0, 1.0};
    GLfloat mat_flash_shiny[] = {50.0};
    GLfloat light_position[]  = {100.0, -200.0, 200.0, 0.0};
    GLfloat ambi[]            = {0.1, 0.1, 0.1, 0.1};
    GLfloat lightZeroColor[]  = {0.9, 0.9, 0.9, 0.1};

    argDrawMode3D();
    argDraw3dCamera( 0, 0 );
    glClearDepth( 1.0 );
    glClear(GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

    /* load the camera transformation matrix */
    argConvGlpara(trans, gl_para);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd( gl_para );

    /* set up lighting and material, then draw a cube on the marker */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glLightfv(GL_LIGHT0, GL_AMBIENT, ambi);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, lightZeroColor);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_flash);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_flash_shiny);
    glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef( 0.0, 0.0, 25.0 );
    glutSolidCube(50.0);

    glDisable( GL_LIGHTING );
    glDisable( GL_DEPTH_TEST );
}

AR Application Code: keyEvent, mouseEvent, cleanup

static void keyEvent( unsigned char key, int x, int y )
{
    /* quit if the ESC key is pressed */
    if( key == 0x1b ) {
        printf("*** %f (frame/sec)\n", (double)count/arUtilTimer());
        cleanup();
        exit(0);
    }
    if( key == 'c' ) {
        printf("*** %f (frame/sec)\n", (double)count/arUtilTimer());
        count = 0;
        mode = 1 - mode;
        if( mode ) printf("Continuous mode: Using arGetTransMatCont.\n");
        else       printf("One shot mode: Using arGetTransMat.\n");
    }
}

static void mouseEvent( int button, int state, int x, int y )
{
}

/* cleanup function called when program exits */
static void cleanup(void)
{
    arVideoCapStop();   /* stop the video processing */
    arVideoClose();     /* close down the video path */
    argCleanup();
}

Recognizing different patterns (1/2)

Marker objects file:
- Specifies information about the marker objects to be recognized
- Also specifies each marker's pattern file
- Format: name / pattern recognition file name / width of the tracking marker
- Example:
  #pattern 1
  cone
  Data/hiroPatt
  80.0

How to make a new pattern:
1. Print patterns/blankPatt.gif (a black square with an empty white square inside)
2. Create the desired black-and-white or color pattern and place it inside the inner white square of this image
   - A good pattern is asymmetric and contains no fine detail
3. After creating the new pattern, copy the file to bin and run bin/mk_patt (source code in util/mk_patt.c); enter the camera parameter filename when asked

Recognizing different patterns (2/2)

(Reference) possible sample patterns

4. When mk_patt runs, a video window opens
   - Attach the pattern to be trained to a flat surface, adjust the lighting to the desired environment, and bring the pattern into view
   - Point the camera straight down at the pattern from directly above
   - Adjust the camera until a red/green square appears around the pattern
   - Rotate the camera so that the red corner is at the top-left corner of the video image
   - When the adjustment is done, click the left mouse button
   - Enter the pattern filename (a bitmap image file) when asked
   - You can continue with other patterns; click the right mouse button to quit the program

Camera Calibration Utility

Generates a camera parameter file for the specific camera being used.
1. Print the two patterns:
   - patterns/calib_dist.pdf (6x4 dot pattern; must be printed so that the distance between dots is 40 mm)
   - patterns/calib_cpara.pdf (a grid of lines; must be printed so that the distance between lines is 40 mm)
2. Run bin/calib_dist to compute the center point of the camera image and the lens distortion, using the calib_dist.pdf image
3. Run bin/calib_cparam to compute the camera focal length and the other camera parameters, using calib_cpara.pdf
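To illustrate tracking more than one trained pattern, here is a hedged sketch extending the earlier mainLoop logic: each pattern file is loaded with arLoadPatt, and the returned ids are compared against ARMarkerInfo.id. The file name Data/patt.kanji and both helper names are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <AR/ar.h>

/* ids returned by arLoadPatt for each trained pattern */
static int hiro_id, kanji_id;

static void load_patterns(void)
{
    /* pattern files produced by bin/mk_patt, as described above */
    if ((hiro_id  = arLoadPatt("Data/patt.hiro"))  < 0) exit(0);
    if ((kanji_id = arLoadPatt("Data/patt.kanji")) < 0) exit(0);
}

/* after arDetectMarker(): dispatch on which pattern was seen */
static void handle_markers(ARMarkerInfo *marker_info, int marker_num)
{
    for (int j = 0; j < marker_num; j++) {
        if (marker_info[j].id == hiro_id)
            printf("hiro marker, cf=%.2f\n", marker_info[j].cf);
        else if (marker_info[j].id == kanji_id)
            printf("kanji marker, cf=%.2f\n", marker_info[j].cf);
    }
}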

Camera Calibration Utility: calib_dist (1/2)

Measures the 6x4 dot spacing in the calib_dist.pdf image to compute the lens distortion.
1. Position the camera so that all dots are visible, then click the left button to freeze the video
2. Left-drag the black rectangle onto each dot position
   - The dots must be covered in order, starting from the top-left corner dot and following the prescribed order
   - When the black rectangle is moved over a dot, the dot is found and its center is marked with a red cross
3. Repeat step 2 for 5~10 images, varying the angle and position of each image
   - The more images you process, the more accurate the calibration
   - After 5~10 images, click the right button to stop image capture
   - The center position and the camera distortion values are then computed; this takes some time, and the output should be written down

Camera Calibration Utility: calib_dist (2/2)

4. (Optional) To check whether the results are accurate, left-click
   - Red lines passing through each dot are drawn on the first grabbed image
   - Repeated left clicks show the following grabbed images
   - Once all 24 dots have been found, click the left button again: the dot positions are saved and the video unfreezes
   - Note: a right-click discards the entered values and unfreezes the video
5. If the results are satisfactory, right-click to quit and then run calib_cparam

Camera Calibration Utility: calib_cparam (1/3)

calib_cparam uses the calib_cparam.pdf pattern (7 horizontal lines and 9 vertical lines) to compute the camera focal length and the other parameters.
1. Enter the center coordinates (X, Y) and the distortion ratio computed in the previous step; a live video window appears
2. Point the camera straight down at the pattern and move it as close as possible while all grid lines are still visible and shown as large as possible
3. Left-click to grab the image
   - A white horizontal line is drawn over the image
   - Use the up/down arrow keys to move it up/down, and the left/right keys to rotate it clockwise/counterclockwise, until the white horizontal line matches the topmost grid line as closely as possible
   - Then press the Enter key: the white line turns blue and another white line appears
   - Repeat for all 7 horizontal lines
   - After all horizontal lines are done, a vertical white line appears; repeat for all 9 vertical lines, from the leftmost to the rightmost
   - The 16 lines (7 horizontal, 9 vertical) are processed top to bottom, then left to right

Camera Calibration Utility: calib_cparam (2/3)

4. After finishing step 3 for one image, move the pattern back so that the distance between the camera and the grid pattern increases by 100 mm, and perform step 3 again (note: the camera must keep looking straight down at the pattern)
   - Repeat step 3 at distances increasing by 100 mm each time, 5 times in total (at the 5th repetition the distance to the pattern is 500 mm)
5. After the 5 repetitions the camera parameters are computed automatically; enter a filename to save them
6. After saving, right-click to quit
   - This file can be used by programs in exactly the same way as the camera_para.dat file -> the tracking results should improve

Camera Calibration Utility: calib_cparam (3/3)

Note: the grid line spacing is 40 mm, and the pattern is moved 100 mm each time:
- 40: the spacing between grid lines
- 100: the distance the pattern should be moved back from the camera each time
These are fixed values in util/calib_cparam/calib_cparam_sub.c. To change them, edit the 40 and 100 in the following code:

    inter_coord[k][j][i+7][0] = 40.0*i;
    inter_coord[k][j][i+7][1] = 40.0*j;
    inter_coord[k][j][i+7][2] = 100.0*k;

To change the number of repetitions (5), edit the 5 in:

    *loop_num = 5;

Limitations

- The whole marker must be fully visible; detection fails if the marker is partially occluded
- It is therefore hard to insert large virtual objects
- Small patterns stop being detected once the camera moves even slightly away:

  pattern size (inches) | usable range (inches)
  ----------------------|----------------------
  2.75                  | 16
  3.50                  | 25
  4.25                  | 34
  7.37                  | 50

- Patterns with complex shapes are hard to detect
  - Simple patterns composed of large black and white regions work best
  - Replacing a 4.25-inch pattern with a complex one reduces the tracking range from 34 to 15 inches
- Detection is difficult when the marker is strongly tilted
- Lighting conditions affect detection
  - Reflections and glare spots on the marker make detection difficult
  - Use non-reflective material (e.g., attach black velvet fabric/paper to a white backing)

References

- Inside ARToolKit, slides by Hirokazu Kato (Hiroshima City University)
- ARToolKit Manual (version 2.33), November 2000