
Hybrid Camera System with a TOF and DSLR Cameras

(Regular Paper) JBE Vol. 19, No. 4, July 2014
http://dx.doi.org/10.5909/jbe.2014.19.4.533
ISSN 2287-9137 (Online), ISSN 1226-7953 (Print)

Soohyeon Kim a), Jae-In Kim a), and Taejung Kim a)

Abstract
This paper presents a method for constructing a hybrid (color and depth) camera system using photogrammetric techniques. A TOF depth camera is efficient because it measures the range to objects in real time. However, it suffers from low resolution and from noise caused by surface conditions. Therefore, to generate a 3D model with a depth camera it is essential not only to correct depth noise and distortion but also to build a hybrid camera system that provides a high-resolution texture map. We estimated the geometry of the hybrid camera with a traditional relative orientation algorithm and performed texture mapping by backward mapping based on the collinearity condition. The proposed method was compared with an existing algorithm in terms of model accuracy and texture-mapping accuracy. The results showed that the proposed method produced higher model accuracy.

Keywords: Depth camera, Hybrid camera system, Relative orientation, Texture mapping, Rigid transformation

a) Dept. of Geoinformatic Engineering, Inha University
Corresponding Author: Taejung Kim, E-mail: tezid@inha.ac.kr, Tel: +82-32-860-7606
This work was supported by a 2011 research grant (2011-0009721).
Manuscript received July 3, 2014; Revised July 28, 2014; Accepted July 28, 2014

To generate a realistic 3D model of a real-world object, depth information is required in addition to color imagery. Active depth sensing is commonly performed either by Time-of-Flight (ToF) measurement or by structured light. Commercial ToF depth cameras include the Mesa-Imaging SR4000, the 3DVSystems ZCam, and the PMD CamCube. A ToF camera measures the range to objects (roughly 0~10 m) in real time, but its resolution is low; the SR4000, for example, provides only 176x144 depth pixels [1][2].

The Microsoft Kinect is a representative structured-light sensor. It provides an RGB image together with a depth image and has been widely applied to 3D reconstruction and 3D scanning [3]. The Kinect delivers 640x480 depth images at 30 frames per second, a higher resolution than typical ToF cameras, but its operating range is limited to roughly 1~3.5 m [4]. Microsoft announced a ToF-based successor, the Kinect 2, in 2014. ToF cameras have been used for 3D shape scanning [5] and 3D face reconstruction [6], and comparisons between the SR4000 and the Kinect report that their relative performance changes around a distance of about 1.5 m [4][7].

Because a depth camera alone cannot provide a high-resolution color texture, several studies have combined a ToF or structured-light depth camera with an RGB or DSLR camera. Li et al. [8] estimated a rigid transformation between a ToF camera (CamCube 3.0) and a VGA RGB camera to generate colored point clouds. Jung et al. [9] constructed 3D images using the color and depth cameras of the Kinect, and Kwon et al. [10] generated high-resolution 3D objects by combining Kinect depth information with a DSLR camera. These approaches typically estimate the transformation between the ToF and RGB cameras with Levenberg-Marquardt optimization [11].

Because such rigid-transformation approaches rely directly on the 3D points measured by the ToF camera, their accuracy is limited by the noise and distortion of the ToF depth data [8]. In this paper we construct a hybrid camera system consisting of a ToF depth camera (Mesa-Imaging SR4000) and a DSLR camera (Canon EOS 450D), estimate the geometry between the two cameras with a photogrammetric relative orientation algorithm, and map the high-resolution DSLR image onto the 3D model generated from the depth camera. The remainder of this paper describes the ToF depth camera and the hybrid camera system, presents the proposed relative-orientation-based method, and reports experimental comparisons with an existing rigid-transformation approach.

A ToF depth camera measures the round-trip travel time of emitted light to obtain the range to objects. The SR4000 measures ranges of 0~10 m and records them as 14-bit integers between 0 and 16383; for display, the depth image can be rescaled to 0~255 gray levels. Fig. 1 shows a depth image and an intensity image acquired with the SR4000.

Fig. 1. Acquired images of the SR4000: (a) depth image, (b) intensity image

The 14-bit depth value is converted into metres by equation (1), where one quantization step corresponds to approximately 0.6104 mm:

$Z = D_{raw} \times 0.0006104$  (1)

where $D_{raw}$ is the 14-bit raw depth value and $Z$ is the range in metres.
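As a concrete illustration of equation (1), the following minimal C++ sketch (not part of the paper's implementation; the constant assumes the 0~10 m range is spread over the full 14-bit range) converts a raw SR4000 depth value into metres and into an 8-bit gray level for display.

```cpp
#include <cstdint>

// One 14-bit quantization step corresponds to roughly 0.6104 mm
// (10 m spread over 0..16383); this constant follows equation (1) and is
// an assumption, not a value read from the SR4000 API.
constexpr double kMetresPerUnit = 10.0 / 16383.0;

// Equation (1): convert a 14-bit raw depth value to a range in metres.
inline double rawDepthToMetres(uint16_t raw14) {
    return raw14 * kMetresPerUnit;
}

// Rescale the 14-bit value to an 8-bit gray level (0..255) for display.
inline uint8_t rawDepthToGray(uint16_t raw14) {
    return static_cast<uint8_t>(raw14 >> 6);
}
```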

The intensity image records the amplitude of the reflected signal at each pixel and shows the scene texture observed by the ToF camera; it is used later for tie-point measurement.

A depth image stores only a range value at each pixel and is therefore 2.5-dimensional information. To build a solid model of a real-world object, the depth values must be converted into 3D coordinates in the Cartesian coordinate system of the depth camera (Fig. 2).

Fig. 2. Cartesian coordinate system of the depth camera

The hybrid camera system consists of the SR4000 and a DSLR camera (Canon EOS 450D). Table 1 summarizes the specifications of the two cameras.

Table 1. Specification of the hybrid camera system
                     SR4000           EOS 450D (DSLR)
  FoV (H x V)        57° x 42°        71° x 47°
  Focal length       10 mm            18 mm
  Pixel pitch        40 µm            5.2 µm
  Resolution         176 x 144        4272 x 2848
  Frame rate         30 fps           -
  Operating range    0.5 m ~ 10 m     unlimited

Fig. 3. Scene of the point cloud model generated from the ToF camera

Fig. 4 shows the hybrid camera mounted on a stereo rig.

Fig. 4. Hybrid camera on the stereo rig: (a), (b)

The relationship between a 3D point and its image coordinates is described by the pinhole camera model (Fig. 5) [12]. A point $(X, Y, Z)$ in the camera coordinate system is projected onto the image plane by the projective transform

$s\,[c\ \ r\ \ 1]^{T} = K\,[X\ \ Y\ \ Z]^{T}$, with $K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$  (2)

which, written per coordinate, gives

$c = f_x\,X/Z + c_x$  (3)
$r = f_y\,Y/Z + c_y$  (4)

where $c$ and $r$ are the column and row coordinates, $f_x$ and $f_y$ are the focal lengths in pixels, and $(c_x, c_y)$ is the principal point.

Fig. 5. Pinhole camera model

The interior and distortion parameters of the SR4000 and the EOS 450D are listed in Table 2; the EOS 450D was calibrated with the GML C++ Camera Calibration Toolbox, which implements Zhang's calibration method [13].

Table 2. Camera interior parameters
                               SR4000       EOS 450D
  Focal length x (px)          249.157      3444.139
  Focal length y (px)          249.363      3440.360
  Principal point x (px)       88.000       2136.136
  Principal point y (px)       72.000       1399.086
  Distortion parameters        -0.852223    -0.175539
                               0.539786     0.163173
                               0            0
                               0.001894     0.000962
                               0.004963     -0.001467
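To show how the Table 2 values would be used in practice, the sketch below corrects the lens distortion of an SR4000 image with OpenCV's cv::undistort. This is only an illustration: mapping the five Table 2 distortion values onto OpenCV's (k1, k2, p1, p2, k3) order is an assumption, since the paper does not state which coefficient convention it used.

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>

// Build the SR4000 camera matrix from the interior parameters in Table 2
// and undistort an image. The distortion coefficient order is assumed.
cv::Mat undistortSR4000(const cv::Mat& src) {
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        249.157,   0.0,   88.0,
          0.0,   249.363, 72.0,
          0.0,     0.0,    1.0);
    // (k1, k2, p1, p2, k3) -- assumed mapping of the Table 2 values
    cv::Mat dist = (cv::Mat_<double>(1, 5) <<
        -0.852223, 0.539786, 0.001894, 0.004963, 0.0);
    cv::Mat dst;
    cv::undistort(src, dst, K, dist);
    return dst;
}
```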

Fig. 6 shows the SR4000 intensity image before and after lens distortion correction, and Fig. 7 shows the EOS 450D image before and after correction.

Fig. 6. Distortion correction of the SR4000 intensity image: (a) before correction, (b) after correction
Fig. 7. Distortion correction of the EOS 450D: (a) before correction, (b) after correction

3. Depth image noise correction

As shown in Fig. 8(a), the depth image contains lens distortion and severe noise (gross errors) in the depth values. These problems produce geometric errors when the depth information is converted into 3D coordinates, so a correction step is required. After distortion correction, median filtering is applied to remove gross errors in the depth values; a 7x7 filtering window was used. Fig. 8(b) shows the corrected depth image.

Fig. 8. Correction of the depth image: (a) before noise correction, (b) after noise correction

4. Converting the depth image into a point cloud

In general, a depth image is 2.5-dimensional information that stores a depth value at each pixel. To generate a solid model of a real-world object, however, it must be converted into 3D information. Using equation (5), derived from equation (4) by taking the interior parameters of the depth camera into account, the corrected depth image can be converted into a point cloud in real-world units (metres):

$X = (c - c_x)\,Z / f_x$,  $Y = (r - c_y)\,Z / f_y$  (5)

where $c$ is the column coordinate and $r$ the row coordinate of the depth image. Fig. 9(a) shows the converted point cloud viewed from the front and Fig. 9(b) shows it viewed from the right side.

Fig. 9. Point cloud: (a) front view, (b) side (right) view
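The two pre-processing steps above can be sketched in C++ as follows. This is an illustrative sketch, not the authors' code: OpenCV's medianBlur only supports apertures larger than 5 on 8-bit images, so the 7x7 median is computed directly on the floating-point depth values, and the back-projection follows equation (5).

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/core.hpp>

struct Point3 { double X, Y, Z; };

// 7x7 median filter on a CV_32F depth image (border pixels left unchanged).
cv::Mat medianFilter7x7(const cv::Mat& depth) {
    cv::Mat out = depth.clone();
    std::vector<float> win;
    for (int r = 3; r < depth.rows - 3; ++r) {
        for (int c = 3; c < depth.cols - 3; ++c) {
            win.clear();
            for (int dr = -3; dr <= 3; ++dr)
                for (int dc = -3; dc <= 3; ++dc)
                    win.push_back(depth.at<float>(r + dr, c + dc));
            std::nth_element(win.begin(), win.begin() + 24, win.end());
            out.at<float>(r, c) = win[24];   // median of the 49 samples
        }
    }
    return out;
}

// Equation (5): back-project every valid depth pixel into a metric point cloud.
std::vector<Point3> depthToPointCloud(const cv::Mat& depthMetres,
                                      double fx, double fy, double cx, double cy) {
    cv::Mat filtered = medianFilter7x7(depthMetres);
    std::vector<Point3> cloud;
    for (int r = 0; r < filtered.rows; ++r) {
        for (int c = 0; c < filtered.cols; ++c) {
            double Z = filtered.at<float>(r, c);
            if (Z <= 0.0) continue;                 // skip invalid pixels
            cloud.push_back({ (c - cx) * Z / fx,    // X
                              (r - cy) * Z / fy,    // Y
                              Z });
        }
    }
    return cloud;
}
```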

Proposed method based on relative orientation

In photogrammetry, relative orientation is generally used to estimate the geometric relationship between two cameras. It combines a mathematically rigorous constraint equation with a least-squares estimation (LSE) procedure [14]. Each camera has six exterior orientation parameters (three rotations and three translations), so a stereo pair has twelve in total. Fixing the exterior orientation of the first camera to zero and fixing the length of the baseline leaves five unknown relative orientation parameters (three rotation angles and two baseline components). These five parameters are estimated from five or more tie points by enforcing the coplanarity condition in model space (Fig. 10), expressed by equations (6)-(8):

$F = \begin{vmatrix} b_X & b_Y & b_Z \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix} = 0$  (6)

$[X_1\ \ Y_1\ \ Z_1]^{T} = [x_1\ \ y_1\ \ -f_1]^{T}$  (7)

$[X_2\ \ Y_2\ \ Z_2]^{T} = R(\omega, \varphi, \kappa)\,[x_2\ \ y_2\ \ -f_2]^{T}$  (8)

Here $(b_X, b_Y, b_Z)$ is the baseline between the ToF camera and the DSLR camera, $(x_1, y_1)$ and $(x_2, y_2)$ are the image coordinates of a tie point in the ToF intensity image and in the DSLR image, $f_1$ and $f_2$ are the focal lengths of the two cameras, and $R(\omega, \varphi, \kappa)$ is the rotation of the DSLR camera relative to the ToF camera. Following the 3D transformation formulation of [15], the two image rays and the baseline are expressed in a common model space, and equation (6) states that, for every tie point, the two rays and the baseline must lie in one plane (Fig. 10). The five unknowns are estimated by least-squares adjustment of equation (6) over all tie points.

Fig. 10. Coplanarity equation in model space
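A minimal sketch of the coplanarity condition for a single tie point is given below, assuming an omega-phi-kappa rotation parameterisation and the baseline x-component fixed to 1. This is a textbook formulation of equations (6)-(8), not the authors' implementation.

```cpp
#include <cmath>
#include <opencv2/core.hpp>

// Rotation matrix from omega-phi-kappa angles (radians).
cv::Matx33d rotationOPK(double om, double ph, double kp) {
    cv::Matx33d Rx(1, 0, 0,
                   0, std::cos(om), -std::sin(om),
                   0, std::sin(om),  std::cos(om));
    cv::Matx33d Ry( std::cos(ph), 0, std::sin(ph),
                    0, 1, 0,
                   -std::sin(ph), 0, std::cos(ph));
    cv::Matx33d Rz(std::cos(kp), -std::sin(kp), 0,
                   std::sin(kp),  std::cos(kp), 0,
                   0, 0, 1);
    return Rz * Ry * Rx;
}

// Coplanarity residual of equation (6) for one tie point.
// (x1,y1) / (x2,y2): image coordinates with the principal point subtracted,
// f1 / f2: focal lengths; (bY,bZ): baseline components with bX fixed to 1.
double coplanarityResidual(double x1, double y1, double f1,
                           double x2, double y2, double f2,
                           double om, double ph, double kp,
                           double bY, double bZ) {
    cv::Vec3d b(1.0, bY, bZ);                                         // baseline
    cv::Vec3d r1(x1, y1, -f1);                                        // eq. (7)
    cv::Vec3d r2 = rotationOPK(om, ph, kp) * cv::Vec3d(x2, y2, -f2);  // eq. (8)
    return b.dot(r1.cross(r2));  // scalar triple product = determinant in (6)
}
```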

The initial values of the five relative orientation parameters are set to zero [15], and the parameters are refined iteratively by least-squares adjustment.

The model space defined by relative orientation has an arbitrary scale, so it is not directly consistent with the metric coordinate system of the ToF point cloud (Fig. 11): the relative orientation result relates the model space to the 2D DSLR image, while the point cloud is expressed in metres. To map the DSLR texture onto the point cloud, each 3D point of the point cloud is projected into the DSLR image using the collinearity condition, equations (9) and (10):

$[U\ \ V\ \ W]^{T} = R^{T}\left([X\ \ Y\ \ Z]^{T} - [b_X\ \ b_Y\ \ b_Z]^{T}\right)$  (9)

$c = c_x + f_x\,U/W$,  $r = c_y + f_y\,V/W$  (10)

where $(X, Y, Z)$ is a point of the ToF point cloud, $R$ and $(b_X, b_Y, b_Z)$ are the rotation and baseline estimated by relative orientation, and $(c, r)$ are the column and row coordinates in the DSLR image. Equation (11) relates the arbitrary scale of the model space to the metric scale of the ToF point cloud; substituting equation (11) into equations (9) and (10), the remaining unknowns are estimated with the Levenberg-Marquardt method.

Fig. 11. Inconsistency of the coordinate systems

The hybrid camera system was built by combining the SR4000 and a Canon EOS 450D DSLR on the stereo rig shown in Fig. 4. The processing software was written in C++ on a PC using the Mesa-Imaging SRAPI (SwissRanger ToF API), the Canon EDSDK (EOS Digital camera Software Development Kit), and OpenCV 2.3.

For texture mapping, the image coordinate system and the texture map coordinate system differ: image coordinates are given in pixels, whereas texture map coordinates are normalized between 0 and 1 (Fig. 12). Equation (12) converts the column and row coordinates obtained from the backward mapping of equations (9) and (10) into normalized texture coordinates, so that an RGB texture can be attached to the 3D model.

Fig. 12. Coordinates of the image and the texture map: (a), (b)
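As an illustration of the backward mapping step, the sketch below projects each point of the ToF point cloud into the DSLR image with the estimated rotation and baseline (equations (9)-(10)) and samples the colour at the nearest pixel. Names and the nearest-neighbour sampling are illustrative; interpolation and the texture-coordinate normalization of equation (12) are omitted.

```cpp
#include <vector>
#include <opencv2/core.hpp>

struct ColoredPoint { cv::Vec3d xyz; cv::Vec3b bgr; };

// Backward mapping: project ToF points into the DSLR image and pick colours.
// R, b: rotation and baseline of the DSLR camera relative to the ToF camera;
// fx, fy, cx, cy: DSLR interior parameters (Table 2).
std::vector<ColoredPoint> backwardTextureMap(const std::vector<cv::Vec3d>& cloud,
                                             const cv::Matx33d& R, const cv::Vec3d& b,
                                             const cv::Mat& dslrImage,  // CV_8UC3
                                             double fx, double fy,
                                             double cx, double cy) {
    std::vector<ColoredPoint> out;
    for (const cv::Vec3d& P : cloud) {
        cv::Vec3d q = R.t() * (P - b);                // point in DSLR frame, eq. (9)
        if (q[2] <= 0.0) continue;                    // behind the camera
        int c = static_cast<int>(fx * q[0] / q[2] + cx + 0.5);   // eq. (10)
        int r = static_cast<int>(fy * q[1] / q[2] + cy + 0.5);
        if (c < 0 || c >= dslrImage.cols || r < 0 || r >= dslrImage.rows)
            continue;                                 // outside the DSLR image
        out.push_back({ P, dslrImage.at<cv::Vec3b>(r, c) });
    }
    return out;
}
```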

Fig. 13 shows the texture mapping result: the 3D model generated from the ToF point cloud and the model textured with the high-resolution DSLR image.

Fig. 13. Result of texture mapping: (a) 3D model, (b) model with DSLR texture

The proposed method was evaluated against an existing rigid-transformation approach in terms of the estimated extrinsic parameters, the model accuracy, and the texture mapping accuracy. Forty tie points (matches) were used for the estimation and thirty check points were used for the accuracy verification (Fig. 14). Table 3 compares the extrinsic parameters estimated by the two methods, and Table 4 compares the model accuracy; the model error of the proposed method was roughly ten times smaller than that of the existing method.

Table 3. Comparison of extrinsic parameters
                      Proposed     Existed
  Matches             40           40
  Rotation (deg)      0.9568       1.1508
                      0.0343       1.1666
                      0.2234       -0.3944
  Translation (m)     0.1307       0.1313
                      -0.0117      0.0068
                      -0.0832      -0.0837

Table 4. Comparison of model accuracy
  Checks    Proposed (RMSE)    Existed (RMSE)
  30        0.0023             0.0209
            5.7010             4.0921

Fig. 14. Accuracy verification of the model

Table 5 summarizes the model accuracy verification procedure for the proposed method and the existing method.

Table 5. Process of model accuracy verification
            Proposed                                        Existed
            Relative orientation                            Image projective transformation
            Apply extrinsic parameters to model equation    Apply extrinsic parameters to model equation
  Input     Retrieve tie-points on gray and color image     Retrieve matching points on ToF 3D points and color image
            Backward mapping                                Backward mapping
  Output    Pixel error (RMSE)                              Pixel error (RMSE)

For the texture mapping accuracy, thirty check points were measured and the pixel errors of the two methods were compared (Fig. 15). Table 6 lists the resulting RMSE values; the proposed method shows a lower texture mapping error than the existing method.

Table 6. Comparison of texture mapping accuracy
  Checks    Proposed (RMSE)    Existed (RMSE)
  30        3.21               4.26
            5.71               4.09

Fig. 15. Accuracy verification of texture mapping
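The RMSE values reported in Tables 4 and 6 are root-mean-square distances between projected check points and their measured image positions. A minimal sketch of this computation (illustrative only, not the authors' code) is:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>
#include <opencv2/core.hpp>

// Root-mean-square pixel error between projected and measured check points.
double pixelRmse(const std::vector<cv::Point2d>& projected,
                 const std::vector<cv::Point2d>& measured) {
    double sumSq = 0.0;
    for (std::size_t i = 0; i < projected.size(); ++i) {
        cv::Point2d d = projected[i] - measured[i];
        sumSq += d.x * d.x + d.y * d.y;
    }
    return std::sqrt(sumSq / static_cast<double>(projected.size()));
}
```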

Fig. 16. Comparison of texture mapping accuracy: (a), (b)

This paper presented a method for constructing a hybrid camera system that combines a ToF depth camera with a DSLR camera. The geometry between the two cameras was estimated with a photogrammetric relative orientation algorithm, and the high-resolution DSLR image was mapped onto the 3D model generated from the depth camera by backward mapping based on the collinearity condition. Experiments showed that the proposed method achieved higher model accuracy than an existing rigid-transformation approach and produced 3D models with high-resolution texture. Future work includes further reducing the noise of the ToF depth data and improving the quality of the 3D models generated with the ToF camera.

References
[1] Yoon, S., Hwang, B., "3D Reconstruction Technologies using Multi-view Images," Electronics and Telecommunications Research Institute, pp. 136-145, March 2012.
[2] Um, G., Ahn, C., Lee, S., Kim, K., Lee, K., "Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images," Journal of Broadcast Engineering, vol. 9, no. 3, pp. 185-195, September 2004.
[3] Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A., "KinectFusion: Real-Time Dense Surface Mapping and Tracking," IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127-136, October 2011.

[4] Lange, S., Sunderhauf, N., Neubert, P., Drews, S., Protzel, P., "Autonomous Corridor Flight of a UAV Using a Low-Cost and Light-Weight RGB-D Camera," Advances in Autonomous Mini Robots, Springer Berlin Heidelberg, pp. 183-192, 2012.
[5] Cui, Y., Schuon, S., Chan, D., Thrun, S., Theobalt, C., "3D Shape Scanning with a Time-of-Flight Camera," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1173-1180, June 2010.
[6] Yoon, S., Hwang, B., Kim, K., Lim, S., Choi, J., Koo, B., "A Survey and Trends on 3D Face Reconstruction Technologies," Electronics and Telecommunications Trends, vol. 2012, no. 2, pp. 12-21, 2012.
[7] Hansard, M., Lee, S., Choi, O., Horaud, R. P., Time-of-Flight Cameras: Principles, Methods, and Applications, Springer, p. 95, 2012.
[8] Li, X., Guo, W., Li, M., Chen, C., "Generating Colored Pointcloud Under the Calibration between TOF and RGB Cameras," IEEE International Conference on Information and Automation (ICIA), pp. 483-488, August 2013.
[9] Jung, H., Kim, T., Lyou, J., "3D Image Construction Using Color and Depth Cameras," Journal of the Institute of Electronics Engineers of Korea - System and Control, vol. 49, no. 1, pp. 1-7, January 2012.
[10] Kwon, S., Lee, S., Son, K., Jeong, Y., Lee, S., "High Resolution 3D Object Generation with a DSLR and Depth Information by Kinect," Korean Society for Computer Game, vol. 26, no. 1, pp. 221-227, March 2013.
[11] Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, Lecture Notes in Mathematics 630, Springer Berlin Heidelberg, 1978.
[12] Lee, N., Park, S., Lee, S., "Visualization of the Three-Dimensional Information Using Stereo Camera," Journal of the Institute of Electronics Engineers of Korea - System and Control, vol. 47, no. 4, pp. 15-20, July 2010.
[13] Zhang, Z., "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, November 2000.
[14] Kim, J., Kim, T., "Precise Rectification of Misaligned Stereo Images for 3D Image Generation," Journal of Broadcast Engineering, vol. 17, no. 2, pp. 411-421, March 2012.
[15] Kim, J., Kim, T., "Development of Photogrammetric Rectification Method Applying Bayesian Approach for High Quality 3D Contents Production," Journal of Broadcast Engineering, vol. 18, no. 1, pp. 31-42, January 2013.
[16] Lee, E., Ho, Y., "Generation of High-Quality Depth Maps Using Hybrid Camera System for 3-D Video," Journal of Visual Communication and Image Representation, vol. 22, no. 1, pp. 73-84, January 2011.

Author Biographies

Soohyeon Kim
- Feb. 2013: B.S., Dept. of Geoinformatic Engineering, Inha University
- Mar. 2013 ~ present: M.S. course, Dept. of Geoinformatic Engineering, Inha University
- Research interests: 3D object reconstruction, hybrid camera calibration, applications of ToF depth cameras

Jae-In Kim
- Aug. 2010: B.S., Dept. of Geoinformatic Engineering, Inha University
- Feb. 2013: M.S., Dept. of Geoinformatic Engineering, Inha University
- Mar. 2013 ~ present: Ph.D. course, Dept. of Geoinformatic Engineering, Inha University
- Research interests: 3D stereoscopic image generation, satellite image quality analysis, satellite image applications

Taejung Kim
- Aug. 1991: B.S.
- Oct. 1992: M.Sc., University College London
- Feb. 1996: Ph.D., University College London
- Aug. 1995 ~ Mar. 2001: KAIST
- Apr. 2001 ~ Aug. 2003: KAIST
- Sep. 2003 ~ present: Dept. of Geoinformatic Engineering, Inha University
- Research interests include 3D image generation, 3D modeling, and DEM generation