(Special Paper) JBE Vol. 20, No. 5, September 2015
http://dx.doi.org/10.5909/jbe.2015.20.5.676
ISSN 2287-9137 (Online), ISSN 1226-7953 (Print)

Dictionary Learning based Superresolution on 4D Light Field Images

Seung-Jae Lee a) and In Kyu Park a)

Abstract

A 4D light field image is represented in the traditional 2D spatial domain and an additional 2D angular domain. The 4D light field has a resolution limitation in both the spatial and angular domains, since the 4D signal is captured by a 2D CMOS sensor of limited resolution. In this paper, we propose a dictionary learning-based superresolution algorithm in the 4D light field domain to overcome the resolution limitation. The proposed algorithm performs dictionary learning using a large number of extracted 4D light field patches. Then, a high-resolution light field image is reconstructed from a low-resolution input using the learned dictionary. In this paper, we reconstruct a 4D light field image to have double the resolution in both the spatial and angular domains. Experimental results show that the proposed method outperforms the traditional method on test images captured by a commercial light field camera, i.e. Lytro.

Keywords: superresolution, dictionary learning, light field, spatial domain, angular domain, Lytro

a) Department of Information and Communication Engineering, Inha University
Corresponding Author: In Kyu Park, E-mail: pik@inha.ac.kr, Tel: +82-32-860-9190, ORCID: http://orcid.org/0000-0003-4774-7841
This work was supported by a National Research Foundation of Korea grant (NRF-2013R1A2A2A01069181).
Manuscript received July 10, 2015; Revised September 22, 2015; Accepted September 22, 2015.

I. Introduction

A light field records both the spatial and the angular distribution of the light rays in a scene, so a 4D light field image carries two spatial and two angular dimensions. Because the 4D signal is captured on a 2D sensor, there is an inherent trade-off between spatial and angular resolution, and images from commercial light field cameras such as Lytro and Raytrix [1][2] therefore have limited resolution in both domains.

Superresolution has been studied extensively for conventional images. Farsiu et al. [3] proposed fast and robust multiframe superresolution, Freeman et al. [4] proposed example-based superresolution, and Yang and Yang [5] proposed fast direct superresolution using simple mapping functions. These methods, however, target 2D images and do not exploit the 4D structure of the light field. In this paper, we extend the dictionary learning based approach of Yang and Yang [5] to the 4D light field domain so that the resolution is enhanced in the spatial and angular domains simultaneously.

The rest of this paper is organized as follows. Section II describes the structure of the 4D light field image and the 4D patches used for training. Section III presents the dictionary learning procedure, and Section IV presents the superresolution algorithm based on the learned dictionary. Experimental results and the conclusion follow.

II. Light Field

A light field camera captures the 4D light field in a single shot, and the proposed algorithm operates directly on this 4D representation rather than on individual 2D views.

1. Structure of the Light Field Image

Fig. 1(a) shows the raw data captured by a commercial light field camera (Lytro). From this raw lenslet data, sub-aperture images are generated using the decoding and calibration method of Dansereau et al. [6]. Fig. 1(b) shows the resulting 4D light field: a 2D array of sub-aperture images indexed by the angular coordinates, where each sub-aperture image is itself indexed by the spatial coordinates.

(a) raw data (b) 4D light field image
Fig. 1. Light field sub-aperture images generated from the raw data captured by a commercial light field camera (Lytro)

2. 4D Light Field Patch

The proposed algorithm processes the light field with 4D patches that span both the 2D spatial and the 2D angular domain. From each training light field, 4D patches are extracted at many spatial and angular positions and used as the training data for the dictionary learning described in the next section.
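The following sketch is an illustrative assumption of this data layout, not the authors' code: it shows one way a decoded light field can be stored as a 4D array of sub-aperture images and how a 4D patch spanning both domains can be flattened into a training vector. The array shapes and the patch sizes (4x4 angular, 8x8 spatial) are assumptions.

```python
# Minimal sketch (assumed layout): a 4D light field stored as sub-aperture images
# and extraction of a 4D patch covering both the angular and the spatial domain.
import numpy as np

U, V = 16, 16            # angular resolution (number of sub-aperture views), assumed
S, T = 380, 380          # spatial resolution of each sub-aperture image, assumed

# light_field[u, v] is the sub-aperture image seen from angular position (u, v)
light_field = np.zeros((U, V, S, T), dtype=np.float32)

def extract_4d_patch(lf, u0, v0, s0, t0, a=4, p=8):
    """Return a 4D patch covering an a x a angular window and a p x p spatial
    window, flattened into a single training vector."""
    return lf[u0:u0 + a, v0:v0 + a, s0:s0 + p, t0:t0 + p].reshape(-1)

# Collect training vectors by sampling random patch positions
rng = np.random.default_rng(0)
patches = np.stack([
    extract_4d_patch(light_field,
                     rng.integers(0, U - 4 + 1), rng.integers(0, V - 4 + 1),
                     rng.integers(0, S - 8 + 1), rng.integers(0, T - 8 + 1))
    for _ in range(1000)
])
print(patches.shape)     # (1000, 4 * 4 * 8 * 8)
```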

Fig. 2. Overview of 4D training patch reconstruction

III. Dictionary Learning

Fig. 2 summarizes how the 4D training patches are constructed. A large number of 4D patches are extracted from the training light field images, and the patches are grouped by K-means clustering. A dictionary entry is then learned for each cluster, as described below. Fig. 3 shows the overall dictionary learning procedure.

Fig. 3. Overview of the dictionary learning procedure
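As a sketch of the clustering step just described, the snippet below groups the flattened low-resolution 4D patches with K-means using scikit-learn. The patch file name is hypothetical, and K = 512 follows the value reported in the experimental section.

```python
# Sketch (assumed, not the authors' code): cluster the flattened low-resolution
# 4D patches; one regression-based dictionary entry is learned per cluster later.
import numpy as np
from sklearn.cluster import KMeans

K = 512
lr_patches = np.load("lr_patches.npy")        # (N, d_lr) flattened LR 4D patches (hypothetical file)

kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(lr_patches)
labels = kmeans.labels_                        # cluster index of every training patch
centers = kmeans.cluster_centers_              # K cluster centers stored in the dictionary
```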

For every high-resolution 4D light field image in the training set, a low-resolution (LR) light field image is generated, and corresponding pairs of high-resolution (HR) and low-resolution 4D patches are extracted. The LR patches are partitioned by K-means clustering into K clusters, and the K cluster centers are stored. For each cluster, let L be the matrix whose columns are the LR patches assigned to the cluster and H the matrix of the corresponding HR patches. The regression coefficient matrix C of the cluster is obtained by

    C = argmin_C || H - CL ||^2                                    (1)

The dictionary consists of the K cluster centers together with the per-cluster regression coefficients C.
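A minimal sketch of Eq. (1), under the same assumptions as the clustering sketch above: for each cluster the least-squares solution is computed in closed form. The small ridge term eps is an added assumption for numerical stability and is not part of the paper's formulation.

```python
# Sketch of the per-cluster regression step: C minimizing ||H - CL||^2 per cluster.
import numpy as np

def learn_regression_coefficients(lr_patches, hr_patches, labels, K, eps=1e-3):
    """lr_patches: (N, d_lr), hr_patches: (N, d_hr), labels: (N,) cluster indices.
    Returns one d_hr x d_lr regression matrix C per cluster."""
    coefficients = []
    for k in range(K):
        L = lr_patches[labels == k].T                       # d_lr x n_k
        H = hr_patches[labels == k].T                       # d_hr x n_k
        gram = L @ L.T + eps * np.eye(L.shape[0])           # d_lr x d_lr
        coefficients.append(H @ L.T @ np.linalg.inv(gram))  # C = H L^T (L L^T + eps I)^-1
    return coefficients
```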

IV. Light Field Superresolution

The learned dictionary is used to reconstruct a high-resolution light field from a low-resolution input. Similarly to the local self-example based methods of Freedman and Fattal [7] and Glasner et al. [8], the input light field image is divided into regions so that the resolution of the input patches matches the resolution of the dictionary patches, as illustrated in Fig. 4.

Fig. 4. Region division of an input light field image to match the image resolution of the dictionary and the input image

4.1 Region Division of the Input Light Field

The input light field image is divided into 16 regions so that the resolution of each region matches the resolution at which the dictionary was trained, and the reconstruction is performed on each region independently.

4.2 Patch-wise Reconstruction

For each of the 16 regions, every low-resolution 4D patch l is assigned to the nearest cluster center of the dictionary, and the corresponding high-resolution patch h is reconstructed with the regression coefficient C of that cluster:

    h = C l                                                        (2)
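A sketch of the reconstruction in Eq. (2), assuming the dictionary produced by the sketches above: each low-resolution 4D patch is mapped to a high-resolution patch by the regression matrix of its nearest cluster.

```python
# Sketch of patch-wise reconstruction: nearest cluster center, then h = C l.
import numpy as np

def upscale_patch(l, centers, coefficients):
    """l: flattened LR 4D patch, centers: (K, d_lr), coefficients: list of (d_hr, d_lr)."""
    k = int(np.argmin(np.linalg.norm(centers - l, axis=1)))  # nearest cluster center
    return coefficients[k] @ l                                 # h = C l
```

In practice the reconstructed high-resolution patches are then assembled into the output light field; how overlapping patches are blended is an implementation detail not specified here.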

4.3 Boundary Handling

The patch-based reconstruction cannot cover the boundary region of the light field image, as shown in Fig. 5(a). To fill this region, the light field image is converted to the EPI (epipolar plane image) representation, each EPI plane is enlarged by bicubic interpolation, and the result is converted back to the 4D light field; Fig. 5(b) shows the boundary obtained in this way. The EPI-based boundary is then merged with the patch-based reconstruction to produce the final result shown in Fig. 5(c). Fig. 6 summarizes this postprocessing step.

(a) Initial result (b) Bicubic interpolation on EPI (c) Final result
Fig. 5. Results of resolution enhancement at the boundary region. (a) Initial result of the proposed algorithm, (b) result of bicubic interpolation on the EPI, (c) final result of the proposed method

Fig. 6. Postprocessing algorithm to handle the image boundary. Bicubic interpolation is applied after converting the light field image to the EPI image
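The sketch below illustrates the EPI-based postprocessing under the same assumed data layout as before (it is not the authors' code, and the slicing convention is an assumption): an epipolar plane image is a 2D slice of the 4D light field with one angular and one spatial axis, so enlarging it doubles both of them at once.

```python
# Sketch of EPI extraction and 2x enlargement for the boundary region.
import numpy as np
from scipy.ndimage import zoom

def upscale_epi(light_field, v, t):
    """light_field: (U, V, S, T) array. Fix one angular index v and one spatial
    index t, take the remaining (u, s) slice as an EPI, and enlarge it 2x using
    order-3 spline interpolation as a stand-in for bicubic interpolation."""
    epi = light_field[:, v, :, t]        # shape (U, S): angular-spatial slice
    return zoom(epi, 2, order=3)         # 2x along both the angular and the spatial axis
```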

V. Experimental Results

The experiments were performed on an Intel i7-3770K 3.5 GHz CPU with 16 GB of RAM, and the light field images were captured with a Lytro camera. The proposed algorithm doubles the resolution of the 4D light field in both the spatial and angular domains.

Fig. 7. Training dataset captured by the Lytro camera

(a) Bicubic (b) Proposed algorithm
Fig. 8. Qualitative comparison in the spatial domain

Table 1. Quantitative comparison in the spatial domain

                                      tiger   monkey   beer can   eiffel   rhinoceros
  PSNR (dB)  Bicubic Interpolation    30.02   29.52    27.63      30.23    28.62
             Proposed Algorithm       30.16   30.94    29.67      31.05    30.93
  SSIM       Bicubic Interpolation    0.83    0.88     0.87       0.83     0.86
             Proposed Algorithm       0.87    0.90     0.90       0.86     0.87

Five 4D light field test images (tiger, monkey, beer can, eiffel, rhinoceros) were used for the evaluation. For training, 40 4D light field images were captured (Fig. 7), and about 200,000 4D patches were extracted from them. The number of clusters K of the K-means clustering was set to 512.

(a) (1,2) (b) (3,12) (c) (10,10) (d) (14,6) (e) (16,16)
Fig. 10. Results of superresolution at various locations in the angular domain
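As an illustrative sketch (not the authors' evaluation code), the PSNR and SSIM figures of Table 1 can be computed per sub-aperture view with scikit-image as follows.

```python
# Sketch of the quantitative evaluation for one sub-aperture view.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(reference, reconstructed):
    """reference, reconstructed: 2D grayscale float images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim
```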

Both the training and the test light field images were captured with the Lytro camera. Fig. 8 shows the qualitative comparison with bicubic interpolation in the spatial domain, and Table 1 reports the PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) of the images shown in Fig. 8. Compared with bicubic interpolation, the proposed algorithm improves the PSNR by up to about 2 dB and the SSIM by up to about 0.04. Fig. 9 compares the proposed method with bicubic interpolation and the example-based method of Freeman et al. [4]. Fig. 10 shows the superresolution results at various angular positions ((1,2), (3,12), (10,10), (14,6), (16,16)) of the reconstructed light field.

VI. Conclusion

In this paper, we proposed a dictionary learning based superresolution algorithm for 4D light field images. A dictionary is learned from a large number of 4D light field patches, and a high-resolution light field image is reconstructed from a low-resolution input so that the resolution is doubled in both the spatial and angular domains. Experimental results on images captured with a commercial Lytro camera show that the proposed method outperforms bicubic interpolation and the conventional example-based method.

References

[1] Lytro, https://www.lytro.com/
[2] Raytrix, https://www.raytrix.de/
[3] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," IEEE Trans. on Image Processing, vol. 13, no. 10, pp. 1327-1344, October 2004.
[4] W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based superresolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, March 2002.
[5] C.-Y. Yang and M.-H. Yang, "Fast direct super-resolution by simple functions," Proc. IEEE International Conference on Computer Vision, pp. 561-568, December 2013.
[6] D. G. Dansereau, O. Pizarro, and S. B. Williams, "Decoding, calibration and rectification for lenselet-based plenoptic cameras," Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1027-1034, June 2013.
[7] G. Freedman and R. Fattal, "Image and video upscaling from local self-examples," ACM Trans. on Graphics, vol. 30, no. 12, pp. 1-12, April 2011.
[8] D. Glasner, S. Bagon, and M. Irani, "Super-resolution from a single image," Proc. IEEE International Conference on Computer Vision, pp. 349-356, September 2009.

[9] T. E. Bishop and P. Favaro, "The light field camera: Extended depth of field, aliasing, and superresolution," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 972-986, May 2012.
[10] T. E. Bishop, S. Zanetti, and P. Favaro, "Light field superresolution," Proc. IEEE International Conference on Computational Photography, pp. 1-9, April 2009.
[11] V. Boominathan, K. Mitra, and A. Veeraraghavan, "Improving resolution and depth-of-field of light field cameras using a hybrid imaging system," Proc. IEEE International Conference on Computational Photography, pp. 1-10, May 2014.
[12] D. Cho, M. Lee, S. Kim, and Y.-W. Tai, "Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction," Proc. IEEE International Conference on Computer Vision, pp. 3280-3287, December 2013.
[13] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, "Learning low-level vision," International Journal of Computer Vision, vol. 40, no. 1, pp. 25-47, October 2000.
[14] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, "The lumigraph," Proc. SIGGRAPH '96, pp. 43-54, August 1996.
[15] X. Huang and O. Cossairt, "Dictionary learning based color demosaicing for plenoptic cameras," Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 455-460, June 2014.
[16] M. Levoy and P. Hanrahan, "Light field rendering," Proc. SIGGRAPH '96, pp. 31-42, August 1996.
[17] Z. Li, Image Patch Modeling in a Light Field, Ph.D. thesis, EECS Department, University of California, Berkeley, May 2014.
[18] K. Mitra and A. Veeraraghavan, "Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior," Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 22-28, June 2012.
[19] S. Wanner and B. Goldluecke, "Spatial and angular variational superresolution of 4D light fields," Proc. European Conference on Computer Vision, vol. 7576, pp. 608-621, October 2012.
[20] C.-Y. Yang, C. Ma, and M.-H. Yang, "Single-image super-resolution: A benchmark," Proc. European Conference on Computer Vision, vol. 8692, pp. 372-386, September 2014.

Seung-Jae Lee
- Feb. 2013: B.S. degree
- Aug. 2015: M.S. degree
- Jul. 2015 ~ present:
- ORCID: http://orcid.org/0000-0001-5309-8898
- Research interests: image processing (light field), GPGPU

In Kyu Park
- Feb. 1995: B.S. degree
- Feb. 1997: M.S. degree
- Aug. 2001: Ph.D. degree
- Sep. 2001 ~ Mar. 2004:
- Jan. 2007 ~ Feb. 2008: Mitsubishi Electric Research Laboratories (MERL)
- Sep. 2014 ~ Aug. 2015: Visiting Associate Professor, MIT Media Lab
- Mar. 2004 ~ present: Faculty member, Department of Information and Communication Engineering, Inha University
- ORCID: http://orcid.org/0000-0003-4774-7841
- Research interests: computer vision (3D computer vision, computational photography), GPGPU