Generating a Reflectance Image from a Low-Light Image Using Convolutional Neural Network
(Regular Paper)
JBE Vol. 24, No. 4, July 2019
https://doi.org/10.5909/jbe.2019.24.4.623
ISSN 2287-9137 (Online), ISSN 1226-7953 (Print)

Seungsoo Lee a), Changyeol Choi a), and Manbae Kim a)

Abstract

Many researches have been carried out for brightness and contrast enhancement, illumination reduction and so forth. Recently, the aforementioned hand-crafted approaches have been replaced by artificial neural networks. This paper proposes a convolutional neural network that can replace the method of generating a reflectance image where the illumination component is attenuated. Experiments are carried out on 102 low-light images and we validate the feasibility of the replacement by producing satisfactory reflectance images.

Keywords: Low-light image, Retinex, Convolutional neural network, Reflectance, Illumination

a) Department of Computer and Communications Engineering, Kangwon National University
Corresponding Author: Manbae Kim, E-mail: manbae@kangwon.ac.kr, Tel: +82-33-250-6395, ORCID: https://orcid.org/0000-0002-4702-8276
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2018-0-01433) supervised by the IITP (Institute for Information & communications Technology Promotion).
Manuscript received January 31, 2019; Revised April 15, 2019; Accepted June 25, 2019.
Copyright 2016 Korean Institute of Broadcast and Media Engineers. All rights reserved.
This is an Open-Access article distributed under the terms of the Creative Commons BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and not altered.
Enhancing the visibility of low-light images has long been studied through brightness and contrast enhancement, illumination reduction, and related techniques. Representative hand-crafted methods include histogram equalization [1-2] and gamma correction [3]. Land proposed the Retinex theory, which models an image as the product of a reflectance component and an illumination component [4]. Building on Land's theory and the Weber-Fechner law, Jobson et al. proposed the Single Scale Retinex (SSR) [5]. Because SSR attenuates the illumination component, it is well suited to enhancing low-light images such as those in Fig. 1, where a light source dominates the scene brightness.

Recently, these hand-crafted approaches have been replaced by deep neural networks (DNNs) [6-9]. Training a DNN requires ground-truth images. Lore et al. proposed a stacked denoising autoencoder for low-light enhancement [6]. Shen et al. proposed a multi-scale Retinex realized as a convolutional neural network [7]. Park et al. proposed a CNN-based dual autoencoder network [8]. Kim et al. generated training data by applying a gamma transformation to high-light images [9]. Following this line of work, this paper proposes a convolutional neural network (CNN) that replaces the hand-crafted generation of a reflectance image.

Fig. 1. Low-light images affected by light source
This paper examines the feasibility of replacing this process with a CNN. Ground-truth reflectance images are first generated by a Retinex method, and the CNN is trained to predict them directly from the low-light inputs. Fig. 2 compares the conventional approach and the proposed one. In Fig. 2(a), a customized reflectance map generation algorithm produces the reflectance image [8]; in Fig. 2(b), this hand-crafted stage is replaced by a CNN that outputs a predicted reflectance image. Once trained, the CNN alone generates the reflectance image, so the Retinex processing is no longer needed at inference time.

Fig. 2. Difference of conventional reflectance image generation and the proposed method. (a) The approach of the former [8] and (b) diagram replacing a customized reflectance map generation algorithm with CNN

According to Land's Retinex theory [4], the brightness of an image is modeled as the product of an illumination component and a reflectance component.
An input image f(x, y) is expressed as

    f(x, y) = i(x, y) · r(x, y)                      (1)

where i(x, y) is the illumination and r(x, y) is the reflectance. Following the Weber-Fechner law, which relates perceived brightness to the logarithm of stimulus intensity [5], the model is taken to the log domain, where the product becomes a sum:

    log f(x, y) = log i(x, y) + log r(x, y)          (2)

Since the illumination varies slowly across the image, it is estimated by low-pass filtering f, and the log reflectance z(x, y) is obtained by subtraction:

    z(x, y) = log f(x, y) − log[(F ∗ f)(x, y)]       (3)

where F is a low-pass filter and ∗ denotes convolution. In this paper, a Butterworth low-pass filter with a parameter of 0.5 is used for F.

Fig. 3 shows the images used to train the neural network. Fig. 3(a) shows the input low-light images f, and Fig. 3(b) shows the corresponding Retinex reflectance images r^R, which serve as the ground truth.

Fig. 3. Images used at the training of a neural network. (a) input images, f and (b) reflectance images, r^R
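The log-domain decomposition can be sketched in a few lines. The blur below is a deliberately crude stand-in (the global mean) for the Butterworth low-pass filter used in the paper, and `global_mean_blur` and `log_reflectance` are illustrative names, not from the paper:

```python
import math

def global_mean_blur(img):
    """Crude illumination estimate: every pixel replaced by the global mean.
    (Stand-in for the Butterworth low-pass filter used in the paper.)"""
    h, w = len(img), len(img[0])
    m = sum(sum(row) for row in img) / (h * w)
    return [[m] * w for _ in range(h)]

def log_reflectance(img, blur=global_mean_blur):
    """z(x, y) = log f(x, y) - log[(F * f)(x, y)]: SSR-style log reflectance.
    The smoothed image approximates the illumination; pixels must be positive."""
    smoothed = blur(img)
    return [[math.log(img[y][x]) - math.log(smoothed[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

A useful sanity check is the core Retinex property: scaling the whole image by a constant illumination factor leaves the reflectance unchanged, because the factor cancels in the log-domain subtraction.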
Fig. 4 shows the overall block diagram of the proposed method. The input low-light image is first processed by a low-pass filter (LPF), yielding an LPF input image; the high-frequency data removed by the 3x3 filter is stored separately and added back to the network output, since fine detail is difficult for the network to reproduce on its own. The LPF image is fed to the CNN, which predicts the reflectance image.

CNNs have proven effective in closely related restoration problems such as denoising [10] and super-resolution [11]. CNN autoencoders [12] and deep deconvolution networks [10] have likewise been applied to motion deblurring and super-resolution, which motivates using a CNN to predict reflectance images here.

Fig. 4. The overall block diagram of the proposed method
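The band split at the front of Fig. 4 can be sketched as follows. The paper specifies a 3x3 filter but not its coefficients, so a 3x3 mean filter with edge replication is assumed here; `box_blur_3x3` and `split_bands` are illustrative names:

```python
def box_blur_3x3(img):
    """3x3 mean filter with edge replication; img is a list of rows of floats."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    s += img[yy][xx]
            out[y][x] = s / 9.0
    return out

def split_bands(img):
    """Return (LPF image, high-frequency residual); their sum is the input.
    The LPF image goes to the CNN; the residual is added back to its output."""
    lpf = box_blur_3x3(img)
    hf = [[img[y][x] - lpf[y][x] for x in range(len(img[0]))]
          for y in range(len(img))]
    return lpf, hf
```

By construction the two bands sum exactly back to the input, so restoring the stored high-frequency data to the network output is lossless with respect to the filtering step.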
Fig. 5 shows the CNN model of Fig. 4. The network consists of two convolutional layers (conv layers) and a fully-connected layer (fc layer). As in [6,7,8,9], the network is trained on image patches rather than whole images; the patch size is N = 20. Table 1 lists the layers and parameters of the CNN. Conv layer 1 applies 128 filters of size 3x3 with a relu (rectified linear unit) activation function, followed by max pooling. Conv layer 2 applies 64 filters of size 3x3, again with relu. The fully-connected layer fc-1 has 500 nodes, and the output nodes use a sigmoid activation so that the output lies in [0,1].

Fig. 5. CNN model in Fig. 4

Table 1. Layers and parameters of CNN network

    Layer                  Parameter
    Input                  NxN patch
    conv layer 1           128, 3x3 filter
    pooling                max pooling
    activation function    relu
    conv layer 2           64, 3x3 filter
    activation function    relu
    fc-1                   500 nodes
    activation function    sigmoid
    Output                 nodes

Experiments were carried out in MATLAB R2018a on Windows 10, using 128x128 images. A total of 102 low-light images were used: 82 for training and 20 for testing. Pixel values were normalized to [0,1] before training.
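As a sanity check on Table 1, the shapes and parameter counts can be traced in plain Python. 'Same' padding, 2x2 max pooling, a single-channel input, and an output layer of N·N sigmoid nodes are assumptions not stated explicitly in the paper:

```python
def conv_same_params(in_ch, out_ch, k=3):
    """Parameter count of a k x k 'same' convolution (weights + biases)."""
    return k * k * in_ch * out_ch + out_ch

def dense_params(n_in, n_out):
    """Parameter count of a fully-connected layer (weights + biases)."""
    return n_in * n_out + n_out

# Table 1, traced for an N x N = 20 x 20 grayscale patch.
N = 20
conv1 = conv_same_params(1, 128)   # conv layer 1: 128 filters, 3x3
# 2x2 max pooling: 20x20x128 -> 10x10x128 (no parameters)
conv2 = conv_same_params(128, 64)  # conv layer 2: 64 filters, 3x3
flat = 10 * 10 * 64                # flattened conv layer 2 output
fc1 = dense_params(flat, 500)      # fc-1: 500 nodes
out = dense_params(500, N * N)     # sigmoid output nodes (patch reconstruction)
total = conv1 + conv2 + fc1 + out
```

Under these assumptions the network is small by modern standards, with roughly 3.5 million parameters, most of them in fc-1.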
Since the network output lies in [0,1], it is multiplied by 255 to obtain pixel values in [0,255] for evaluation. Three metrics are used:

1) Root Mean Squared Error (RMSE), computed between the target and predicted reflectance images.
2) Peak Signal-to-Noise Ratio (PSNR), computed as PSNR = 20 log10(255 / RMSE) in dB.
3) Structural Similarity Index (SSIM) [13], which measures the structural similarity of two images; a value of 1 indicates identical images and 0 indicates no similarity.

Fig. 6. Low-light images under illumination used in experiments [7]

The patch stride follows the practice of [8,9]. The network was trained with a learning rate of 0.001 using Adam, a variant of stochastic gradient descent. The target reflectance images of Fig. 4 and the predicted reflectance images were compared over the 82 training images and 20 test images. On the [0,255] scale, RMSE (Training, Test) = (18.62, 27.26), PSNR (Training, Test) = (23.47, 19.44) dB, and SSIM (Training, Test) = (0.42, 0.22). For reference, [8] reports PSNR of 13~16 dB and SSIM of 0.38~0.65; a direct comparison is not meaningful, however, since the results of [8] were obtained with different data and processing.
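The three metrics can be sketched directly from their definitions. The SSIM below is a single-window global variant for illustration only; the reference implementation of [13] averages the index over local windows:

```python
import math

def rmse(a, b):
    """Root mean squared error over flat pixel lists in [0, 255]."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20 log10(peak / RMSE)."""
    e = rmse(a, b)
    return float('inf') if e == 0 else 20.0 * math.log10(peak / e)

def ssim_global(a, b, peak=255.0):
    """Single-window SSIM [13] over the whole image (a sketch; the standard
    method computes this in local windows and averages)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    va = sum((x - mu_a) ** 2 for x in a) / n
    vb = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

For example, two constant images differing by 10 gray levels give an RMSE of exactly 10 and a PSNR of 20 log10(25.5) ≈ 28.13 dB.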
Table 2 summarizes the RMSE, PSNR and SSIM results.

Table 2. Performance evaluation of RMSE, PSNR and SSIM

                Avg. RMSE (in pixel)    Avg. PSNR (in dB)    Avg. SSIM [0,1]
                Training    Test        Training    Test     Training    Test
                18.62       27.26       23.47       19.44    0.42        0.22

Figs. 7 and 8 show the resulting images for the training and test sets, respectively. Figs. 7(a) and 8(a) show the target reflectance images generated by the Retinex method of Fig. 4, and Figs. 7(b) and 8(b) show the predicted reflectance images. Among the test images of Fig. 8, columns 1, 2 and 3 are predicted relatively well, while columns 4 and 5 show larger differences from the ground truth. Close-ups of columns 1, 2 and 3 are given in Fig. 9; comparing Fig. 9(a) and 9(b), the predicted images preserve the overall appearance of the ground truth. All images are in [0,255].

Fig. 7. Resulting images of train images. (a) Ground-truth output images, and (b) predicted images

Fig. 8. Resulting images of test images. (a) Ground-truth output images, and (b) predicted images
Fig. 9. Close-up of Columns 1, 2, 3 in Fig. 8. (a) Ground-truth output images, and (b) predicted images

This paper proposed a CNN that replaces the hand-crafted generation of a reflectance image from a low-light image. Ground-truth reflectance images were produced by a Retinex method and used to train the CNN, which then predicts the reflectance image directly. In experiments on 102 low-light images, the training set achieved RMSE of 18.62, PSNR of 23.47 dB and SSIM of 0.42, and the test set achieved RMSE of 27.26, PSNR of 19.44 dB and SSIM of 0.22. These results validate the feasibility of replacing the hand-crafted Retinex processing with a CNN.

References

[1] D. Cho, H. Kang, and W. Kim, "An image enhancement algorithm on color constancy and histogram equalization using edge region," Journal of Broadcast Engineering, Vol. 15, No. 3, 2010.
[2] H. Cheng and X. Shi, "A simple and effective histogram equalization approach to image enhancement," Digital Signal Processing, pp. 158-170, 2004.
[3] C. Poynton, "The rehabilitation of gamma," Human Vision and Electronic Imaging, Proceedings of SPIE, Vol. 3299, pp. 232-249, 1998.
[4] E. Land and J. McCann, "Lightness and retinex theory," J. Opt. Soc. Am., Vol. 61, No. 1, pp. 1-11, 1971.
[5] D. Jobson, Z. Rahman, and G. Woodell, "Properties and performance of a center/surround retinex," IEEE Transactions on Image Processing, Vol. 6, No. 3, pp. 451-462, Mar. 1997.
[6] K. Lore, A. Akintayo, and S. Sarkar, "LLNet: A deep autoencoder approach to natural low-light image enhancement," Pattern Recognition, Vol. 61, pp. 650-662, 2017.
[7] L. Shen, Z. Yue, F. Feng, and Q. Chen, "MSR-net: Low-light image enhancement using deep convolutional network," arXiv:1711.02488.
[8] S. Park, S. Yu, M. Kim, K. Park, and J. Paik, "Dual autoencoder network for Retinex-based low-light image enhancement," IEEE Access, Vol. 6, Mar. 2018.
[9] W. Kim, I. Hwang, and M. Kim, "Generating a Retinex-based reflectance image from a low-light image using deep neural network," Journal of Broadcast Engineering, Vol. 24, No. 1, Jan. 2019.
[10] L. Xu, C. Liu, J. Ren, and J. Jia, "Deep convolutional neural network for image deconvolution," Int. Conf. Neural Information Processing Systems, 2014.
[11] A. Agrawal and R. Raskar, "Resolving objects at higher resolution from a single motion-blurred image," Int. Conf. Computer Vision and Pattern Recognition, 2007.
[12] C. Turhan and H. Bilge, "Recent trends in deep generative models: a review," 3rd Int. Conf. Computer Science and Engineering, 2018.
[13] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Processing, Vol. 13, No. 4, pp. 600-612, 2004.
Author biographies (the original Korean text was lost in extraction; only the following details survive):
- Seungsoo Lee: received his degree in August 2017; graduate student in the Department of Computer and Communications Engineering since September 2017; research interests include image processing.
- Changyeol Choi: degrees received in 1979, 1981 and 1995; with ETRI from 1984 to 1996; in his current position since 1996; ORCID: http://orcid.org/0000-0002-8340-4195; research interests include 3D imaging and image processing.
- Manbae Kim: degree received in 1983; M.S. and Ph.D. from the University of Washington, Seattle (1986 and 1992); in industry from 1992 to 1998; professor since 1998; ORCID: http://orcid.org/0000-0002-4702-8276; research interests include 3D imaging and image processing.