Gicheol Kim et al., "Object Tracking Method using Deep Learning and Kalman Filter" (Regular Paper), JBE Vol. 24, No. 3, May 2019


(Regular Paper) JBE Vol. 24, No. 3, May 2019
https://doi.org/10.5909/jbe.2019.24.3.495
ISSN 2287-9137 (Online), ISSN 1226-7953 (Print)

Object Tracking Method using Deep Learning and Kalman Filter

Gicheol Kim a), Sohee Son a), Minseop Kim a), Jinwoo Jeon b), Injae Lee b), Jihun Cha b), and Haechul Choi a)

Abstract
Typical deep learning algorithms include CNN (Convolutional Neural Networks), used mainly for image recognition, and RNN (Recurrent Neural Networks), used mainly for speech recognition and natural language processing. Among them, CNN learns the filters that generate feature maps directly from the data, and its excellent performance in image recognition has made it the mainstream approach. Since then, various algorithms such as R-CNN have appeared to improve object detection performance, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, because these deep learning-based algorithms perform detection on individual still images, stable object tracking and detection in video requires a separate tracking capability. Therefore, this paper proposes a method that combines a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. The detection network is YOLO v2, which is capable of real-time processing, and the proposed method achieves a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.

Keywords: YOLO, Kalman filter, Object tracking, CNN, Deep learning

Copyright © 2016 Korean Institute of Broadcast and Media Engineers. All rights reserved.
This is an Open-Access article distributed under the terms of the Creative Commons BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and not altered.

1. Introduction

[Korean introduction text not preserved in this transcription. The surviving fragments cite the growth of ICT and market forecasts for unmanned aerial vehicles [1][2], vision-based aircraft detection systems [3][4], and CNN-based detection [5][6][7], and note that YOLO v2 [8][9], a deep learning-based detector capable of 40-90 fps on a GPU, is combined here with a Kalman filter [10][11][12][13]. The rest of the paper is organized as follows: Section 2 reviews CNN-based detection, Section 3 presents the proposed method, Section 4 reports experiments, and Section 5 concludes.]

a) Hanbat National University
b) ETRI
Corresponding Author: Haechul Choi, E-mail: choihc@hanbat.ac.kr, Tel: +82-42-821-1149, ORCID: http://orcid.org/0000-0002-7594-0828
This research was supported by a grant from the Police Science and Technology R&D Program funded by the Korean National Police Agency [No. 19PCRD-C139687-03, Development and Field Demonstration Test of Surveillance System using radar and EO/IR for detecting illegal Flight of UAVs].
Manuscript received April 15, 2019; Revised May 24, 2019; Accepted May 24, 2019.

2. CNN-based Object Detection

[Korean text not preserved in this transcription. The surviving fragments survey the development of deep learning [14][15], from the early XOR limitation dating to the 1950s [16] to the RBM (Restricted Boltzmann Machine) [17]; CNN-based feature extraction; R-CNN (Region-CNN, 2014) [18] in contrast with hand-crafted low-level features such as SIFT (Scale Invariant Feature Transform) [19], HOG (Histogram of Oriented Gradient) [20], Optical Flow [21], and haar-like features [22]; the R-CNN successors SPP-Net [23], Fast R-CNN [24], and Faster R-CNN [25]; and the single-stage detectors SSD (Single Shot Multi-box Detector) [26], YOLO [8], and YOLO v2 [9]. The YOLO v2 detection network used in this work is given in Table 1.]

Table 1. The detection system network

Layer  Type       Filters  Size/Stride  Input         Output
0      conv       32       3x3/1        416x416x3     416x416x32
1      max                 2x2/2        416x416x32    208x208x32
2      conv       64       3x3/1        208x208x32    208x208x64
3      max                 2x2/2        208x208x64    104x104x64
4      conv       128      3x3/1        104x104x64    104x104x128
5      conv       64       1x1/1        104x104x128   104x104x64
6      conv       128      3x3/1        104x104x64    104x104x128
7      max                 2x2/2        104x104x128   52x52x128
8      conv       256      3x3/1        52x52x128     52x52x256
9      conv       128      1x1/1        52x52x256     52x52x128
10     conv       256      3x3/1        52x52x128     52x52x256
11     max                 2x2/2        52x52x256     26x26x256
12     conv       512      3x3/1        26x26x256     26x26x512
13     conv       256      1x1/1        26x26x512     26x26x256
14     conv       512      3x3/1        26x26x256     26x26x512
15     conv       256      1x1/1        26x26x512     26x26x256
16     conv       512      3x3/1        26x26x256     26x26x512
17     max                 2x2/2        26x26x512     13x13x512
18     conv       1024     3x3/1        13x13x512     13x13x1024
19     conv       512      1x1/1        13x13x1024    13x13x512
20     conv       1024     3x3/1        13x13x512     13x13x1024
21     conv       512      1x1/1        13x13x1024    13x13x512
22     conv       1024     3x3/1        13x13x512     13x13x1024
23     conv       1024     3x3/1        13x13x1024    13x13x1024
24     conv       1024     3x3/1        13x13x1024    13x13x1024
25     route 16
26     conv       64       1x1/1        26x26x512     26x26x64
27     reorg               /2           26x26x64      13x13x256
28     route 27 24
29     conv       1024     3x3/1        13x13x1280    13x13x1024
30     conv       30       1x1/1        13x13x1024    13x13x30
31     detection
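The spatial sizes in Table 1 follow standard convolution arithmetic: assuming the usual "same" padding, each 3x3/1 and 1x1/1 conv preserves height and width, and each 2x2/2 max-pool halves them, so the five max-pool layers (1, 3, 7, 11, 17) take the 416x416 input down to the 13x13 detection grid. A quick sketch (not from the paper) checks this:

```python
def out_hw(size, kernel, stride, pad):
    """Output spatial size of one conv/pool layer (standard formula)."""
    return (size + 2 * pad - kernel) // stride + 1

def conv3x3_s1(size):
    # 3x3 conv, stride 1, pad 1 ("same"): preserves the spatial size
    return out_hw(size, 3, 1, 1)

def maxpool2x2_s2(size):
    # 2x2 max-pool, stride 2, no pad: halves the spatial size
    return out_hw(size, 2, 2, 0)

h = 416                      # network input is 416x416x3 (Table 1, layer 0)
h = conv3x3_s1(h)            # layer 0: still 416
for _ in range(5):           # layers 1, 3, 7, 11, 17: the five max-pools
    h = maxpool2x2_s2(h)
print(h)                     # 13, the detection grid size of layers 18-31
```

Running the same arithmetic layer by layer reproduces every Input/Output pair in the table.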

3. Proposed Method

3.1 Kalman Filter

[Korean text not preserved in this transcription. The fragments describe the Kalman filter's recursive loop of prediction and correction, with supporting equations numbered (1) through (4) that are not recoverable here. The overall flow is shown in Fig. 1.]

Fig. 1. Overall flowchart of Kalman filter

[Korean text not preserved in this transcription. The fragments reference the measurement matrix H, the measurement noise covariance R, and a gain between 0 and 1 that weights the prediction against the incoming measurement. They then introduce the combined system, in which the CNN-based YOLO v2 detector supplies measurements to the Kalman filter; the overall system is shown in Fig. 2.]

Fig. 2. System flowchart
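The detect-then-filter loop of Fig. 2 can be sketched as below. The state model, the noise covariances Q and R, and the hand-off logic are illustrative assumptions, not the paper's actual settings; only the detector-plus-Kalman structure and the 0.4 confidence threshold come from the text.

```python
import numpy as np

CONF_THRESHOLD = 0.4  # detections below this confidence are ignored (from the paper)

class KalmanTracker:
    """Constant-velocity Kalman filter over a bounding-box center (assumed model)."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])  # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0              # error covariance
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # measurement matrix H
        self.Q = np.eye(4) * 0.01              # process noise (assumed)
        self.R = np.eye(2) * 1.0               # measurement noise R (assumed)

    def predict(self):
        # Time update: project the state and covariance forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2].copy()

    def update(self, z):
        # Measurement update: correct the prediction with detection z = [x, y].
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def track(detections, tracker):
    """detections: per-frame (confidence, (x, y)) tuples, or None on a miss.
    Returns one tracked position per frame: the corrected estimate when a
    confident detection exists, otherwise the Kalman prediction alone."""
    positions = []
    for det in detections:
        pos = tracker.predict()                 # predict every frame
        if det is not None and det[0] >= CONF_THRESHOLD:
            tracker.update(det[1])              # correct only when gated in
            pos = tracker.x[:2].copy()
        positions.append(pos)
    return positions
```

Frames where detection fails, or where the confidence falls below the threshold, still receive a position from predict(), which is how a detector-plus-Kalman design bridges detection failures.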

3.2 Detection and Tracking with YOLO v2 and the Kalman Filter

[Korean text not preserved in this transcription. The fragments indicate that YOLO v2 detections with a confidence above 0.4 are accepted and passed to the Kalman filter, which carries the track through frames where detection fails; the procedure is shown in Fig. 3.]

Fig. 3. Flowchart of proposed method

4. Experimental Results

[Korean text not preserved in this transcription. The fragments indicate 4,000 training images and 500 test images, drawn from FHD (Full High Definition) video captured with a DJI Phantom 4 Professional (Fig. 4). The experimental environment was Ubuntu 16.04 with an Intel i7-6850K 3.6 GHz CPU, 32 GB RAM, and an NVIDIA GeForce GTX 1080 Ti GPU, using Python 3.5.5, TensorFlow 1.10.0 [28], and OpenCV 3.4.0 [29].]

Fig. 4. Examples of training data

[Performance is evaluated with IoU (Intersection over Union) between the detected box and the ground truth; Fig. 5 illustrates the calculation and several example IoU values, with 0.5 the usual threshold for a correct detection. The baseline YOLO v2 network achieves an average IoU of 0.595 at 24 fps on FHD video (Fig. 6). Out of the 500 test images, the counts reported for YOLO v2 and the proposed method are 75 and 36, apparently detection failures (Fig. 7).]

Fig. 5. Calculation of IoU and evaluation of the various IoU values
Fig. 6. IoU results of YOLO v2 network
Fig. 7. Comparison of proposed method and YOLO v2 with IoU results
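The IoU of Fig. 5 can be computed directly from two boxes. A minimal sketch, assuming the (x1, y1, x2, y2) corner format (the paper does not specify its box representation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection shifted by half a box width against its ground truth:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333333333333333 (one third)
```

With the 0.5 rule mentioned above, this shifted detection would count as a failure, while a perfectly aligned box scores 1.0.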

[Korean text not preserved in this transcription. The fragments report that the proposed method achieves an average IoU of 0.672 at 20 fps on FHD video, a 7.7% improvement over the YOLO v2 baseline's 0.595. Fig. 8 shows frames where YOLO v2 alone fails to detect the object while the proposed method continues to track it.]

Fig. 8. Examples of detection failure (YOLO v2) and tracking success (Proposed method)

5. Conclusion

[Korean text not preserved in this transcription. The conclusion restates that combining the CNN-based YOLO v2 detector with a Kalman filter stabilizes object tracking and detection in video, improving IoU by 7.7% over YOLO v2 at a cost of 4 fps (20 fps versus 24 fps) on FHD video. The fragments also mention future work on the training DB and on distant objects (around 1 km).]

(References)
[1] Teal Group, "World Unmanned Aerial Vehicle Systems: 2014 Market Profile and Forecast," 2014.
[2] Y. Choi and H. Ahn, "Drone's current and technology development trends and prospects," The World of Electricity, vol. 64, no. 12, pp. 20-25, 2015.
[3] E. N. Johnson, A. J. Calise, Y. Watanabe, J. Ha, and J. C. Neidhoefer, "Real-Time Vision-Based Relative Aircraft Navigation," Journal of Aerospace Computing, Information, and Communication, vol. 4, pp. 707-738, 2007.
[4] J. Lai, L. Mejias, and J. J. Ford, "Airborne Vision-Based Collision-Detection System," Journal of Field Robotics, vol. 28, no. 2, pp. 137-157, 2011.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, 2012.
[6] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[7] S. Gidaris and N. Komodakis, "Object detection via a multi-region and semantic segmentation-aware CNN model," Proceedings of the IEEE International Conference on Computer Vision, 2015.
[8] J. Redmon et al., "You only look once: Unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[9] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[10] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, vol. 3, Wiley, New York, 1992.
[11] B. Ristic, S. Arulampalam, and N. Gordon, "Beyond the Kalman filter," IEEE Aerospace and Electronic Systems Magazine, vol. 19, no. 7, pp. 37-38, 2004.
[12] S. Haykin, Kalman Filtering and Neural Networks, vol. 47, John Wiley & Sons, 2004.
[13] N. Peterfreund, "Robust tracking of position and velocity with Kalman snakes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 564-569, 1999.
[14] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015.
[15] L. Deng and D. Yu, "Deep learning: Methods and applications," Foundations and Trends in Signal Processing, vol. 7, no. 3-4, pp. 197-387, 2014.
[16] V. Mayer-Schönberger and K. Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and Think, Houghton Mifflin Harcourt, 2013.
[17] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010.
[18] R. Girshick et al., "Rich feature hierarchies for accurate object detection and semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[19] D. G. Lowe, "Object recognition from local scale-invariant features," Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, 1999.
[20] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, 2005.
[21] J.-Y. Bouguet, "Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm," Intel Corporation, vol. 5, no. 1-10, p. 4, 2001.
[22] R. Lienhart and J. Maydt, "An extended set of haar-like features for rapid object detection," Proceedings of the International Conference on Image Processing, vol. 1, 2002.
[23] K. He et al., "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, 2015.
[24] R. Girshick, "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, 2015.
[25] S. Ren et al., "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, 2015.
[26] W. Liu et al., "SSD: Single shot multibox detector," European Conference on Computer Vision, Springer, Cham, 2016.
[27] M. Everingham et al., "The PASCAL visual object classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
[28] M. Abadi et al., "TensorFlow: A system for large-scale machine learning," 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016.
[29] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, Inc., 2008.

[Author biographies (in Korean) are not preserved in this transcription. In author order, the listed ORCIDs are: https://orcid.org/0000-0002-2091-4841, http://orcid.org/0000-0003-2499-492x, https://orcid.org/0000-0003-4877-6388, https://orcid.org/0000-0001-9934-1187, https://orcid.org/0000-0002-1975-1838, http://orcid.org/0000-0002-5257-014x, http://orcid.org/0000-0002-7594-0828.]