A Basic Introduction to Artificial Neural Networks (ANN)
What exactly is an artificial neural network?

INDEX
1. Introduction to Artificial Neural Networks
2. Perceptron
3. Backpropagation Neural Network
4. Hopfield memory
5. Self Organizing Map (SOM)

AI Lab
1. Introduction to ANN

Connectionism vs. Symbolism
<Diagram: Artificial intelligence divides into two approaches, Connectionism and Symbolism>
Introduction to ANN
Connectionism vs. Symbolism
Connectionism and Symbolism are the two classical approaches that have shaped the field of artificial intelligence.
Symbolic AI represents knowledge as symbols and the relations or logic between them. To solve problems or infer new knowledge, it applies algebraic inference to symbolic instances and their relations (e.g., logical inference over ontologies, probabilistic inference, fuzzy inference).
Connectionist AI represents knowledge in a form distributed over a network. It mimics the nervous structure of living organisms to give rise to intelligent processes, thereby solving recognition, learning, and inference problems (e.g., artificial neural networks).

Introduction to ANN
The Brain vs. Computer
1. Billions of neurons
2. 6 trillion synapses
3. Distributed processing
4. Nonlinear processing
5. Parallel processing
Introduction to ANN
Biological inspiration
Living organisms adapt their behavior to their environment and learn from it.
They do this with a network-shaped nervous system.
<Nervous system>

Introduction to ANN
Biological inspiration
An artificial neural network mimics this animal nervous system, as an approach to solving problems that symbolic AI could not handle.
The nervous system is a network of simple basic units called neurons.
By imitating the basic form of the neuron, ANNs aim to reproduce the function and behavior of the nervous system.
Introduction to ANN
Biological inspiration
<Structure of a neuron>
Through its dendrites, a neuron receives input signals from many other neurons.
Signals processed around the nucleus are propagated to the next neurons through the axon terminals.

Introduction to ANN
Biological inspiration
The dendrites, nucleus, and axon terminals handle a neuron's input, processing, and output (propagation), respectively; ANNs imitate this structure.
In an ANN, a unit with this structure is called a perceptron.
Types of simple ANNs

Input type   | Learning method         | Artificial neural network model
Binary input | Supervised learning     | Hopfield memory, BAM
Real input   | Supervised learning     | Perceptron, Backpropagation neural network
Real input   | Unsupervised learning   | Self-Organizing Map (SOM)

<Example classification of ANN models>

2. Perceptron
Perceptron
<Structure of a perceptron>

Activation functions
<Hard limiter> <Linear> <Sigmoid>
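A perceptron computes a weighted sum of its inputs and passes it through an activation function. A minimal sketch of the three activation functions pictured above (the weights and threshold are illustrative values, not taken from the slides):

```python
import math

# A perceptron unit: weighted sum of inputs, then an activation function.
def weighted_sum(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# The three activation functions pictured on the slide.
def hard_limiter(net, theta):
    return 1 if net >= theta else 0        # step at the threshold theta

def linear(net):
    return net                             # identity

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))    # smooth squashing into (0, 1)

# Illustrative weights: with inputs (1, 0) the net input is 0.4,
# which stays below a threshold of 0.5.
net = weighted_sum((1, 0), (0.4, 0.4))
```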
Perceptron
Widrow-Hoff rule (delta rule)
Each weight is adjusted in proportion to the error between the desired output d and the actual output o:

  Δw_i = η · (d − o) · x_i,    w_i(t+1) = w_i(t) + Δw_i

where η is the learning rate and x_i is the i-th input.
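The Widrow-Hoff (delta) rule named above can be sketched as follows, assuming the standard form w_i ← w_i + η(d − o)x_i with a hard-limiter output and a bias weight standing in for the threshold (the learning rate, epoch count, and zero initialization are illustrative choices):

```python
# Delta-rule (Widrow-Hoff) training sketch: the weight vector w holds
# a bias w[0] (its input is fixed at 1) plus one weight per input.

def predict(x, w):
    net = w[0] + sum(xi * wi for xi, wi in zip(x, w[1:]))
    return 1 if net >= 0 else 0            # hard limiter

def train_delta(samples, eta=0.1, epochs=100):
    w = [0.0, 0.0, 0.0]                    # bias + two input weights
    for _ in range(epochs):
        for x, d in samples:
            err = d - predict(x, w)        # 0 when already correct
            w[0] += eta * err              # bias input is 1
            for i, xi in enumerate(x):
                w[i + 1] += eta * err * xi
    return w

# Training the AND operation (used as the example on the next slide).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_delta(AND)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on correct weights.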
Perceptron
Example (AND operation using a perceptron)
The neuron outputs 1 if the weighted sum of its inputs reaches the threshold, and 0 otherwise.
Activation function: hard limiter with threshold 0.5.

AND requires:
  0·W1 + 0·W2 < 0.5
  0·W1 + 1·W2 < 0.5
  1·W1 + 0·W2 < 0.5
  1·W1 + 1·W2 ≥ 0.5
A solution: W1 = W2 = 0.3 (or 0.4)
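The AND conditions can be checked directly; with W1 = W2 = 0.3 and threshold 0.5, a hard-limiter unit reproduces the AND truth table:

```python
# Verify the slide's AND solution: W1 = W2 = 0.3, threshold 0.5.
W1 = W2 = 0.3
theta = 0.5
truth = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
# Hard-limiter output for every input pair.
outputs = {x: (1 if x[0] * W1 + x[1] * W2 >= theta else 0) for x in truth}
```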
Perceptron
Example (XOR operation using a perceptron)
XOR requires:
  0·W1 + 0·W2 < 0.5
  0·W1 + 1·W2 ≥ 0.5
  1·W1 + 0·W2 ≥ 0.5
  1·W1 + 1·W2 < 0.5
No W1, W2 satisfy all four conditions: a single perceptron cannot solve even the simple XOR problem.
XOR is linearly non-separable.
To solve such problems, two or three layers are used: the Backpropagation Neural Network (multi-layer perceptron).
A three-layer perceptron can solve (or approximate) any such problem.
The perceptron is the foundation of the multi-layer perceptron and of the error backpropagation algorithm.
<Figure: a multi-layer perceptron solving XOR, with example weights and unit thresholds Θ>
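A two-layer network does solve XOR. The weights below are an illustrative hand-set solution (not the ones in the slide's figure): one hidden unit computes OR, the other NAND, and the output unit ANDs them together:

```python
# XOR with two layers of hard-limiter units (illustrative weights).
def step(net, theta):
    return 1 if net >= theta else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2, 0.5)       # hidden unit 1: OR of the inputs
    h2 = step(-x1 - x2, -1.5)     # hidden unit 2: NAND of the inputs
    return step(h1 + h2, 1.5)     # output unit: AND of the hidden outputs

table = {(x1, x2): xor_net(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
```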
3. Backpropagation Neural Network

Backpropagation Neural Network
What is a Backpropagation Neural Network?
<Backpropagation Neural Network>
A feed-forward neural network with one or more hidden layers between the input layer and the output layer.
Backpropagation Neural Network
What is a Backpropagation Neural Network?
It overcomes the linear-separability limitation of the single-layer perceptron (it can implement the XOR operation and similar functions).
It is widely used for general continuous function approximation problems.
It is based on the error backpropagation algorithm, which emerged in the mid-1980s.
It is also known as the generalized delta rule.

Learning?
Learning minimizes an error function defined as the sum of squared differences between the desired target value (d) and the actual output value (o).

Backpropagation Neural Network
Learning? Error backpropagation!
Concept of the error backpropagation algorithm:
To train the hidden layer, the error produced at the output layer is used to recompute the hidden-layer weights.
This error is then propagated backwards toward the input layer, and the weights are recomputed again.
The output-layer error is minimized with the gradient descent method.
Limitations:
Propagating the error between target and actual output from upper layers down to lower layers does not match biological reality.
In biological systems, the neurons of a lower layer generally do not know the target values of the upper layer.
Nevertheless, it is the most commonly used method for training artificial neural networks.
Gradient Descent Method
Each weight is moved a small step in the direction that decreases the error function:

  w(t+1) = w(t) − η · ∂E/∂w

where η is the learning rate. Repeating this update drives the error toward a (local) minimum.
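Gradient descent in one dimension, as a sketch: minimizing E(w) = (w − 3)² by repeatedly stepping against the gradient (the target function, starting point, and step size are illustrative):

```python
# Minimize E(w) = (w - 3)^2 with the update  w <- w - eta * dE/dw.
def dE(w):
    return 2.0 * (w - 3.0)   # derivative of (w - 3)^2

w = 0.0                      # starting point
eta = 0.1                    # learning rate
for _ in range(100):
    w -= eta * dE(w)
# each step shrinks the distance to the minimum at w = 3 by a factor 0.8
```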
Backpropagation Neural Network
Learning? Error backpropagation!
<Figure: input layer, hidden layer, and output layer; the i-th input neuron feeds the j-th hidden neuron>
Backpropagation Neural Network
Learning? Error backpropagation!
The net input to hidden neuron j is the weighted sum over the N input neurons:

  input_j = Σ_{i=1..N} X_pi · W_ij

where X_pi is the i-th component of training pattern p and W_ij is the weight from input neuron i to hidden neuron j.
Backpropagation Neural Network
Learning? Error backpropagation!
Likewise, the net input to output neuron k is the weighted sum over the M hidden neurons:

  input_k = Σ_{j=1..M} O_j · W_jk

where O_j is the output of hidden neuron j and W_jk is the weight from hidden neuron j to output neuron k.
Backpropagation Neural Network
Learning? Error backpropagation!
Training w_jk (hidden-to-output weights): for pattern p the error is

  E_p = (1/2) · Σ_k (d_k − O_k)²

By the chain rule,

  ∂E_p/∂W_jk = (∂E_p/∂O_k) · (∂O_k/∂input_k) · (∂input_k/∂W_jk) = −(d_k − O_k) · O_k(1 − O_k) · O_j

Define δ_k = (d_k − O_k) · O_k(1 − O_k); δ_k and w_jk are then learned together.
Backpropagation Neural Network
Learning? Error backpropagation!
Training w_jk: the gradient descent update is

  W_jk(t+1) = W_jk(t) + η · δ_k · O_j

where η is the learning rate.

Backpropagation Neural Network
Learning? Error backpropagation!
The output-layer error is then propagated backwards to the hidden layer (error backpropagation).
Backpropagation Neural Network
Learning? Error backpropagation!
Training w_ij (input-to-hidden weights): the back-propagated error gives

  ∂E_p/∂W_ij = (∂E_p/∂O_j) · (∂O_j/∂input_j) · (∂input_j/∂W_ij) = −(Σ_k δ_k · W_jk) · O_j(1 − O_j) · X_pi

Define δ_j = O_j(1 − O_j) · Σ_k δ_k · W_jk; δ_j and w_ij are then learned together, with the gradient descent update

  W_ij(t+1) = W_ij(t) + η · δ_j · X_pi
Backpropagation Neural Network
Learning? Error backpropagation! (summary)

  input_j = Σ_i X_pi · W_ij,    O_j = f(input_j)
  input_k = Σ_j O_j · W_jk,    O_k = f(input_k)
  E_p = (1/2) · Σ_k (d_k − O_k)²
  ∂E_p/∂W_jk = −(d_k − O_k) · O_k(1 − O_k) · O_j = −δ_k · O_j
  ∂E_p/∂W_ij = −(Σ_k δ_k · W_jk) · O_j(1 − O_j) · X_pi = −δ_j · X_pi

with the sigmoid activation y = f(x) = 1 / (1 + e^(−x)), whose derivative is f′(x) = y(1 − y).
Backpropagation Neural Network
Learning? Error backpropagation!
8. Update the hidden-to-output weights by gradient descent:
     W_jk(t+1) = W_jk(t) + η · δ_k · O_j
9. Update the input-to-hidden weights by gradient descent:
     W_ij(t+1) = W_ij(t) + η · δ_j · X_pi
10. Branch back to step 2 and repeat until every training pair has been learned.
11. If the output-layer error sum E is below the allowed value, or the maximum number of iterations has been exceeded, stop; otherwise return to step 2 and repeat.

Backpropagation Neural Network
Function approximation
Training pairs ⟨x, y⟩ generated from a target relation combining cosine and sine terms are used as the training data (the points in the figure below).
Network: 1 input neuron, 6 hidden neurons, 1 output neuron.
<Figure: the training data and the network's fit after increasing numbers of training iterations>
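Putting the whole procedure together: a minimal pure-Python sketch of a two-layer sigmoid network trained with the δ_k and δ_j updates from the preceding slides, on XOR. The layer sizes (2-6-1), learning rate, epoch count, and random initialization are illustrative choices, not values from the slides:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """Two-layer backpropagation network with sigmoid units and bias weights."""
    def __init__(self, n_in, n_hid, n_out, eta=0.5, seed=1):
        rng = random.Random(seed)
        self.eta = eta
        self.W1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.W2 = [[rng.uniform(-1, 1) for _ in range(n_hid + 1)] for _ in range(n_out)]

    def forward(self, x):
        xb = list(x) + [1.0]                   # append bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in self.W1]
        hb = self.h + [1.0]
        self.o = [sigmoid(sum(w * v for w, v in zip(ws, hb))) for ws in self.W2]
        return self.o

    def backward(self, x, d):
        # delta_k = (d_k - O_k) O_k (1 - O_k)            (output layer)
        dk = [(dt - o) * o * (1 - o) for dt, o in zip(d, self.o)]
        # delta_j = O_j (1 - O_j) sum_k delta_k W_jk     (hidden layer)
        dj = [h * (1 - h) * sum(dk[k] * self.W2[k][j] for k in range(len(dk)))
              for j, h in enumerate(self.h)]
        hb = self.h + [1.0]
        for k, ws in enumerate(self.W2):               # step 8: W_jk update
            for j in range(len(ws)):
                ws[j] += self.eta * dk[k] * hb[j]
        xb = list(x) + [1.0]
        for j, ws in enumerate(self.W1):               # step 9: W_ij update
            for i in range(len(ws)):
                ws[i] += self.eta * dj[j] * xb[i]

XOR = [((0, 0), [0]), ((0, 1), [1]), ((1, 0), [1]), ((1, 1), [0])]

def mse(net):
    return sum((d[0] - net.forward(x)[0]) ** 2 for x, d in XOR) / len(XOR)

net = BPNN(2, 6, 1)
err_before = mse(net)
for _ in range(5000):
    for x, d in XOR:
        net.forward(x)
        net.backward(x, d)
err_after = mse(net)
```

After training, the squared error is lower than at the random start; with these settings the network typically reproduces the XOR table.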
Backpropagation Neural Network Example
https://www.youtube.com/watch?v=6lgazeomn-4

4. Hopfield memory
Hopfield memory
What is Hopfield memory?
<Hopfield memory>
A Hopfield memory is an ANN in which every neuron is bidirectionally connected to every other neuron except itself.
It uses a hard limiter as its activation function.
The basic model uses bipolar values (+1, −1).
It is mainly used for associative memory and for solving optimization problems.
Unlike other kinds of ANN models it does not learn incrementally; the connection weights are built from the sum of outer products of the initial training patterns.
Because it uses a single neuron layer, the input and output vectors have the same dimension (auto-associative memory).
Hopfield memory
What kind of problem can I solve with Hopfield memory?
Associative memory problems (e.g., handwriting recognition)
Optimization problems (e.g., the traveling salesman problem)

Hopfield memory
How does Hopfield memory work?
<Hopfield memory>
Hopfield memory
Learning (Hopfield memory)
The weight matrix is the sum of outer products of the P stored bipolar patterns x^p, with no self-connections:

  W_ij = Σ_{p=1..P} x_i^p · x_j^p  (i ≠ j),    W_ii = 0

Recall starts from an input pattern and repeatedly applies the hard limiter until the state converges:

  x_i ← sgn(Σ_j W_ij · x_j)
Hopfield memory
Example: pattern image size 5×5; the patterns { ㄱ, ㄴ, ㄷ, ㄹ } are used for training.
How many patterns can a network of a given size store?
A Hopfield network with N neurons can in general store about 0.15·N patterns.
In this example 25 × 0.15 = 3.75, so fewer than 4 patterns can be recognized.
A major weakness of the Hopfield network is that convergence to an optimal result is not guaranteed: it can recall incorrect memories.
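The scheme described above can be sketched in a few lines: weights from the sum of outer products of the stored bipolar patterns (diagonal zeroed), recall by repeated hard-limiter updates. The two 6-unit patterns below are illustrative stand-ins for the 5×5 Korean-letter images:

```python
# Hopfield memory: outer-product (Hebbian) weights, asynchronous recall.

def hopfield_train(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                       # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def hopfield_recall(W, x, sweeps=5):
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):                  # asynchronous update
            net = sum(W[i][j] * x[j] for j in range(len(x)))
            if net != 0:                         # keep state on a tie
                x[i] = 1 if net > 0 else -1
    return x

p1 = [1, 1, 1, -1, -1, -1]
p2 = [-1, 1, 1, -1, 1, 1]                        # orthogonal to p1
W = hopfield_train([p1, p2])
noisy = [1, 1, 1, -1, -1, 1]                     # p1 with the last bit flipped
restored = hopfield_recall(W, noisy)
```

Recall corrects the flipped bit and settles on the stored pattern p1; both stored patterns are stable states of the network.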
Hopfield memory
Example of Hopfield memory
https://www.youtube.com/watch?v=egazcwejguy

5. Self Organizing Map (SOM)
Self Organizing Map (SOM)
What is SOM?
<Self Organizing Map>
SOM rests on the expectation that neighboring output neurons perform similar functions (the assumption that different parts of the brain handle different cognitive functions, while nearby parts handle similar ones).
Not only the output neuron closest to the input vector (the winner neuron) but also its topological neighbors are trained together.
Self Organizing Map (SOM)
What kind of problem can I solve with SOM?
Clustering & classification
Optimization problems (e.g., the traveling salesman problem)

Self Organizing Map (SOM)
Learning (SOM)
1. Initialize the connection weights: set the weights between the N inputs and the M output neurons to random values.
2. Present a new input pattern to the input neurons.
3. Compute the distance d_j between the input vector and every output neuron (the distance between the input vector and each weight vector).
4. Find the winner neuron with the minimum distance: select the output neuron j* for which d_j is minimal.
Self Organizing Map (SOM)
Learning (SOM)
5. Adjust the connection weights of neuron j* and of the neurons within its neighborhood radius:

     w_ij(t+1) = w_ij(t) + α(t) · (x_i − w_ij(t))

   where j runs over all neurons within the neighborhood radius of j* and α(t) is the learning rate.
6. Return to step 2 and process every input vector.
7. Repeat steps 2-6 a sufficient number of times, gradually shrinking the neighborhood radius, until the specified number of training iterations is reached.

Self Organizing Map (SOM)
Learning (SOM, example)
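Steps 1-7 can be sketched as a small one-dimensional map learning two clusters of 2-D inputs. For reproducibility the weights start spread along the diagonal rather than randomly (step 1 on the slide uses random initialization); the map size, learning-rate schedule, and radius schedule are illustrative choices:

```python
# One-dimensional SOM: m output neurons arranged in a line, 2-D inputs.

def som_train(data, m=5, epochs=30):
    # Step 1: initialize weight vectors (deterministic spread for this sketch).
    W = [[j / (m - 1), j / (m - 1)] for j in range(m)]
    for t in range(epochs):
        alpha = 0.5 * (1 - t / epochs)                # decaying learning rate
        radius = int((m // 2) * (1 - t / epochs))     # shrinking neighborhood
        for x in data:                                # step 2: present an input
            # Step 3: squared distance from x to every weight vector.
            d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
            j_star = d.index(min(d))                  # step 4: winner neuron
            # Step 5: update the winner and its neighbors within the radius.
            for j in range(max(0, j_star - radius), min(m, j_star + radius + 1)):
                for i in range(len(x)):
                    W[j][i] += alpha * (x[i] - W[j][i])
    return W

def winner(W, x):
    d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
    return d.index(min(d))

# Two clusters near (0.1, 0.1) and (0.9, 0.9).
data = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.88), (0.85, 0.9)]
W = som_train(data)
```

After training, inputs from the two clusters win at different map positions, with the low cluster at the low-index end: topologically nearby neurons respond to similar inputs.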
Self Organizing Map (SOM)
Example of SOM (the traveling salesman problem)
https://www.youtube.com/watch?v=8tnxgfe6gli

THANK YOU!