
Support Vector Machines
Computer Vision and Pattern Recognition Research Group, 2009.2.12
http://cespc1.kumoh.ac.kr/~nonezero/svm ws cvpr.pdf
고재필, School of Computer Engineering, Kumoh National Institute of Technology

Contents
- Introduction
- Optimal Hyperplane
- Soft-Margin SVM
- Nonlinear SVM with Kernel Trick
- Training SVM: SMO
- Link to Statistical Learning Theory
- Multiclass SVM with Classifiers Ensemble
- SVM Demo
- References

Sample
Each object in the figure is one sample, measured by its height and weight.

Feature
Height and weight are features: weight $x_1$, height $x_2$. A sample is represented as a feature vector $x = (x_1, x_2)$.

Training Set
Training set $\{(x_i, y_i)\}_{i=1}^{N}$ of feature vectors with labels, e.g., $y_1$ = giraffe, $y_2$ = hippo.

How to Classify Them Using a Computer?
(Scatter plot of the samples in the weight-height plane.)

How to Classify Them Using a Computer?
Take the line $x_2 = \frac{3}{4}x_1$, i.e. $4x_2 = 3x_1$, i.e. $3x_1 - 4x_2 = 0$, i.e. $(3\ {-4})(x_1\ x_2)^T = 0$.
So the line is $w^T x = 0$ with $w = (3, -4)^T$.
For the sample $(2, 3)$: $3\cdot 2 - 4\cdot 3 = -6 < 0$. For the sample $(4, 2)$: $3\cdot 4 - 4\cdot 2 = 4 > 0$.
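The arithmetic above can be reproduced in a couple of lines (a minimal NumPy sketch, not part of the original deck):

```python
import numpy as np

# Normal vector of the line 3*x1 - 4*x2 = 0 from the slide.
w = np.array([3.0, -4.0])

# The sign of w^T x tells on which side of the line a sample lies.
print(w @ np.array([2.0, 3.0]))  # 3*2 - 4*3 = -6 < 0
print(w @ np.array([4.0, 2.0]))  # 3*4 - 4*2 =  4 > 0
```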

Linear Classification
Hyperplane (decision surface): $w^T x + b = 0$.
Class 1 ($y = +1$): $w^T x + b > 0$. Class 2 ($y = -1$): $w^T x + b < 0$.

Optimal Hyperplane
Many hyperplanes $w^T x + b = 0$ separate the training set. Which one is best?

Optimal Hyperplane
The optimal hyperplane $w^T x + b = 0$ separates the half-spaces $w^T x + b < 0$ and $w^T x + b > 0$ with the maximum margin.

Canonical Hyperplane
Support vectors are the samples closest to the optimal hyperplane $w^T x + b = 0$.
Canonical form (fixing the scale factor of $w$ and $b$): $w^T x + b \le -1$ for $y = -1$, and $w^T x + b \ge +1$ for $y = +1$.

Normal Vector
Take the line through $(2, 4)$: $x_2 = \frac{4}{2}x_1$, i.e. $2x_2 = 4x_1$, i.e. $4x_1 - 2x_2 = 0$, i.e. $(4\ {-2})(x_1\ x_2)^T = 0$, so $w^T x = 0$ with $w = (4, -2)^T$.
$w$ is normal to the line: $(2\ 4)(4\ {-2})^T = 8 - 8 = 0$.
Inner product: $x \cdot y = \langle x, y \rangle = x^T y = x_1 y_1 + x_2 y_2 = \|x\|\|y\|\cos\theta$.

Distance from a Sample to the Optimal Hyperplane
Write $x = x_p + r\, w/\|w\|$, where $x_p$ is the projection of $x$ onto the hyperplane $g(x) = w^T x + b = 0$ and $r$ is the signed distance. Then
$g(x) = w^T(x_p + r\, w/\|w\|) + b = w^T x_p + b + r\, w^T w/\|w\| = r\|w\|$,
so $r = g(x)/\|w\|$.

Margin of Separation
Support vectors $x^{(s)}$ satisfy $w^T x^{(s)} + b = -1$ for $y = -1$ and $w^T x^{(s)} + b = +1$ for $y = +1$.
Their distance is $r = g(x^{(s)})/\|w\|$ with $g(x^{(s)}) = \pm 1$ for $y^{(s)} = \pm 1$, so the margin is $2r = 2/\|w\|$.
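Both formulas translate directly to code (a minimal NumPy sketch; the function names are mine):

```python
import numpy as np

def distance_to_hyperplane(x, w, b):
    """Signed distance r = g(x) / ||w|| of a sample x from w^T x + b = 0."""
    return (w @ x + b) / np.linalg.norm(w)

def margin(w):
    """Margin of separation 2r = 2 / ||w|| of a canonical hyperplane."""
    return 2.0 / np.linalg.norm(w)

w, b = np.array([3.0, -4.0]), 0.0
print(distance_to_hyperplane(np.array([4.0, 2.0]), w, b))  # 4/5 = 0.8
print(margin(w))                                           # 2/5 = 0.4
```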

Finding the Optimal Hyperplane
Maximizing the margin $2/\|w\|$ is equivalent to minimizing $\frac{1}{2}\|w\|^2$ subject to $w^T x_i + b \le -1$ for $y_i = -1$ and $w^T x_i + b \ge +1$ for $y_i = +1$, i.e. $y_i(w^T x_i + b) \ge 1$.
This is a constrained optimization problem with a convex objective/cost function and linear constraints.

Convex Quadratic Problem
Minimize the primal problem, or equivalently maximize the dual problem; the two meet at a saddle point of the Lagrangian.

Lagrange Multipliers Method
Cost function $f(x)$; constraint function $g(x) = 0$.
At the constrained optimum the gradients align: $\nabla f(x) = \lambda\, \nabla g(x)$ (the constraint's tangent is normal to both gradients; this holds for convex/concave $f$).
Lagrange function: $L(x, \lambda) = f(x) + \lambda\, g(x)$, with Lagrange multiplier $\lambda$; solve $\nabla_x L = 0$ together with $g(x) = 0$.
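For intuition, the stationarity condition can be checked numerically on a toy problem (a SciPy sketch; the example, minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$, is mine):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2   # cost function f(x)
g = lambda x: x[0] + x[1] - 1.0   # constraint function g(x) = 0

res = minimize(f, x0=np.zeros(2), constraints=[{"type": "eq", "fun": g}])
# At the optimum (0.5, 0.5): grad f = (1, 1) = lambda * grad g with lambda = 1.
print(res.x)  # approximately [0.5 0.5]
```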

A Brief Overview of Optimization Theory
(Several figure slides adapted from a 2001 Fall CVPR tutorial, covering Lagrangian duality and the Karush-Kuhn-Tucker conditions.)

Lagrange Function for the Optimal Hyperplane (Primal Problem)
$L_P(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_i \alpha_i [y_i(w^T x_i + b) - 1]$, with $\alpha_i \ge 0$.
Solution: $\partial L_P/\partial w = 0 \Rightarrow w = \sum_i \alpha_i y_i x_i$; $\partial L_P/\partial b = 0 \Rightarrow \sum_i \alpha_i y_i = 0$.
KKT condition: $\alpha_i [y_i(w^T x_i + b) - 1] = 0$ for all $i$.

Remark on Support Vectors
KKT condition: $\alpha_i [y_i(w^T x_i + b) - 1] = 0$.
If $\alpha_i > 0$ then $y_i(w^T x_i + b) = 1$ and $x_i$ is a support vector; if $y_i(w^T x_i + b) > 1$ then $\alpha_i = 0$ and $x_i$ is not a support vector.
Hence $w = \sum_i \alpha_i y_i x_i$ is related only to the support vectors $x_i$.

Lagrange Function for the Optimal Hyperplane (Dual Problem)
$L_D(\alpha) = \sum_i \alpha_i - \frac{1}{2}\sum_i\sum_j \alpha_i \alpha_j y_i y_j\, x_i^T x_j$, subject to $\alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$.
Optimizing $L_D$ depends on the input patterns only through the set of dot products $x_i^T x_j$:
(+) it does not depend on the dimension of the input pattern;
(+) the dot product can be replaced with a kernel.
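For a small problem, this dual can be handed to an off-the-shelf QP solver; a minimal CVXOPT sketch for the hard-margin, linearly separable case (function and variable names are mine; y is assumed to be a float array of labels ±1):

```python
import numpy as np
from cvxopt import matrix, solvers

def hard_margin_svm_dual(X, y):
    """Solve: min 0.5 a^T Q a - 1^T a  s.t.  a >= 0, y^T a = 0,
    where Q_ij = y_i y_j x_i^T x_j (the negated dual above)."""
    n = len(y)
    Yx = y[:, None] * X                               # rows y_i * x_i
    P, q = matrix(Yx @ Yx.T), matrix(-np.ones(n))
    G, h = matrix(-np.eye(n)), matrix(np.zeros(n))    # -a_i <= 0
    A, b = matrix(y.reshape(1, -1)), matrix(0.0)      # y^T a = 0
    a = np.ravel(solvers.qp(P, q, G, h, A, b)["x"])
    w = (a * y) @ X                                   # w = sum_i a_i y_i x_i
    sv = a > 1e-6                                     # support vectors: a_i > 0
    bias = float(np.mean(y[sv] - X[sv] @ w))          # b from any support vector
    return w, bias, a
```

Usage would look like `w, b, alpha = hard_margin_svm_dual(X, y.astype(float))`; swapping `Yx @ Yx.T` for a kernel matrix gives the kernelized dual discussed later.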

Large-Scale Quadratic Problem
The dual QP involves an $n \times n$ matrix of dot products; for large training sets ($n > 1000$) generic QP solvers become impractical.

Support Vector Machine Classifier for the Linearly Separable Case
Optimal weight: $w^* = \sum_i \alpha_i y_i x_i$. Optimal bias: $b^* = y^{(s)} - w^{*T} x^{(s)}$ for any support vector $x^{(s)}$.
Classifier: $f(x) = \mathrm{sgn}(w^{*T} x + b^*)$.

Optimal Hyperplane for the Non-Separable Case
When it is impossible to find a separating hyperplane (some samples cannot be correctly classified), give those samples up as errors while minimizing the probability of classification error averaged over the training set.

Soft Margin Technique
Problem: $y_i(w^T x_i + b) \ge 1$ cannot be satisfied for all $i$.
Adopting slack variables $\xi_i \ge 0$: $y_i(w^T x_i + b) \ge 1 - \xi_i$.
Minimizing errors: ideally minimize the number of errors; to keep the problem a QP, replace the error count with $\min \frac{1}{2}\|w\|^2 + C\sum_i \xi_i$.

Lagrange Function for the Soft Margin Technique
The term $C\sum_i \xi_i$ penalizes errors; the term $\frac{1}{2}\|w\|^2$ penalizes complexity; $C$ trades the two off.

Dual Problem for the Soft Margin Technique
The dual is unchanged except for the box constraint $0 \le \alpha_i \le C$: non-bound patterns have $0 < \alpha_i < C$, bound patterns have $\alpha_i = C$.
The optimal value of $C$ is determined experimentally; it cannot be readily related to the characteristics of the dataset or model.
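To illustrate bound vs. non-bound patterns and the experimental choice of $C$, here is a scikit-learn sketch (the toy blob data and tolerance are mine; `dual_coef_` holds $y_i \alpha_i$ for the support vectors):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    alpha = np.abs(clf.dual_coef_[0])      # |y_i * alpha_i| = alpha_i
    bound = np.sum(np.isclose(alpha, C))   # bound patterns: alpha_i = C
    print(f"C={C}: support vectors per class {clf.n_support_}, bound {bound}")
```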

Nonlinear Support Vector Machines
What if the problem is not linear?

Feature Space
Data that are linearly non-separable in a low-dimensional input space (1D) can become linearly separable in a higher-dimensional feature space (2D).
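A tiny numeric illustration of this, using the hypothetical map $x \mapsto (x, x^2)$:

```python
import numpy as np

x = np.array([-2.0, -1.0, 1.0, 2.0])   # 1D inputs
y = np.array([ 1,   -1,  -1,   1 ])    # outer vs. inner points: not separable in 1D

phi = np.stack([x, x**2], axis=1)      # map to 2D feature space (x, x^2)
# In feature space the classes are split by the horizontal line x2 = 2.5:
print((phi[:, 1] > 2.5) == (y > 0))    # [ True  True  True  True]
```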

Dual Problem and Classifier in the Feature Space
Replacing $x$ by an explicit mapping $\varphi(x)$ is not feasible: the feature space can be huge or infinite!
But $\varphi$ appears only in the form of inner products $\varphi(x_i)^T \varphi(x_j)$.

Kernel
Kernel: a function $k$ that takes 2 variables and computes a scalar value (a kind of similarity).
Kernel matrix: the $m \times m$ matrix $K$ with elements $K_{ij} = k(x_i, x_j)$.
Standard kernels: the polynomial kernel $k(x, y) = (x^T y + 1)^d$, the radial basis function kernel $k(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2)$, and the sigmoid kernel $k(x, y) = \tanh(\kappa\, x^T y + \theta)$.
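The standard kernels are one-liners in NumPy (a sketch; the parameter names $d$, $c$, $\sigma$, $\kappa$, $\theta$ follow the formulas above):

```python
import numpy as np

def polynomial_kernel(x, y, d=2, c=1.0):
    return (x @ y + c) ** d                     # k(x, y) = (x^T y + c)^d

def rbf_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=1.0, theta=0.0):
    return np.tanh(kappa * (x @ y) + theta)     # not PSD for all parameters

def kernel_matrix(k, X):
    """m x m matrix K with K_ij = k(x_i, x_j)."""
    return np.array([[k(xi, xj) for xj in X] for xi in X])
```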

Mercer's Condition
Valid kernel functions should satisfy Mercer's condition: for any $g(x)$ for which $\int g(x)^2\, dx < \infty$, it must be the case that $\int\int k(x, y)\, g(x)\, g(y)\, dx\, dy \ge 0$.
A criterion is that the kernel should be positive semi-definite.
Theorem: if a kernel is positive semi-definite, i.e. $\sum_i \sum_j k(x_i, x_j)\, c_i c_j \ge 0$ where the $c_i$ are real numbers, then there exists a function $\varphi$ defining an inner product of possibly higher dimension, i.e. $k(x, y) = \varphi(x)^T \varphi(y)$.
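Positive semi-definiteness of a kernel matrix is easy to test numerically via its eigenvalues (a self-contained sketch on random data; the tolerance is mine):

```python
import numpy as np

def is_psd(K, tol=1e-10):
    """Mercer check: a valid kernel matrix has no negative eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))

X = np.random.randn(20, 3)
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise ||x_i - x_j||^2
print(is_psd(np.exp(-sq / 2.0)))       # RBF kernel matrix: always PSD
print(is_psd(np.tanh(X @ X.T - 5.0)))  # sigmoid kernel: not guaranteed PSD
```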

Dual Problem and Classifier with Kernel (Generalized Inner-Product SVM)
$L_D(\alpha) = \sum_i \alpha_i - \frac{1}{2}\sum_i\sum_j \alpha_i \alpha_j y_i y_j\, k(x_i, x_j)$; classifier $f(x) = \mathrm{sgn}\big(\sum_i \alpha_i y_i\, k(x_i, x) + b\big)$.
The kernel trick works without ever computing the mapping $\varphi$.

Example: XOR Problem (Small-Scale QP Problem)
Training set: $\{(-1, -1; -1),\ (-1, +1; +1),\ (+1, -1; +1),\ (+1, +1; -1)\}$, samples $x_1, x_2, x_3, x_4$.
Let the kernel be $k(x, x_i) = (1 + x^T x_i)^2$, with $x = (x_1, x_2)^T$ and $x_i = (x_{i1}, x_{i2})^T$. Then
$k(x, x_i) = (1 + x_1 x_{i1} + x_2 x_{i2})^2 = 1 + x_1^2 x_{i1}^2 + 2 x_1 x_2 x_{i1} x_{i2} + x_2^2 x_{i2}^2 + 2 x_1 x_{i1} + 2 x_2 x_{i2}$.
A mapping: $\varphi(x) = (1,\ x_1^2,\ x_2^2,\ \sqrt{2}\,x_1,\ \sqrt{2}\,x_2,\ \sqrt{2}\,x_1 x_2)^T$.
Kernel matrix: the $4 \times 4$ matrix $K$ with $K_{ii} = 9$ and $K_{ij} = 1$ for $i \ne j$.

Example: XOR Problem
Dual problem:
$L_D(\alpha) = \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 - \frac{1}{2}(9\alpha_1^2 - 2\alpha_1\alpha_2 - 2\alpha_1\alpha_3 + 2\alpha_1\alpha_4 + 9\alpha_2^2 + 2\alpha_2\alpha_3 - 2\alpha_2\alpha_4 + 9\alpha_3^2 - 2\alpha_3\alpha_4 + 9\alpha_4^2)$.
Optimizing $L_D$:
$9\alpha_1 - \alpha_2 - \alpha_3 + \alpha_4 = 1$, $\quad -\alpha_1 + 9\alpha_2 + \alpha_3 - \alpha_4 = 1$,
$-\alpha_1 + \alpha_2 + 9\alpha_3 - \alpha_4 = 1$, $\quad \alpha_1 - \alpha_2 - \alpha_3 + 9\alpha_4 = 1$.
Optimal Lagrange multipliers: $\alpha_1 = \alpha_2 = \alpha_3 = \alpha_4 = 1/8 > 0$, so all the samples are support vectors.
Optimal $w$: $w = \sum_i \alpha_i y_i \varphi(x_i) = \frac{1}{8}(-1)\varphi(x_1) + \frac{1}{8}(+1)\varphi(x_2) + \frac{1}{8}(+1)\varphi(x_3) + \frac{1}{8}(-1)\varphi(x_4) = (0\ 0\ 0\ 0\ 0\ {-1/\sqrt{2}})^T$.
Optimal hyperplane: $f(x) = \mathrm{sgn}(w^T \varphi(x)) = \mathrm{sgn}(-x_1 x_2)$.
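The numbers in this example can be verified mechanically (a NumPy sketch following the slide's setup):

```python
import numpy as np

X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

K = (1.0 + X @ X.T) ** 2                 # kernel matrix: 9 on the diagonal, 1 elsewhere
Q = (y[:, None] * y[None, :]) * K        # Q_ij = y_i y_j K_ij
alpha = np.linalg.solve(Q, np.ones(4))   # stationarity of L_D: Q alpha = 1
print(alpha)                             # [0.125 0.125 0.125 0.125] -> all SVs

def phi(x):                              # explicit mapping for k(x, z) = (1 + x^T z)^2
    s = np.sqrt(2.0)
    return np.array([1, x[0]**2, x[1]**2, s*x[0], s*x[1], s*x[0]*x[1]])

w = sum(a * yi * phi(xi) for a, yi, xi in zip(alpha, y, X))
print(w)                                 # [0 0 0 0 0 -1/sqrt(2)] -> f(x) = sgn(-x1 x2)
```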

Sequential Minimal Optimization (Platt 98)
In the case of a large-scale QP problem, divide the large QP into a series of smaller QP sub-problems and optimize them sequentially.
What is the smallest working set? Update just 2 Lagrange multipliers at a time (the equality constraint $\sum_i \alpha_i y_i = 0$ rules out updating only one).
Updating a subset of variables while the others are fixed will still converge globally.

Sequential Minimal Optimization
Write the dual problem as a function of just 2 alphas. The resulting update rules need no numerical QP solver, only memory for buffering the errors $E_i$.
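The two-multiplier analytic step at the heart of SMO can be sketched as follows (my reading of Platt's update rules; `alpha`, the errors $E_i = f(x_i) - y_i$, and the kernel matrix `K` are assumed precomputed):

```python
import numpy as np

def smo_pair_update(alpha, y, K, E, i, j, C):
    """One SMO step: optimize alpha_i, alpha_j analytically, all others fixed."""
    if i == j:
        return False
    # Clipping interval [L, H] keeps 0 <= alpha <= C and sum_k alpha_k y_k = 0.
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the constraint line
    if L >= H or eta <= 0:
        return False
    a_j = np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H)  # unconstrained optimum, clipped
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)             # preserve the equality constraint
    alpha[i], alpha[j] = a_i, a_j
    return True
```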

Link to Statistical Learning Theory (Vapnik 95)
Learning $f$ from a training set means minimizing the expected risk $R(f) = \int L(y, f(x))\, dP(x, y)$, which is unknown because the distribution $P(x, y)$ is unknown.
Minimize instead the average (empirical) risk over the training set: $R_{emp}(f) = \frac{1}{N}\sum_{i=1}^{N} L(y_i, f(x_i))$.

A Risk Bound
With probability $1 - \eta$: $R(f) \le R_{emp}(f) + \sqrt{\frac{h(\ln(2N/h) + 1) - \ln(\eta/4)}{N}}$.
Structural risk: the second term grows with the complexity (capacity) $h$ of the function set.

Vapnik-Chervonenkis (VC) Dimension
The maximum number of points that can be labeled in all possible ways (shattered); lines (dichotomies) can shatter 3 points in 2D.
The VC dimension of linear classifiers in $N$ dimensions is $h = N + 1$.
A measure of the complexity of a function set: minimizing the VC dimension means minimizing complexity.

VC Dimension of Margin Hyperplanes
The VC dimension satisfies $h \le \min(\lceil R^2/\rho^2 \rceil, N) + 1$, where $R$ is the radius of the smallest sphere containing the set of points and $\rho$ is the margin.
Maximizing the margin therefore minimizes the VC dimension.

Results for Gaussian Kernel
(Two slides of result figures for an SVM with a Gaussian kernel.)

Summary
Optimal separating hyperplane (margin); global minimum solution (convexity); only SVs are relevant (sparseness).
SVs are selected automatically; the number of SVs can be considered the number of hidden units of an MLP.
Open issues: model selection (kernel selection), training speed and method for a large training set, and the fact that the basic SVM is a binary classifier.

Multiclass Support Vector Machines (Multiple-Classifier Systems)
Ensemble of binary support vector machines, categorized by coding/decoding scheme: OPC (one-per-class), PWC (pairwise coupling), ECOC (error-correcting output coding).
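For reference, OPC and PWC ensembles are ready-made wrappers in scikit-learn (a sketch; the iris data is just a stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
opc = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)  # one-per-class: k binary SVMs
pwc = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)   # pairwise coupling: k(k-1)/2 SVMs
print(opc.score(X, y), pwc.score(X, y))
```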

Tree-Based Methods

Open Software

Demo
http://www.eee.metu.edu.tr/~alatan/courses/demo/appletsvm.html
http://www.csie.ntu.edu.tw/~cjlin/libsvm/#gui

Applications
- Biometrics
- Object Detection and Recognition
- Character Recognition
- Information and Image Retrieval
- Other Applications

References
S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1999, Chap. 6.
C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, Vol. 2, pp. 121-167, 1998.
J. C. Platt, "Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines," Microsoft Research, Technical Report MSR-TR-98-14, 1998.
C. W. Hsu and C. J. Lin, "A Comparison of Methods for Multi-Class Support Vector Machines," IEEE Transactions on Neural Networks, Vol. 13, pp. 415-425, 2002.

Books
V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
Links
http://videolectures.net/
http://www.kernel-machines.org/

Thank you.