SVM (Support Vector Machines) - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: SVM (Support Vector Machines)


1
SVM (Support Vector Machines)
  • Based on statistical learning theory
  • The kernel is chosen before the learning process

2
Recent applications of SVM
  • Pattern recognition
  • Isolated handwritten digit recognition
  • Object recognition
  • Speaker identification
  • Regression estimation

3
Idea of SVM in Pattern Classification
  • The support vector algorithm simply looks for the
    separating hyperplane with the largest margin.
  • d+ (d-) is the shortest distance from the
    separating hyperplane to the closest positive
    (negative) point.
  • The margin is d+ + d- (formalised in the sketch below).
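A minimal formalisation of this geometry (it uses the hyperplane normal w and offset b introduced on slide 6; the min-over-samples notation is an assumption, not on the original slide):

\[
d_{+} = \min_{i:\,y_i=+1} \frac{|w \cdot x_i + b|}{\|w\|},
\qquad
d_{-} = \min_{i:\,y_i=-1} \frac{|w \cdot x_i + b|}{\|w\|},
\qquad
\text{margin} = d_{+} + d_{-}.
\]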

4
  • For these machines, the support vectors are the
    critical elements of the training set: if all other
    training points were removed and training were
    repeated, the same separating hyperplane would be
    found (see the sketch below).
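A minimal sketch of this claim, assuming scikit-learn is available (the toy data and C value are illustrative assumptions): refitting a linear SVM on only its support vectors recovers the same hyperplane.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy linearly separable data: two well-separated blobs.
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([+1] * 20 + [-1] * 20)

# Fit on the full training set (large C approximates a hard margin).
clf_full = SVC(kernel="linear", C=1e6).fit(X, y)

# Refit using only the support vectors of the first model.
sv = clf_full.support_
clf_sv = SVC(kernel="linear", C=1e6).fit(X[sv], y[sv])

print("w (full set)        :", clf_full.coef_[0], " b:", clf_full.intercept_[0])
print("w (support vectors) :", clf_sv.coef_[0], " b:", clf_sv.intercept_[0])
# The two hyperplanes coincide up to numerical tolerance.
```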

5
A general two-class pattern classification
problem
  • Training sample points (x1, y1), (x2, y2), ..., (xl, yl)
  • x is the feature vector of a sample point
  • y is the class label
  • For example, in two-class pattern classification, y ∈ {+1, -1}
  • Find a classifier with decision function f(x) such that
    y = f(x) on the training samples (formalised below)
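Written out as a formula (a sketch; the input dimension n and sample count l are generic symbols, not fixed by the slides):

\[
\{(x_i, y_i)\}_{i=1}^{l}, \quad x_i \in \mathbb{R}^n, \quad y_i \in \{+1, -1\},
\qquad
\text{find } f:\mathbb{R}^n \to \{+1, -1\} \text{ with } f(x_i) = y_i.
\]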

6
Linear Support Vector Machine
  • w is the normal vector of the hyperplane
  • |b| / ||w|| is the perpendicular distance from
    the hyperplane to the origin

The decision function comes from the separating hyperplane
f(x) = w · x + b = 0. Notice that there is an ambiguity in the
magnitude of w and b: they can be arbitrarily rescaled so that the
closest points satisfy H1: w · x + b = +1 and H2: w · x + b = -1.
Then d+ = d- = 1 / ||w||, so the margin is d+ + d- = 2 / ||w||.
Maximising the margin therefore means minimising ||w|| subject to
these constraints, and this optimization problem is solved using the
Lagrangian formulation (see the sketch below).
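A minimal sketch of the quantities on this slide, assuming scikit-learn (the toy points and C value are illustrative assumptions): w, b, the distance of the hyperplane to the origin, and the margin 2 / ||w||.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([+1, +1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
w = clf.coef_[0]        # normal vector of the separating hyperplane
b = clf.intercept_[0]   # offset in f(x) = w.x + b

print("distance to origin |b|/||w|| :", abs(b) / np.linalg.norm(w))
print("margin 2/||w||               :", 2.0 / np.linalg.norm(w))
# Support vectors lie on H1 and H2, i.e. w.x + b = +1 or -1.
print("w.x + b on support vectors   :", X[clf.support_] @ w + b)
```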
7
Decision function: choice of kernel
  • Simple dot product (linear kernel)
  • Polynomial kernel
  • Radial basis function kernel (all three are sketched below)
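A minimal sketch of the three kernels listed above, written as plain functions (the degree, coef0 and gamma values are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def linear_kernel(x, z):
    """Simple dot product."""
    return np.dot(x, z)

def polynomial_kernel(x, z, degree=3, coef0=1.0):
    """Polynomial kernel: (x.z + coef0)^degree."""
    return (np.dot(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.dot(x - z, x - z))

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), rbf_kernel(x, z))
```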

8
Demo
http://svm.dcs.rhbnc.ac.uk/pagesnew/GPat.shtml
9
Don't understand!
  • According to structural risk minimisation, the
    classifier with the largest margin gives a lower
    expected risk, i.e. better generalisation (see the
    bound sketched below)
  • VC dimension
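The link between margin, VC dimension and expected risk is the standard risk bound of statistical learning theory (a sketch, not taken from the slides: h is the VC dimension of the classifier family, l the number of training samples, R_emp the empirical risk; the bound holds with probability 1 - η):

\[
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) +
\sqrt{\frac{h\,(\log(2l/h) + 1) - \log(\eta/4)}{l}}
\]

A larger margin restricts the set of admissible separating hyperplanes and hence the effective VC dimension h, which makes the second (confidence) term smaller.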