
Introduction to SVM

Margins and Support Vectors


Support Vector Machines
Separating Hyperplane (HP)
Maximum Margin
Support Vectors
Some concepts of HP geometry
HP: weights and bias
• The weight vector w is orthogonal to the hyperplane, since it is orthogonal to any arbitrary vector (a1 − a2) joining two points a1, a2 on the hyperplane.
• The bias b fixes the offset of the hyperplane in the d-dimensional space.
• NOTE: in SVM the bias is kept as a separate term; it does NOT get a corresponding constant feature x0 = 1 appended to the input (as we saw for the perceptron, ANN and LR).
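The orthogonality claim above can be checked numerically. This is a minimal sketch; the concrete hyperplane h(x) = wᵀx + b with w = (5, 2), b = −20 is an assumed example, not taken from the slides.

```python
import numpy as np

# Assumed 2-D hyperplane h(x) = w^T x + b = 0 for illustration.
w = np.array([5.0, 2.0])
b = -20.0

# Two arbitrary points a1, a2 lying on the hyperplane (w^T a + b = 0).
a1 = np.array([4.0, 0.0])
a2 = np.array([2.0, 5.0])
assert np.isclose(w @ a1 + b, 0.0) and np.isclose(w @ a2 + b, 0.0)

# The difference vector a1 - a2 lies in the hyperplane, and w is
# orthogonal to it: w^T (a1 - a2) = 0.
print(w @ (a1 - a2))  # 0.0
```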
Further, let xp be the projection of x on the hyperplane, and let r be the offset of x along the weight vector w.
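The projection and offset can be computed directly: r = (wᵀx + b)/‖w‖ is the signed distance of x from the hyperplane, and xp = x − r·w/‖w‖ is the foot of the perpendicular. The numbers below (w, b, x) are assumed for illustration.

```python
import numpy as np

# Assumed hyperplane parameters and query point.
w = np.array([5.0, 2.0])
b = -20.0
x = np.array([6.0, 4.0])

norm_w = np.linalg.norm(w)
r = (w @ x + b) / norm_w       # signed offset of x along w
x_p = x - r * (w / norm_w)     # projection of x onto the hyperplane

print(r)            # distance of x from the hyperplane (positive side)
print(w @ x_p + b)  # ~0: x_p lies exactly on the hyperplane
```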
What is the distance of the origin from the HP?
Given any two points on the hyperplane, say p = (p1, p2) = (4, 0) and q = (q1, q2) = (2, 5), the difference vector p − q lies on the hyperplane, so w must be orthogonal to it.
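Setting x = 0 in r = (wᵀx + b)/‖w‖ gives the distance of the origin from the HP as |b|/‖w‖. A sketch using the two given points: the line through p = (4, 0) and q = (2, 5) works out to 5x1 + 2x2 − 20 = 0, so w = (5, 2) and b = −20 (this (w, b) is reconstructed from the points, not stated on the slide).

```python
import numpy as np

# Hyperplane reconstructed from the two given points p and q.
w = np.array([5.0, 2.0])
b = -20.0
p = np.array([4.0, 0.0])
q = np.array([2.0, 5.0])

# Both points satisfy the hyperplane equation, and p - q is orthogonal to w.
assert np.isclose(w @ p + b, 0.0) and np.isclose(w @ q + b, 0.0)
assert np.isclose(w @ (p - q), 0.0)

# Distance of the origin from the hyperplane: |b| / ||w||.
dist_origin = abs(b) / np.linalg.norm(w)
print(dist_origin)  # 20 / sqrt(29)
```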
Given a training dataset, the margin of a classifier is defined as the minimum distance of any training point from the separating HP.
• All the points (or vectors) that achieve this minimum distance are called support vectors of the linear classifier.
• A support vector is a point that lies precisely on the margin of the classifier.
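The margin and support vectors of a fixed hyperplane can be found by computing each point's distance |wᵀx + b|/‖w‖ and taking the minimum. The toy dataset and the hyperplane (w, b) below are assumptions for illustration, not a trained SVM.

```python
import numpy as np

# Assumed separating hyperplane and a toy 2-D dataset.
w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[1.0, 1.0],   # class -1
              [0.0, 1.0],   # class -1
              [3.0, 1.0],   # class +1
              [4.0, 4.0]])  # class +1

# Distance of each point from the hyperplane.
dists = np.abs(X @ w + b) / np.linalg.norm(w)

margin = dists.min()                           # minimum distance = margin
support_vectors = X[np.isclose(dists, margin)]  # points on the margin

print(margin)
print(support_vectors)  # the points achieving the minimum distance
```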
The scalar s
Geometrical and Functional Margin
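The role of the scalar s can be sketched as follows: rescaling (w, b) to (s·w, s·b) multiplies the functional margin y(wᵀx + b) by s, but leaves the geometric margin y(wᵀx + b)/‖w‖ unchanged. The numbers below are assumed for illustration.

```python
import numpy as np

# Assumed hyperplane, point, and label.
w = np.array([5.0, 2.0])
b = -20.0
x = np.array([6.0, 4.0])
y = 1.0

def functional_margin(w, b, x, y):
    return y * (w @ x + b)

def geometric_margin(w, b, x, y):
    return functional_margin(w, b, x, y) / np.linalg.norm(w)

s = 10.0  # rescale the hyperplane parameters by the scalar s
print(functional_margin(s * w, s * b, x, y))  # s times larger
print(np.isclose(geometric_margin(w, b, x, y),
                 geometric_margin(s * w, s * b, x, y)))  # True: invariant
```

This scale invariance is what lets the SVM formulation fix the functional margin to 1 at the support vectors and then maximize 1/‖w‖.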
Maximum Margin Hyperplane
The summary
