Lesson 6: Perceptron Learning Algorithm

Introduction

Separating Hyperplanes

Construct linear decision boundaries that explicitly try to separate the data into different classes as well as possible.

What counts as good separation is defined mathematically in a precise form.

Even when the training data can be perfectly separated by hyperplanes, LDA or other linear methods developed under a statistical framework may not achieve perfect separation.

Review of Vector Algebra

A hyperplane or affine set L is defined by the linear equation:

$$f(x) = \beta_0 + \beta^T x = 0$$

For any two points $x_1$ and $x_2$ lying in $L$, $\beta^T(x_1 - x_2) = 0$, and hence $\beta^* = \beta / \|\beta\|$ is the vector normal to the surface of $L$.

For any point $x_0$ in $L$, $\beta^T x_0 = -\beta_0$.

The signed distance of any point x to L is given by:

$$\beta^{*T}(x - x_0) = \frac{1}{\|\beta\|}\left(\beta^T x + \beta_0\right) = \frac{1}{\|f'(x)\|}\, f(x)$$

Hence $f(x)$ is proportional to the signed distance from $x$ to the hyperplane defined by $f(x) = 0$.
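To make the geometry concrete, here is a minimal NumPy sketch, using made-up values for $\beta$, $\beta_0$, and the query points, that evaluates the signed distance $\frac{1}{\|\beta\|}(\beta^T x + \beta_0)$:

```python
import numpy as np

# Hypothetical hyperplane f(x) = beta^T x + beta0 = 0 in R^2 (illustrative values)
beta = np.array([2.0, -1.0])
beta0 = 0.5

def signed_distance(X, beta, beta0):
    """Signed distance of each row of X to the hyperplane beta^T x + beta0 = 0."""
    return (X @ beta + beta0) / np.linalg.norm(beta)

X = np.array([[1.0, 1.0],     # f(x) =  1.5  -> positive side of the hyperplane
              [0.0, 2.0],     # f(x) = -1.5  -> negative side
              [-0.25, 0.0]])  # f(x) =  0.0  -> on the hyperplane
print(signed_distance(X, beta, beta0))   # approx [ 0.671, -0.671, 0. ]
```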

Rosenblatt’s Perceptron Learning

Goal: find a separating hyperplane by minimizing the distance of misclassified points to the decision boundary.

Code the two classes by $y_i \in \{1, -1\}$.

If $y_i = 1$ is misclassified, then $\beta^T x_i + \beta_0 < 0$; if $y_i = -1$ is misclassified, then $\beta^T x_i + \beta_0 > 0$.

Since the signed distance from $x_i$ to the decision boundary is $\frac{1}{\|\beta\|}(\beta^T x_i + \beta_0)$, the distance from a misclassified $x_i$ to the decision boundary is $-\frac{1}{\|\beta\|}\, y_i(\beta^T x_i + \beta_0)$.

Denote the set of misclassified points by $\mathcal{M}$.

The goal is to minimize:

$$D(\beta, \beta_0) = -\sum_{i \in \mathcal{M}} y_i\,(\beta^T x_i + \beta_0)$$
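As a small worked illustration (with arbitrary toy values for the data and for $\beta$, $\beta_0$), the criterion is just the sum of $-y_i(\beta^T x_i + \beta_0)$ over the currently misclassified points:

```python
import numpy as np

def perceptron_criterion(X, y, beta, beta0):
    """D(beta, beta0) = -sum over misclassified i of y_i * (beta^T x_i + beta0)."""
    margins = y * (X @ beta + beta0)      # y_i * f(x_i) for every point
    misclassified = margins <= 0          # the set M (boundary points counted as errors)
    return -np.sum(margins[misclassified]), np.where(misclassified)[0]

# Toy data: the last point (labelled -1) lies on the positive side of this boundary
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.0], [1.0, 0.5]])
y = np.array([1, 1, -1, -1])
D, M = perceptron_criterion(X, y, beta=np.array([1.0, 1.0]), beta0=0.0)
print(D, M)   # D = 1.5, M = [3]
```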

Stochastic Gradient Descent

To minimize $D(\beta, \beta_0)$, compute the gradient (assuming $\mathcal{M}$ is fixed):

$$\frac{\partial D(\beta, \beta_0)}{\partial \beta} = -\sum_{i \in \mathcal{M}} y_i x_i, \qquad \frac{\partial D(\beta, \beta_0)}{\partial \beta_0} = -\sum_{i \in \mathcal{M}} y_i$$

Stochastic gradient descent is used to minimize the piecewise linear criterion.

The parameters $\beta$ and $\beta_0$ are adjusted each time a misclassified point is visited.

The update is:

$$\begin{pmatrix} \beta \\ \beta_0 \end{pmatrix} \leftarrow \begin{pmatrix} \beta \\ \beta_0 \end{pmatrix} + \rho \begin{pmatrix} y_i x_i \\ y_i \end{pmatrix}$$

Here $\rho$ is the learning rate, which in this case can be taken to be 1 without loss of generality. (Note: if $\beta^T x + \beta_0 = 0$ is the decision boundary, $\lambda\beta^T x + \lambda\beta_0 = 0$ is also the boundary.)
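Putting the criterion and the update rule together, the following is a minimal NumPy sketch of the perceptron updates; the toy data, the zero initialization, $\rho = 1$, and the epoch cap are illustrative choices rather than part of the algorithm itself:

```python
import numpy as np

def perceptron_train(X, y, rho=1.0, max_epochs=100, seed=0):
    """Stochastic-gradient perceptron: update (beta, beta0) at each misclassified point."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, beta0 = np.zeros(p), 0.0
    for _ in range(max_epochs):
        updated = False
        for i in rng.permutation(n):                 # visit the points in random order
            if y[i] * (X[i] @ beta + beta0) <= 0:    # x_i is misclassified
                beta = beta + rho * y[i] * X[i]      # beta   <- beta   + rho * y_i * x_i
                beta0 = beta0 + rho * y[i]           # beta_0 <- beta_0 + rho * y_i
                updated = True
        if not updated:                              # a full pass with no errors: done
            break
    return beta, beta0

# Toy linearly separable data (illustrative)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2)),
               rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)
beta, beta0 = perceptron_train(X, y)
print("training errors:", np.sum(y * (X @ beta + beta0) <= 0))   # expect 0
```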

Issues

If the classes are linearly separable, the algorithm converges to a separating hyperplane in a finite number of steps.

A number of problems with the algorithm:

– When the data are separable, there are many solutions, and which one is found depends on the starting values.

– The number of steps can be very large: the smaller the gap between the classes, the longer it takes to find a separating hyperplane.

– When the data are not separable, the algorithm will not converge, and cycles develop. The cycles can be long and therefore hard to detect.

Optimal Separating Hyperplanes

Suppose the two classes can be linearly separated.

The optimal separating hyperplane separates the two classes and maximizes the distance to the closest point from either class.

There is a unique solution.

The optimal separating hyperplane tends to give better classification performance on test data.

The optimization problem:

$$\max_{\beta,\, \beta_0,\, \|\beta\| = 1} \; C$$

subject to $y_i(\beta^T x_i + \beta_0) \geq C, \; i = 1, \dots, N$.

Every point is at least a distance $C$ from the decision boundary $\beta^T x + \beta_0 = 0$.

We can get rid of the constraint $\|\beta\| = 1$ by replacing the conditions with $\frac{1}{\|\beta\|}\, y_i(\beta^T x_i + \beta_0) \geq C$. If $(\beta, \beta_0)$ satisfies these inequalities, so does any positively scaled multiple, so we can set $\|\beta\| = 1 / C$. The optimization problem is then equivalent to:

$$\min_{\beta,\, \beta_0} \; \frac{1}{2}\|\beta\|^2$$

subject to $y_i(\beta^T x_i + \beta_0) \geq 1, \; i = 1, \dots, N$.

This is a convex optimization problem.
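To illustrate, the problem can be handed to a general-purpose solver. The sketch below uses `scipy.optimize.minimize` with the SLSQP method on made-up data, as a generic stand-in for the quadratic-programming routines normally used, minimizing $\frac{1}{2}\|\beta\|^2$ under the constraints $y_i(\beta^T x_i + \beta_0) \geq 1$:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_hyperplane_primal(X, y):
    """Solve min 0.5 * ||beta||^2  s.t.  y_i (beta^T x_i + beta0) >= 1 for all i."""
    n, p = X.shape

    def objective(w):                     # w = (beta_1, ..., beta_p, beta0)
        return 0.5 * np.dot(w[:p], w[:p])

    constraints = [{"type": "ineq",
                    "fun": lambda w, i=i: y[i] * (X[i] @ w[:p] + w[p]) - 1.0}
                   for i in range(n)]

    res = minimize(objective, x0=np.zeros(p + 1), method="SLSQP",
                   constraints=constraints)
    return res.x[:p], res.x[p]

# Toy separable data (illustrative)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
beta, beta0 = optimal_hyperplane_primal(X, y)
print(beta, beta0, "margin C =", 1.0 / np.linalg.norm(beta))
```

Under the normalization above, $\|\beta\| = 1/C$, so the achieved margin can be read off as $1/\|\beta\|$.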

The Lagrange (primal) function is:

$$L_P = \frac{1}{2}\|\beta\|^2 - \sum_{i=1}^{N} \alpha_i \left[\, y_i(\beta^T x_i + \beta_0) - 1 \,\right]$$

Setting the derivatives to zero, we obtain:

$$\beta = \sum_{i=1}^{N} \alpha_i y_i x_i, \qquad 0 = \sum_{i=1}^{N} \alpha_i y_i$$

Substituting into $L_P$, we obtain the Wolfe dual:

$$L_D = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{k=1}^{N} \alpha_i \alpha_k y_i y_k\, x_i^T x_k$$

subject to $\alpha_i \geq 0$ and $\sum_{i=1}^{N} \alpha_i y_i = 0$.

This is a simpler convex optimization problem.

The Karush-Kuhn-Tucker conditions require:

$$\alpha_i \left[\, y_i(\beta^T x_i + \beta_0) - 1 \,\right] = 0 \quad \forall i$$

– If $\alpha_i > 0$, then $y_i(\beta^T x_i + \beta_0) = 1$; that is, $x_i$ is on the boundary of the slab.

– If $y_i(\beta^T x_i + \beta_0) > 1$, that is, $x_i$ is not on the boundary of the slab, then $\alpha_i = 0$.

The points $x_i$ on the boundary of the slab are called support points.

The solution vector β is a linear combination of the support points:

$$\beta = \sum_{i \,:\, \alpha_i > 0} \alpha_i y_i x_i$$
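As a closing sketch (again with `scipy.optimize.minimize`/SLSQP as a stand-in for a real QP solver, on made-up data), the Wolfe dual can be maximized subject to $\alpha_i \geq 0$ and $\sum_i \alpha_i y_i = 0$, after which $\beta$ is recovered from the support points and $\beta_0$ from the condition $y_i(\beta^T x_i + \beta_0) = 1$ at any support point:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_hyperplane_dual(X, y):
    """Maximize L_D(alpha) s.t. alpha_i >= 0, sum_i alpha_i y_i = 0; recover (beta, beta0)."""
    n = X.shape[0]
    G = (y[:, None] * X) @ (y[:, None] * X).T     # G[i, k] = y_i y_k x_i^T x_k

    def neg_dual(alpha):                          # minimize -L_D(alpha)
        return 0.5 * alpha @ G @ alpha - alpha.sum()

    res = minimize(neg_dual, x0=np.zeros(n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq", "fun": lambda a: a @ y}])
    alpha = res.x
    support = alpha > 1e-6                        # support points: alpha_i > 0
    beta = (alpha * y) @ X                        # beta = sum_i alpha_i y_i x_i
    i0 = int(np.argmax(support))                  # index of one support point
    beta0 = y[i0] - X[i0] @ beta                  # from y_i (beta^T x_i + beta0) = 1
    return beta, beta0, np.where(support)[0]

# Toy separable data (illustrative)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
beta, beta0, support_idx = optimal_hyperplane_dual(X, y)
print(beta, beta0, "support points:", support_idx)
```

Only the points closest to the opposing class should end up with nonzero $\alpha_i$, and the recovered $(\beta, \beta_0)$ should agree with the primal solution up to solver tolerance.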