In this final set of notes on learning theory, we will introduce a different model of machine learning. Specifically, we have so far been considering batch learning settings, in which we are first given a training set to learn with, and our hypothesis is then evaluated on separate test data. In this set of notes, we will consider the online learning setting, in which the algorithm has to make predictions continuously even while it is learning.
In this setting, the learning algorithm is given a sequence of examples $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})$ in order. Specifically, the algorithm first sees $x^{(1)}$ and is asked to predict what it thinks $y^{(1)}$ is. After making its prediction, the true value of $y^{(1)}$ is revealed to the algorithm (and the algorithm may use this information to perform some learning). The algorithm is then shown $x^{(2)}$ and again asked to make a prediction, after which $y^{(2)}$ is revealed, and it may again perform some more learning. This proceeds until we reach $(x^{(m)}, y^{(m)})$. In the online learning setting, we are interested in the total number of errors made by the algorithm during this process. Thus, it models applications in which the algorithm has to make predictions even while it is still learning.
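The protocol can be summarized in a short sketch (added here for illustration, not part of the original notes), assuming hypothetical `predict` and `learn` callables supplied by the learner; the quantity we track is the total number of mistakes.

```python
def run_online(examples, predict, learn):
    """Online learning protocol: predict each x before its label y is revealed.

    examples: iterable of (x, y) pairs presented in sequence.
    predict:  hypothetical learner function mapping x to a predicted label.
    learn:    hypothetical learner function that updates state from (x, y).
    Returns the total number of prediction errors made along the way.
    """
    mistakes = 0
    for x, y in examples:
        if predict(x) != y:   # the prediction is made before y is revealed
            mistakes += 1
        learn(x, y)           # the true label is now available for learning
    return mistakes
```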
We will give a bound on the online learning error of the perceptron algorithm. To make our subsequent derivations easier, we will use the notational convention of denoting the class labels by $y \in \{-1, 1\}$.
Recall that the perceptron algorithm has parameters $\theta \in \mathbb{R}^{n+1}$, and makes its predictions according to
$$h_\theta(x) = g(\theta^T x),$$
where
$$g(z) = \begin{cases} 1 & \text{if } z \geq 0 \\ -1 & \text{if } z < 0. \end{cases}$$
Also, given a training example $(x, y)$, the perceptron learning rule updates the parameters as follows. If $h_\theta(x) = y$, then it makes no change to the parameters. Otherwise, it performs the update
$$\theta := \theta + y x.$$
This looks slightly different from the update rule we had written down earlier in the quarter because here we have changed the labels to be $y \in \{-1, 1\}$. Also, the learning rate parameter $\alpha$ was dropped. The only effect of the learning rate is to scale all the parameters $\theta$ by some fixed constant, which does not affect the behavior of the perceptron.
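As a concrete sketch (an illustration added here, not part of the original notes), the prediction rule and mistake-driven update can be written as follows, with labels in $\{-1, 1\}$ and the parameters held in a NumPy vector `theta`:

```python
import numpy as np

def g(z):
    """Threshold function: returns 1 if z >= 0, and -1 otherwise."""
    return 1.0 if z >= 0 else -1.0

def h(theta, x):
    """Perceptron hypothesis h_theta(x) = g(theta^T x)."""
    return g(np.dot(theta, x))

def update(theta, x, y):
    """Perceptron learning rule: change theta only when (x, y) is misclassified."""
    if h(theta, x) != y:
        theta = theta + y * x   # theta := theta + y*x (learning rate dropped)
    return theta
```

Initializing `theta` to the zero vector and plugging these functions into the online loop sketched earlier reproduces the algorithm analyzed in the theorem below.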
The following theorem gives a bound on the online learning error of the perceptron algorithm, when it is run as an online algorithm that performs an update each time it gets an example wrong. Note that the bound below on the number of errors does not have an explicit dependence on the number of examples $m$ in the sequence, or on the dimension $n$ of the inputs (!).
Theorem (Block, 1962, and Novikoff, 1962). Let a sequence of examples $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})$ be given. Suppose that $\|x^{(i)}\| \leq D$ for all $i$, and further that there exists a unit-length vector $u$ ($\|u\|_2 = 1$) such that $y^{(i)} \cdot (u^T x^{(i)}) \geq \gamma$ for all examples in the sequence (i.e., $u^T x^{(i)} \geq \gamma$ if $y^{(i)} = 1$, and $u^T x^{(i)} \leq -\gamma$ if $y^{(i)} = -1$, so that $u$ separates the data with a margin of at least $\gamma$). Then the total number of mistakes that the perceptron algorithm makes on this sequence is at most $(D/\gamma)^2$.
Proof. The perceptron updates its weights only on those examples on which it makes a mistake. Let $\theta^{(k)}$ be the weights that were being used when it made its $k$-th mistake. So, $\theta^{(1)} = \vec{0}$ (since the weights are initialized to zero), and if the $k$-th mistake was on the example $(x^{(i)}, y^{(i)})$, then $g((x^{(i)})^T \theta^{(k)}) \neq y^{(i)}$, which implies that
$$(x^{(i)})^T \theta^{(k)} y^{(i)} \leq 0. \tag{1}$$
Also, from the perceptron learning rule, we would have that $\theta^{(k+1)} = \theta^{(k)} + y^{(i)} x^{(i)}$.
We then have
$$(\theta^{(k+1)})^T u = (\theta^{(k)})^T u + y^{(i)} (x^{(i)})^T u \geq (\theta^{(k)})^T u + \gamma.$$
By a straightforward inductive argument, this implies that
$$(\theta^{(k+1)})^T u \geq k\gamma. \tag{2}$$
Also, we have that
$$\begin{aligned}
\|\theta^{(k+1)}\|^2 &= \|\theta^{(k)} + y^{(i)} x^{(i)}\|^2 \\
&= \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 + 2 y^{(i)} (x^{(i)})^T \theta^{(k)} \\
&\leq \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 \\
&\leq \|\theta^{(k)}\|^2 + D^2. \tag{3}
\end{aligned}$$
The third step above used Equation (1). Moreover, again by applying a straightforward inductive argument, we see that (3) implies
$$\|\theta^{(k+1)}\|^2 \leq k D^2. \tag{4}$$
Putting together (2) and (4), we find that
$$\sqrt{k}\, D \geq \|\theta^{(k+1)}\| \geq (\theta^{(k+1)})^T u \geq k\gamma.$$
The second inequality above follows from the fact that $u$ is a unit-length vector (and $z^T u = \|z\| \cdot \|u\| \cos\phi \leq \|z\| \cdot \|u\|$, where $\phi$ is the angle between $z$ and $u$). Our result implies that $k \leq (D/\gamma)^2$. Hence, if the perceptron made a $k$-th mistake, then $k \leq (D/\gamma)^2$.
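As an empirical sanity check of the bound (an added illustration with arbitrarily chosen data, not part of the original notes), one can sample inputs that a known unit vector $u$ separates with margin at least $\gamma$, run the online perceptron, and confirm that the mistake count never exceeds $(D/\gamma)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2000
u = rng.normal(size=n)
u /= np.linalg.norm(u)                  # unit-length separating direction (assumed known here)

# Sample points and keep only those separated by u with margin at least gamma.
gamma = 0.5
X = rng.uniform(-1.0, 1.0, size=(4 * m, n))
y = np.where(X @ u >= 0, 1.0, -1.0)
keep = y * (X @ u) >= gamma
X, y = X[keep][:m], y[keep][:m]

D = np.max(np.linalg.norm(X, axis=1))   # bound on ||x|| over the sequence

theta = np.zeros(n)
mistakes = 0
for x_i, y_i in zip(X, y):
    y_hat = 1.0 if theta @ x_i >= 0 else -1.0
    if y_hat != y_i:
        mistakes += 1
        theta = theta + y_i * x_i       # update only on mistakes
print(mistakes, (D / gamma) ** 2)       # mistakes should be at most (D/gamma)**2
```

Because the bound depends only on $D$ and $\gamma$, the printed ceiling stays the same however long the sequence is made.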