
The number of unique labellings of the training data that can be achieved with linear classifiers is, in fact, finite. A line can be defined by picking any pair of training points, as illustrated in [link]. Two classifiers can be defined from each such line: one that outputs a label "1" for everything on or above the line, and another that outputs "0" for everything on or above it. There exist $\binom{n}{2}$ such pairs of training points, and these define all possible unique labellings of the training data. Therefore, there are at most $2\binom{n}{2}$ unique linear classifiers for any random set of $n$ 2-dimensional features (the factor of 2 is due to the fact that for each such line there are 2 possible assignments of the labelling).

Fitting a linear classifier to 2-dimensional data. There are infinitely many such classifiers. We can generate a linear classifier by choosing two data points, drawing a line with both points on one side, and declaring all points on or above the line to be "$+1$" (or "$-1$") and all points below the line to be "$-1$" (or "$+1$").
From the discussion in the previous figure, we see that the two linear classifiers depicted in this figure are equivalent for this set of data points, and hence relative to the set of $n$ training data points there are only on the order of $n^2$ unique linear classifiers.

Thus, instead of infinitely many linear classifiers, we realize that as far as a random sample of $n$ training data points is concerned, there are at most

$$2\binom{n}{2} = 2\,\frac{n!}{(n-2)!\,2!} = n(n-1)$$

unique linear classifiers. That is, using linear classification rules, there are at most $n(n-1) \approx n^2$ unique label assignments for $n$ data points. If we like, we can encode each possibility with $\log_2\!\big(n(n-1)\big) \approx 2\log_2 n$ bits. In $d$ dimensions there are at most $2\binom{n}{d}$ hyperplane classification rules, which can be encoded in roughly $d\log_2 n$ bits. Roughly speaking, the number of bits required for encoding each model is the VC dimension. The remarkable aspect of the VC dimension is that it is often finite even when $\mathcal{F}$ is infinite (as in this example).
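To make the counting argument above concrete, here is a small Python sketch (not part of the original notes; the random data, variable names, and use of NumPy are illustrative assumptions). It enumerates the line through every pair of points, records the labelling "on or above = 1" together with its complement, and checks that the number of distinct labellings of the sample never exceeds $n(n-1)$.

# Illustrative check (assumed example, not from the notes): lines through pairs
# of training points induce at most n*(n-1) distinct labellings of those points.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10
X = rng.standard_normal((n, 2))          # n random 2-dimensional feature vectors

labellings = set()
for i, j in itertools.combinations(range(n), 2):
    p, q = X[i], X[j]
    d = q - p                            # direction of the line through p and q
    # 2-D cross product: >= 0 means the point lies on the line or on its "positive" side
    side = (d[0] * (X[:, 1] - p[1]) - d[1] * (X[:, 0] - p[0])) >= 0
    labellings.add(tuple(side))          # label "1" for points on (or on one side of) the line
    labellings.add(tuple(~side))         # the complementary label assignment

print(len(labellings), "unique labellings; bound n*(n-1) =", n * (n - 1))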

If $X$ has $d$ dimensions in total, we might consider linear classifiers based on $1, 2, \ldots, d$ features at a time. Lower dimensional hyperplanes are less complex than higher dimensional ones. Suppose we set

$$\mathcal{F}_1 = \{\text{linear classifiers using 1 feature}\}, \quad \mathcal{F}_2 = \{\text{linear classifiers using 2 features}\}, \quad \text{and so on}.$$

These spaces have increasing VC dimensions, and we can try to balance the empirical risk and a cost function depending on the VC dimension. Such procedures are often referred to as Structural Risk Minimization. This gives you a glimpse of what the VC dimension is all about. In future lectures we will revisit this topic in greater detail.
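To make this "balancing" concrete, one common way to write the structural risk minimization rule is sketched below; the penalty $C(\cdot,\cdot)$, which should grow with the VC dimension of $\mathcal{F}_k$ and shrink with the sample size $n$, is left abstract here and is not specified in these notes:

$$\hat{f} = \arg\min_{k \ge 1}\ \min_{f \in \mathcal{F}_k} \Big\{ \hat{R}_n(f) + C\big(\mathrm{VC}(\mathcal{F}_k),\, n\big) \Big\}.$$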

Hold-out methods

The basic idea of "hold-out" methods is to split the $n$ samples $D = \{X_i, Y_i\}_{i=1}^{n}$ into a training set, $D_T$, and a test set, $D_V$.

$$D_T = \{X_i, Y_i\}_{i=1}^{m}, \qquad D_V = \{X_i, Y_i\}_{i=m+1}^{n}.$$

Now, suppose we have a collection of different model spaces $\{\mathcal{F}_\lambda\}$ indexed by $\lambda \in \Lambda$ (e.g., $\mathcal{F}_\lambda$ is the set of polynomials of degree $d$, with $\lambda = d$), or suppose that we have a collection of complexity penalization criteria $L_\lambda(f)$ indexed by $\lambda$ (e.g., let $L_\lambda(f) = \hat{R}(f) + \lambda C(f)$, with $\lambda \in \mathbb{R}^+$). We can obtain candidate solutions using the training set as follows. Define

$$\hat{R}_m(f) = \frac{1}{m}\sum_{i=1}^{m} \ell\big(f(X_i), Y_i\big)$$

and take

$$\hat{f}_\lambda = \arg\min_{f \in \mathcal{F}_\lambda} \hat{R}_m(f)$$

or

$$\hat{f}_\lambda = \arg\min_{f \in \mathcal{F}} \hat{R}_m(f) + \lambda C(f).$$

This provides us with a set of candidate solutions $\{\hat{f}_\lambda\}$. Then we can define the hold-out error estimate using the test set:

$$\hat{R}_V(f) = \frac{1}{n-m}\sum_{i=m+1}^{n} \ell\big(f(X_i), Y_i\big),$$

and select the "best" model to be $\hat{f} = \hat{f}_{\hat{\lambda}}$, where

$$\hat{\lambda} = \arg\min_{\lambda} \hat{R}_V\big(\hat{f}_\lambda\big).$$

This type of procedure has many nice theoretical guarantees, provided both the training and test set sizes grow with $n$.
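To illustrate the procedure, the Python sketch below implements hold-out selection for the polynomial-degree example mentioned above ($\lambda = d$). The squared-error loss, the synthetic data, and the 50/50 split are assumptions made for this example, not details from the notes: each candidate $\hat{f}_\lambda$ is fit on the training set by least squares, and $\hat{\lambda}$ is the degree with the smallest hold-out error.

# Illustrative hold-out model selection over polynomial degrees (assumed example).
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 100                                   # n samples; the first m form the training set
X = rng.uniform(-1, 1, n)
Y = np.sin(3 * X) + 0.3 * rng.standard_normal(n)  # assumed synthetic data

X_T, Y_T = X[:m], Y[:m]                           # training set D_T
X_V, Y_V = X[m:], Y[m:]                           # test (validation) set D_V

def fit_poly(x, y, degree):
    # Empirical risk minimizer in F_lambda (degree-`degree` polynomials, squared error).
    return np.polyfit(x, y, degree)

def risk(coeffs, x, y):
    # Average squared-error loss of a fitted polynomial on (x, y).
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

degrees = range(1, 11)                                    # the index set Lambda (lambda = d)
candidates = {d: fit_poly(X_T, Y_T, d) for d in degrees}  # the set {f_hat_lambda}
holdout_err = {d: risk(c, X_V, Y_V) for d, c in candidates.items()}

best_degree = min(holdout_err, key=holdout_err.get)       # lambda_hat
print("hold-out selected degree:", best_degree)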

Leaving-one-out cross-validation

A very popular hold-out method is the so-called "leaving-one-out cross-validation" studied in depth by Grace Wahba (UW-Madison, Statistics). For each $\lambda$ we compute

$$\hat{f}_\lambda^{(k)} = \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1,\, i\neq k}^{n} \ell\big(f(X_i), Y_i\big) + \lambda C(f)$$

or

$$\hat{f}_\lambda^{(k)} = \arg\min_{f \in \mathcal{F}_\lambda} \frac{1}{n}\sum_{i=1,\, i\neq k}^{n} \ell\big(f(X_i), Y_i\big).$$

Then we define the cross-validation function

$$V(\lambda) = \frac{1}{n}\sum_{k=1}^{n} \ell\big(\hat{f}_\lambda^{(k)}(X_k), Y_k\big)$$

and select

$$\lambda^* = \arg\min_{\lambda} V(\lambda).$$
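A rough Python sketch of this computation follows (again, the polynomial model class, squared-error loss, and synthetic data are assumptions made for illustration): $V(\lambda)$ is obtained by refitting the model $n$ times, each time leaving out one sample and evaluating the loss at the held-out point.

# Illustrative leave-one-out cross-validation for polynomial degree selection
# (assumed model class, data, and squared-error loss; not from the original notes).
import numpy as np

rng = np.random.default_rng(2)
n = 60
X = rng.uniform(-1, 1, n)
Y = np.sin(3 * X) + 0.3 * rng.standard_normal(n)   # assumed synthetic data

def loocv_score(x, y, degree):
    # V(lambda): average leave-one-out squared error for a given polynomial degree.
    losses = []
    for k in range(len(x)):
        keep = np.arange(len(x)) != k                  # leave the k-th sample out
        coeffs = np.polyfit(x[keep], y[keep], degree)  # refit: f_hat_lambda^(k)
        losses.append((np.polyval(coeffs, x[k]) - y[k]) ** 2)  # loss at the held-out point
    return float(np.mean(losses))

V = {d: loocv_score(X, Y, d) for d in range(1, 11)}    # V(lambda) over the index set
best_degree = min(V, key=V.get)                        # lambda_star = arg min V(lambda)
print("LOOCV-selected degree:", best_degree)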

Summary

To summarize, this lecture gave a brief and incomplete survey of different methods for dealing with the issues of overfitting and model selection. Given a set of training data, $D_n = \{X_i, Y_i\}_{i=1}^{n}$, our overall goal is to find

$$f^* = \arg\min_{f \in \mathcal{F}} R(f)$$

from some collection of functions, $\mathcal{F}$. Because we do not know the true distribution $P_{XY}$ underlying the data points $D_n$, it is difficult to get an exact handle on the risk, $R(f)$. If we only focus on minimizing the empirical risk $\hat{R}(f)$ we end up overfitting to the training data. Two general approaches were presented.

  1. In the first approach we consider an indexed collection of spaces $\{\mathcal{F}_\lambda\}_{\lambda \in \Lambda}$ such that the complexity of $\mathcal{F}_\lambda$ increases as $\lambda$ increases, and
    $$\lim_{\lambda \to \infty} \mathcal{F}_\lambda = \mathcal{F}.$$
    A solution is given by
    $$\hat{f}_{\lambda^*} = \arg\min_{f \in \mathcal{F}_{\lambda^*}} \hat{R}_n(f)$$
    where either $\lambda^*$ is a function which increases with $n$,
    $$\lambda^* = \lambda(n),$$
    or $\lambda^*$ is chosen by hold-out validation.
  2. The alternative approach is to incorporate a penalty term into the risk minimization problem formulation. Here we consider an indexed collection of penalties $\{C_\lambda\}_{\lambda \in \Lambda}$ satisfying the following properties:
    1. $C_\lambda : \mathcal{F} \to \mathbb{R}^+$;
    2. For each $f \in \mathcal{F}$ and $\lambda_1 < \lambda_2$ we have $C_{\lambda_1}(f) \le C_{\lambda_2}(f)$;
    3. There exists $\lambda_0 \in \Lambda$ such that $C_{\lambda_0}(f) = 0$ for all $f \in \mathcal{F}$.
    In this formulation we find a solution
    $$\hat{f}_{\lambda^*} = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f) + C_{\lambda^*}(f),$$
    where either $\lambda^* = \lambda(n)$, a function growing with the number of data samples $n$, or $\lambda^*$ is selected by hold-out validation (a small illustrative sketch of this penalized approach follows this list).
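As a minimal sketch of the second approach (the ridge-style penalty $C_\lambda(f) = \lambda\|w\|^2$ on the polynomial coefficients, the fixed feature map, and the synthetic data are assumptions made for illustration, not details from the notes), the candidate for each $\lambda$ minimizes the penalized empirical risk in closed form, and $\lambda^*$ is then chosen by hold-out validation.

# Illustrative penalized empirical risk minimization (approach 2): ridge-penalized
# polynomial regression with lambda selected by hold-out validation (assumed example).
import numpy as np

rng = np.random.default_rng(3)
n, m, degree = 200, 100, 8
X = rng.uniform(-1, 1, n)
Y = np.sin(3 * X) + 0.3 * rng.standard_normal(n)   # assumed synthetic data
Phi = np.vander(X, degree + 1)                     # fixed polynomial feature map

def fit_penalized(phi, y, lam):
    # Minimize (1/len(y)) * ||phi w - y||^2 + lam * ||w||^2 (closed-form solution).
    d = phi.shape[1]
    return np.linalg.solve(phi.T @ phi / len(y) + lam * np.eye(d), phi.T @ y / len(y))

def holdout_error(lam):
    w = fit_penalized(Phi[:m], Y[:m], lam)         # candidate f_hat_lambda from D_T
    return np.mean((Phi[m:] @ w - Y[m:]) ** 2)     # hold-out risk on D_V

lambdas = np.logspace(-4, 1, 20)                   # the index set Lambda
best_lam = min(lambdas, key=holdout_error)         # lambda_star via hold-out validation
print("selected lambda:", best_lam)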

Consistency

If an estimator or classifier $\hat{f}_{\lambda^*}$ satisfies

$$E\big[R(\hat{f}_{\lambda^*})\big] \to \inf_{f \in \mathcal{F}} R(f) \quad \text{as } n \to \infty,$$

then we say that $\hat{f}_{\lambda^*}$ is $\mathcal{F}$-consistent with respect to the risk $R$. When the context is clear, we will simply say that $\hat{f}$ is consistent.

Source: OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009. Download for free at http://cnx.org/content/col10532/1.3