
Three elements of statistical data analysis

  • A probabilistic formulation of learning from data and prediction problems.
  • Performance characterization:
    • concentration inequalities
    • uniform deviation bounds
    • approximation theory
    • rates of convergence
  • Practical algorithms that run in polynomial time (e.g., decision trees, wavelet methods, support vector machines).

Learning from data

To formulate the basic learning-from-data problem, we must specify several elements: data spaces, probability measures, loss functions, and statistical risk.

Data spaces

Learning from data begins with a specification of two spaces:

$\mathcal{X}$ : Input Space
$\mathcal{Y}$ : Output Space.

The input space is also sometimes called the “feature space” or “signal domain.” The output space is also called the “class label space,” “outcome space,” “response space,” or “signal range.”

$\mathcal{X} = \mathbb{R}^d$ : $d$-dimensional Euclidean space of “feature vectors”
$\mathcal{Y} = \{0, 1\}$ : two classes or “class labels”
$\mathcal{X} = \mathbb{R}$ : one-dimensional signal domain (e.g., time domain)
$\mathcal{Y} = \mathbb{R}$ : real-valued signal

A classic example is estimating a signal $f$ in noise:

$$Y = f(X) + W,$$

where $X$ is a random sample point on the real line and $W$ is noise independent of $X$.
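
As a minimal simulation sketch of this model (the particular signal $f$, noise level, and sample size below are illustrative assumptions, not part of the text), one can generate data from $Y = f(X) + W$ as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A hypothetical "true" signal; any function would do for illustration.
    return np.sin(2 * np.pi * x)

n = 100
X = rng.uniform(0.0, 1.0, size=n)   # random sample points on the real line
W = rng.normal(0.0, 0.2, size=n)    # noise drawn independently of X
Y = f(X) + W                        # noisy observations of the signal
```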

Probability measure and expectation

Define a joint probability distribution on $\mathcal{X} \times \mathcal{Y}$ denoted $P_{X,Y}$. Let $(X, Y)$ denote a pair of random variables distributed according to $P_{X,Y}$. We will also have use for marginal and conditional distributions. Let $P_X$ denote the marginal distribution on $\mathcal{X}$, and let $P_{Y|X}$ denote the conditional distribution of $Y$ given $X$. For any distribution $P$, let $p$ denote its density function with respect to the corresponding dominating measure; e.g., Lebesgue measure for continuous random variables or counting measure for discrete random variables.

Define the expectation operator:

$$\mathbb{E}_{X,Y}\big[f(X,Y)\big] \;\triangleq\; \int f(x,y)\, dP_{X,Y}(x,y) \;=\; \iint f(x,y)\, p_{X,Y}(x,y)\, dx\, dy .$$
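
For a concrete feel of this operator, here is a minimal sanity-check sketch (the joint distribution and the test function are assumptions made up for illustration, not from the text), comparing a Monte Carlo average to the exact expectation:

```python
import numpy as np

# Assumed example: X and Y independent standard normals, f(x, y) = x**2 + y**2,
# whose exact expectation E_{X,Y}[f(X,Y)] equals 2.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
estimate = np.mean(x**2 + y**2)   # Monte Carlo average; should be close to 2.0
```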

We will also make use of corresponding marginal and conditional expectations such as $\mathbb{E}_X$ and $\mathbb{E}_{Y|X}$.

Wherever convenient and obvious from context, we may drop the subscripts (e.g., $\mathbb{E}$ instead of $\mathbb{E}_{X,Y}$) for notational ease.

Loss functions

A loss function is a mapping

$$\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R} .$$

In binary classification problems, $\mathcal{Y} = \{0, 1\}$. The $0/1$ loss function is usually used: $\ell(y_1, y_2) = \mathbf{1}_{\{y_1 \neq y_2\}}$, where $\mathbf{1}_A$ is the indicator function, which takes the value 1 if condition $A$ is true and 0 otherwise. We will typically compare a true label $y$ with a prediction $\hat{y}$, in which case the $0/1$ loss simply counts misclassifications.

In regression or estimation problems, $\mathcal{Y} = \mathbb{R}$. The squared error loss function is often employed: $\ell(y_1, y_2) = (y_1 - y_2)^2$, the square of the difference between $y_1$ and $y_2$. In applications, we are interested in comparing a true value $y$ with an estimate $\hat{y}$.
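
As a minimal sketch (the function names are my own, not from the text), these two loss functions can be written directly:

```python
import numpy as np

def zero_one_loss(y_true, y_pred):
    """0/1 loss: 1 if the labels disagree, 0 otherwise (counts misclassifications)."""
    return (np.asarray(y_true) != np.asarray(y_pred)).astype(float)

def squared_error_loss(y_true, y_pred):
    """Squared error loss: the square of the difference between truth and estimate."""
    return (np.asarray(y_true) - np.asarray(y_pred)) ** 2

# Example: zero_one_loss([0, 1, 1], [0, 0, 1]) -> array([0., 1., 0.])
```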

Statistical risk

The basic problem in learning is to determine a mapping $f : \mathcal{X} \to \mathcal{Y}$ that takes an input $x \in \mathcal{X}$ and predicts the corresponding output $y \in \mathcal{Y}$. The performance of a given map $f$ is measured by its expected loss or risk:

$$R(f) \;\triangleq\; \mathbb{E}_{X,Y}\big[\ell(f(X), Y)\big] .$$

The risk tells us how well, on average, the predictor $f$ performs with respect to the chosen loss function. A key quantity of interest is the minimum risk value, defined as

$$R^* \;=\; \inf_f R(f),$$

where the infimum is taken over all measurable functions.
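
To make the risk concrete, here is a minimal Monte Carlo sketch (the joint distribution and the two candidate predictors below are assumptions for illustration, not from the text) estimating $R(f)$ under the squared error loss for two simple maps $f$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed joint distribution for illustration: X ~ Uniform(0, 1), Y = 2*X + noise.
n = 1_000_000
X = rng.uniform(0.0, 1.0, size=n)
Y = 2.0 * X + rng.normal(0.0, 0.5, size=n)

def risk(f):
    """Monte Carlo estimate of R(f) = E[(f(X) - Y)^2] under squared error loss."""
    return np.mean((f(X) - Y) ** 2)

print(risk(lambda x: np.zeros_like(x)))  # predict 0 everywhere: large risk
print(risk(lambda x: 2.0 * x))           # predict the noiseless signal: risk near 0.25
```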

The learning problem

Suppose that $(X, Y)$ are distributed according to $P_{X,Y}$ ($(X, Y) \sim P_{X,Y}$ for short). Our goal is to find a map $f$ so that $f(X) \approx Y$ with high probability. Ideally, we would choose $f$ to minimize the risk $R(f) = \mathbb{E}[\ell(f(X), Y)]$. However, in order to compute the risk (and hence optimize it) we need to know the joint distribution $P_{X,Y}$. In many problems of practical interest, the joint distribution is unknown, and minimizing the risk directly is not possible.

Suppose that we have some exemplary samples from the distribution. Specifically, consider $n$ samples $\{X_i, Y_i\}_{i=1}^n$ distributed independently and identically (iid) according to the otherwise unknown $P_{X,Y}$. Let us call these samples training data, and denote the collection by $D_n \triangleq \{X_i, Y_i\}_{i=1}^n$. Let's also define a collection of candidate mappings $\mathcal{F}$. We will use the training data $D_n$ to pick a mapping $f_n \in \mathcal{F}$ that we hope will be a good predictor. This is sometimes called the Model Selection problem. Note that the selected model $f_n$ is a function of the training data:

$$f_n(X) = f(X; D_n),$$

which is what the subscript $n$ in $f_n$ refers to. The risk of $f_n$ is given by

$$R(f_n) = \mathbb{E}_{X,Y}\big[\ell(f_n(X), Y)\big] .$$

Note that since $f_n$ depends on $D_n$ in addition to a new random pair $(X, Y)$, the risk $R(f_n)$ is a random variable (i.e., a function of the training data $D_n$). Therefore, we are interested in the expected risk, computed over random realizations of the training data:

$$\mathbb{E}_{D_n}\big[R(f_n)\big] .$$

We hope that $f_n$ produces a small expected risk.

The notion of expected risk can be interpreted as follows. We would like to define an algorithm (a model selection process) that performs well on average, over any random sample of $n$ training data. The expected risk is a measure of the expected performance of the algorithm with respect to the chosen loss function. That is, we are not gauging the risk of a particular map $f \in \mathcal{F}$, but rather measuring the performance of the algorithm that takes any realization of training data and selects an appropriate model in $\mathcal{F}$.
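
One way to see this numerically is the following sketch (the data-generating distribution, the simple threshold-classifier class $\mathcal{F}$, and the rule of picking $f_n$ by minimizing training error are all assumptions made for illustration): draw many independent training sets of size $n$, select a model from each, estimate each selected model's risk on a large independent sample, and average.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(m):
    """Assumed P_{X,Y}: X ~ Uniform(0,1); Y = 1{X > 0.5}, with labels flipped w.p. 0.1."""
    X = rng.uniform(0.0, 1.0, size=m)
    Y = (X > 0.5).astype(int)
    flip = rng.random(m) < 0.1
    return X, np.where(flip, 1 - Y, Y)

thresholds = np.linspace(0.0, 1.0, 101)     # candidate class F: f_t(x) = 1{x > t}

def select_model(X, Y):
    """Pick f_n from F by minimizing the training 0/1 loss (an illustrative rule)."""
    errors = [np.mean((X > t).astype(int) != Y) for t in thresholds]
    return thresholds[int(np.argmin(errors))]

n, trials = 50, 200
X_test, Y_test = sample(200_000)            # large sample used to approximate R(f_n)
risks = []
for _ in range(trials):
    X_train, Y_train = sample(n)
    t_hat = select_model(X_train, Y_train)
    risks.append(np.mean((X_test > t_hat).astype(int) != Y_test))

print(np.mean(risks))   # Monte Carlo estimate of E_{D_n}[R(f_n)]; roughly 0.1 here
```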

This course is concerned with determining “good” model spaces $\mathcal{F}$ and useful and effective model selection algorithms.

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3