
In decision making problems, we know the value of the observation $X$, but do not know the value $y$. Therefore, it is appealing to consider the conditional density or pmf as a function of the unknown value $y$, with $X$ fixed at its observed value. The resulting function is called the likelihood function. As the name suggests, values of $y$ where the likelihood function is largest are intuitively reasonable indicators of the true value of the unknown quantity, which we will denote by $y^*$. The rationale is that these values would produce conditional densities or pmfs that place high probability on the observation $X = x$.

The Maximum Likelihood Estimator (MLE) is defined to be the value of $y$ that maximizes the likelihood function; i.e., in the continuous case

\[
\hat{y}(X) = \arg\max_{y} \, p_{X|Y}(X \mid y),
\]

with an analogous definition for the discrete case obtained by replacing the conditional density with the conditional pmf. The decision rule $\hat{y}(X)$ is called an "estimator," which is common terminology in decision problems involving a continuous parameter. Note that maximizing the likelihood function is equivalent to minimizing the negative log-likelihood function (since the logarithm is a monotonic transformation). Now let $y^*$ denote the true value of $Y$. Then we can view the negative log-likelihood as a loss function

\[
L(y, y^*) = -\log p_{X|Y}(X \mid y),
\]

where, on the right-hand side, the dependence on $y^*$ is embodied in the observation $X$, which is distributed according to $p_{X|Y}(x \mid y^*)$. An interesting special case of the MLE results when the conditional density $p_{X|Y}(X \mid y)$ is Gaussian, in which case the negative log-likelihood corresponds to a squared error loss function.
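To make this special case concrete, here is a short worked sketch (added for illustration; it assumes a single observation with a known noise variance $\sigma^2$, which is not specified in the original text). For the Gaussian conditional density

\[
p_{X|Y}(x \mid y) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-y)^2}{2\sigma^2}\right),
\qquad
-\log p_{X|Y}(x \mid y) = \frac{(x-y)^2}{2\sigma^2} + \frac{1}{2}\log\!\left(2\pi\sigma^2\right),
\]

the second term does not depend on $y$, so minimizing the negative log-likelihood is equivalent to minimizing the squared error $(x-y)^2$, and the MLE is simply $\hat{y}(X) = X$.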

Now let us consider the expectation of this loss with respect to the conditional distribution $p_{X|Y}(x \mid y^*)$:

\[
-\,\mathbb{E}\!\left[\log p_{X|Y}(X \mid y)\right]
= \int \log \frac{1}{p_{X|Y}(x \mid y)} \; p_{X|Y}(x \mid y^*) \, dx .
\]

The true value $y^*$ minimizes the expected negative log-likelihood (or, equivalently, maximizes the expected log-likelihood). To see this, compare the expected log-likelihood of $y^*$ with that of any other value $y$:

\[
\mathbb{E}\!\left[\log p_{X|Y}(X \mid y^*) - \log p_{X|Y}(X \mid y)\right]
= \mathbb{E}\!\left[\log \frac{p_{X|Y}(X \mid y^*)}{p_{X|Y}(X \mid y)}\right]
= \int \log \frac{p_{X|Y}(x \mid y^*)}{p_{X|Y}(x \mid y)} \; p_{X|Y}(x \mid y^*) \, dx
= \mathrm{KL}\!\left(p_{X|Y}(x \mid y^*),\, p_{X|Y}(x \mid y)\right).
\]

The quantity $\mathrm{KL}\!\left(p_{X|Y}(x \mid y^*),\, p_{X|Y}(x \mid y)\right)$ is called the Kullback-Leibler (KL) divergence between the conditional density function $p_{X|Y}(x \mid y^*)$ and $p_{X|Y}(x \mid y)$. The KL divergence is non-negative, and zero if and only if the two densities are equal [link]. So we see that the KL divergence acts as a sort of risk function in the context of Maximum Likelihood Estimation.
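The identity above can also be checked numerically. The following sketch is an illustrative addition (not part of the original text); it assumes the Gaussian model $p_{X|Y}(x \mid y) = \mathcal{N}(x;\, y, 1)$ and uses hypothetical helper names. It estimates the expected log-likelihood gap by Monte Carlo and compares it to the closed-form KL divergence $(y - y^*)^2/2$ for unit-variance Gaussians.

import numpy as np

def kl_unit_var_gaussians(y_true, y):
    # Closed-form KL( N(y_true, 1) || N(y, 1) ) = (y - y_true)^2 / 2.
    return 0.5 * (y - y_true) ** 2

def expected_loglik_gap(y_true, y, n_samples=200_000, seed=0):
    # Monte Carlo estimate of E[ log p(X|y_true) - log p(X|y) ] with X ~ N(y_true, 1).
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=y_true, scale=1.0, size=n_samples)
    # The Gaussian normalizing constant cancels in the difference of log-densities.
    return np.mean(-0.5 * (x - y_true) ** 2 + 0.5 * (x - y) ** 2)

y_star = 1.0
for y in (0.0, 0.5, 1.0, 2.0):
    print(y, expected_loglik_gap(y_star, y), kl_unit_var_gaussians(y_star, y))

Up to sampling error, the Monte Carlo estimates match the closed-form KL values: the gap is non-negative and vanishes only at $y = y^* = 1$, consistent with the argument above.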

The Cramér-Rao lower bound

The MLE is based on finding the value of $Y$ that maximizes the likelihood function. Intuitively, the more distinct the maximum point, say a well isolated peak in the likelihood function, the easier it will be to distinguish the MLE from alternative decisions. Consider the case in which $Y$ is a scalar quantity. The "peakiness" of the log-likelihood function can be gauged by examining its curvature, $-\frac{\partial^2 \log p_{X|Y}(x \mid y)}{\partial y^2}$, at the point of maximum likelihood. The higher the curvature, the more peaked the behavior of the likelihood function at the maximum point. Of course, we hope that the MLE will be a good predictor (decision) for the unknown true value $y^*$. So, rather than looking at the curvature of the log-likelihood function at the maximum likelihood point, a more appropriate measure of how easy it will be to distinguish $y^*$ from the alternatives is the expected curvature of the log-likelihood function evaluated at the value $y^*$, where the expectation is taken over all possible observations with respect to the conditional density $p_{X|Y}(x \mid y^*)$. This quantity, denoted
\[
I(y^*) = \mathbb{E}\!\left[-\frac{\partial^2 \log p_{X|Y}(X \mid y)}{\partial y^2}\right]\Bigg|_{y = y^*},
\]
is called the Fisher Information (FI). In fact, the FI provides us with an important performance bound known as the Cramér-Rao Lower Bound (CRLB).
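As an illustrative sketch (not part of the original text, and again assuming the Gaussian observation model with known variance $\sigma^2$), the Fisher Information can be computed in closed form:

\[
\log p_{X|Y}(x \mid y) = -\frac{(x-y)^2}{2\sigma^2} - \frac{1}{2}\log\!\left(2\pi\sigma^2\right),
\qquad
-\frac{\partial^2 \log p_{X|Y}(x \mid y)}{\partial y^2} = \frac{1}{\sigma^2},
\]

so $I(y^*) = 1/\sigma^2$ for a single observation, and $n/\sigma^2$ for $n$ independent observations. In this example the CRLB, which states that any unbiased estimator $\hat{y}$ satisfies $\operatorname{Var}(\hat{y}) \ge 1/I(y^*)$, gives $\operatorname{Var}(\hat{y}) \ge \sigma^2/n$, a bound attained by the sample mean.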

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3