
The criterion used in the previous section---minimize the average cost of an incorrect decision---may seem to be a contrived way of quantifying decisions. Well, often it is. For example, the Bayesian decision rule depends explicitly on the a priori probabilities. A rational method of assigning values to these---either by experiment or through true knowledge of the relative likelihood of each model---may be unreasonable. In this section, we develop alternative decision rules that try to respond to such objections. One essential point will emerge from these considerations: the likelihood ratio persists as the core of optimal detectors as optimization criteria and problem complexity change. Even criteria remote from error measures of performance can result in the likelihood ratio test. Such an invariance does not occur often in signal processing and underlines the likelihood ratio test's importance.

Maximizing the probability of a correct decision

As only one model can describe any given set of data (the models are mutually exclusive), the probability of being correct $P_c$ for distinguishing two models is given by
$$P_c = \Pr[\text{say } \mathcal{M}_0 \text{ when } \mathcal{M}_0 \text{ true}] + \Pr[\text{say } \mathcal{M}_1 \text{ when } \mathcal{M}_1 \text{ true}]$$
We wish to determine the optimum decision region placement. Expressing the probability of being correct in terms of the likelihood functions $p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r})$, the a priori probabilities, and the decision regions, we have
$$P_c = \int_{Z_0} \pi_0\, p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} + \int_{Z_1} \pi_1\, p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r}$$
We want to maximize $P_c$ by selecting the decision regions $Z_0$ and $Z_1$. Mimicking the ideas of the previous section, we associate each value of $\mathbf{r}$ with the largest integral in the expression for $P_c$. Decision region $Z_0$, for example, is defined by the collection of values of $\mathbf{r}$ for which the first term is largest. As all of the quantities involved are non-negative, the decision rule maximizing the probability of a correct decision is

Given $\mathbf{r}$, choose $\mathcal{M}_i$ for which the product $\pi_i\, p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r})$ is largest.
When we must select among more than two models, this result still applies (prove this for yourself). Simple manipulations lead to the likelihood ratio test when we must decide between two models:
$$\frac{p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})}{p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})} \;\underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}}\; \frac{\pi_0}{\pi_1}$$
Note that if the Bayes' costs were chosen so that $C_{ii} = 0$ and $C_{ij} = C$ ($i \neq j$), the Bayes' cost and the maximum-probability-correct thresholds would be the same.
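To make the rule concrete, here is a minimal numerical sketch, not from the original text; the two Gaussian models, their means, the common variance, and the priors are all illustrative assumptions. It applies both forms of the rule, choosing the largest $\pi_i\, p_{\mathbf{r}|\mathcal{M}_i}(\mathbf{r})$ and running the equivalent likelihood ratio test against the threshold $\pi_0/\pi_1$, and verifies that they produce identical decisions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative two-model setup (values are assumptions, not from the text):
# M0: r ~ N(0, 1),  M1: r ~ N(1, 1), with priors pi0 = 0.7, pi1 = 0.3.
pi0, pi1 = 0.7, 0.3
m0, m1, sigma = 0.0, 1.0, 1.0

rng = np.random.default_rng(0)
r = rng.normal(m1, sigma, size=10)   # pretend these observations came from M1

p0 = norm.pdf(r, m0, sigma)          # likelihood of each observation under M0
p1 = norm.pdf(r, m1, sigma)          # likelihood of each observation under M1

# Form 1: choose the model with the largest pi_i * p_i(r).
decide_map = (pi1 * p1 > pi0 * p0).astype(int)

# Form 2: likelihood ratio test, p1(r)/p0(r) compared against pi0/pi1.
decide_lrt = (p1 / p0 > pi0 / pi1).astype(int)

assert np.array_equal(decide_map, decide_lrt)  # identical decision rules
print(decide_map)
```

Because $p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r}) > 0$, dividing $\pi_1 p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r}) > \pi_0 p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})$ through by $\pi_1 p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})$ shows the two forms are algebraically identical, which is what the assertion checks.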

To evaluate the quality of the decision rule, we usually compute the probability of error $P_e$ rather than the probability of being correct. This quantity can be expressed in terms of the observations, the likelihood ratio, and the sufficient statistic.

$$\begin{aligned}
P_e &= \pi_0 \int_{Z_1} p_{\mathbf{r}|\mathcal{M}_0}(\mathbf{r})\, d\mathbf{r} + \pi_1 \int_{Z_0} p_{\mathbf{r}|\mathcal{M}_1}(\mathbf{r})\, d\mathbf{r} \\
    &= \pi_0 \int_{\{\Lambda :\, \Lambda > \eta\}} p_{\Lambda|\mathcal{M}_0}(\Lambda)\, d\Lambda + \pi_1 \int_{\{\Lambda :\, \Lambda < \eta\}} p_{\Lambda|\mathcal{M}_1}(\Lambda)\, d\Lambda \\
    &= \pi_0 \int_{\{\Upsilon :\, \Upsilon > \gamma\}} p_{\Upsilon|\mathcal{M}_0}(\Upsilon)\, d\Upsilon + \pi_1 \int_{\{\Upsilon :\, \Upsilon < \gamma\}} p_{\Upsilon|\mathcal{M}_1}(\Upsilon)\, d\Upsilon
\end{aligned}$$
These expressions point out that the likelihood ratio $\Lambda$ and the sufficient statistic $\Upsilon$ can each be considered a function of the observations $\mathbf{r}$; hence, they are random variables and have probability densities for each model. When the likelihood ratio is non-monotonic, the first expression is most difficult to evaluate. When monotonic, the middle expression often proves to be the most difficult. No matter how it is calculated, no other decision rule can yield a smaller probability of error. This statement is obvious, as we minimized the probability of error implicitly by maximizing the probability of being correct because $P_e = 1 - P_c$.
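As a sanity check on these expressions, the following sketch continues the illustrative Gaussian setup above (all parameter values remain assumptions, not from the text). For equal-variance Gaussian models the sufficient statistic is the observation itself, so the third form of $P_e$ reduces to two Gaussian tail integrals; the sketch evaluates that closed form and compares it with a Monte Carlo estimate of the error rate of the threshold test.

```python
import numpy as np
from scipy.stats import norm

# Same illustrative Gaussian setup as above (assumed values).
pi0, pi1 = 0.7, 0.3
m0, m1, sigma = 0.0, 1.0, 1.0

# For equal-variance Gaussians the likelihood ratio test reduces to
# comparing the observation r against the scalar threshold gamma.
gamma = sigma**2 * np.log(pi0 / pi1) / (m1 - m0) + (m0 + m1) / 2

# Analytic P_e: saying M1 when M0 is true, plus saying M0 when M1 is true.
pe_exact = pi0 * (1 - norm.cdf((gamma - m0) / sigma)) \
         + pi1 * norm.cdf((gamma - m1) / sigma)

# Monte Carlo check: draw the true model from the priors, then the data.
rng = np.random.default_rng(1)
n = 200_000
truth = rng.random(n) < pi1                      # True where M1 holds
r = np.where(truth, rng.normal(m1, sigma, n), rng.normal(m0, sigma, n))
decide = r > gamma                               # the threshold test
pe_mc = np.mean(decide != truth)

print(f"analytic P_e = {pe_exact:.4f}, Monte Carlo P_e = {pe_mc:.4f}")
```

The two estimates should agree to within Monte Carlo fluctuation, and, consistent with the optimality claim above, replacing the threshold test with any other rule on the simulated data can only raise the empirical error rate.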





Source:  OpenStax, Elements of detection theory. OpenStax CNX. Jun 22, 2008 Download for free at http://cnx.org/content/col10531/1.5