
Perhaps the ultimate non-parametric detector makes no assumptions about the observations' probability distribution under either model. Here, we assume that data representative of each model are available to train the detection algorithm. One approach uses artificial neural networks, which are difficult to analyze in terms of both optimality and performance. When the observations are discrete-valued, a provably optimal detection algorithm (Gutman) can be derived using the theory of types.

For a two-model evaluation problem, let $\mathbf{r}_T$ (length $L_T$) denote training data representative of some unknown probability distribution $P$. We assume that the data have statistically independent components. To derive a non-parametric detector, form a generalized likelihood ratio to distinguish whether a set of observations $\mathbf{r}$ (length $L$) has the same distribution as the training data or a different one $Q$.
$$\Lambda(\mathbf{r}) = \frac{\max_{P,Q} P(\mathbf{r}_T)\,Q(\mathbf{r})}{\max_{P} P(\mathbf{r}_T)\,P(\mathbf{r})}$$
Because a type is the maximum likelihood estimate of the probability distribution (see Histogram Estimators), we simply substitute types for the training data and observations' probability distributions into the likelihood ratio. The probability of a set of observations having a probability distribution identical to its type equals $e^{-L\mathcal{H}(\hat{P})}$, where $\mathcal{H}(\cdot)$ denotes the entropy of the type. Thus, the log likelihood ratio becomes
$$\log\Lambda(\mathbf{r}) = -L_T\,\mathcal{H}(\hat{P}_{\mathbf{r}_T}) - L\,\mathcal{H}(\hat{P}_{\mathbf{r}}) + (L_T+L)\,\mathcal{H}(\hat{P}_{\mathbf{r}_T,\mathbf{r}})$$
The denominator term means that the training and observed data are lumped together to form a type. This type equals the linear combination of the types for the training and observed data weighted by their relative lengths.
$$\hat{P}_{\mathbf{r}_T,\mathbf{r}} = \frac{L_T\,\hat{P}_{\mathbf{r}_T} + L\,\hat{P}_{\mathbf{r}}}{L_T+L}$$
Returning to the log likelihood ratio, we have that
$$\log\Lambda(\mathbf{r}) = -L_T\,\mathcal{H}(\hat{P}_{\mathbf{r}_T}) - L\,\mathcal{H}(\hat{P}_{\mathbf{r}}) + (L_T+L)\,\mathcal{H}\!\left(\frac{L_T\,\hat{P}_{\mathbf{r}_T} + L\,\hat{P}_{\mathbf{r}}}{L_T+L}\right)$$
Note that the last term equals
$$(L_T+L)\,\mathcal{H}\!\left(\frac{L_T\,\hat{P}_{\mathbf{r}_T} + L\,\hat{P}_{\mathbf{r}}}{L_T+L}\right) = -\sum_a \bigl(L_T\,\hat{P}_{\mathbf{r}_T}(a) + L\,\hat{P}_{\mathbf{r}}(a)\bigr)\log\frac{L_T\,\hat{P}_{\mathbf{r}_T}(a) + L\,\hat{P}_{\mathbf{r}}(a)}{L_T+L}$$
which means it can be combined with the other terms to yield the simple expression for the log likelihood ratio.

$$\log\Lambda(\mathbf{r}) = L_T\,\mathcal{D}\bigl(\hat{P}_{\mathbf{r}_T}\,\|\,\hat{P}_{\mathbf{r}_T,\mathbf{r}}\bigr) + L\,\mathcal{D}\bigl(\hat{P}_{\mathbf{r}}\,\|\,\hat{P}_{\mathbf{r}_T,\mathbf{r}}\bigr)$$
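To make the computation concrete, here is a minimal Python sketch of this statistic, assuming the data are discrete-valued and coded as integers $0,\ldots,$ alphabet_size$-1$; the helper names (type_estimate, kl_distance, type_log_likelihood_ratio) are illustrative choices, not notation from the text.

```python
import numpy as np

def type_estimate(x, alphabet_size):
    """Empirical histogram (type) of a record x with values in {0, ..., alphabet_size - 1}."""
    return np.bincount(x, minlength=alphabet_size) / len(x)

def kl_distance(p, q):
    """Kullback-Leibler distance D(p || q); terms with p(a) = 0 contribute nothing."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def type_log_likelihood_ratio(r_train, r_obs, alphabet_size):
    """log Lambda = L_T * D(type_train || type_pooled) + L * D(type_obs || type_pooled)."""
    L_T, L = len(r_train), len(r_obs)
    P_train = type_estimate(r_train, alphabet_size)
    P_obs = type_estimate(r_obs, alphabet_size)
    # The pooled type is the length-weighted combination of the two types.
    P_pooled = (L_T * P_train + L * P_obs) / (L_T + L)
    return L_T * kl_distance(P_train, P_pooled) + L * kl_distance(P_obs, P_pooled)
```

Because the pooled type places positive probability wherever either individual type does, the Kullback-Leibler distances in this expression are always finite.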

When the training data and the observed data are drawn from the same distribution, the Kullback-Leibler distances will be small. When the distributions differ, the distances will be larger. Defining $\mathcal{M}_0$ to be the model in which the training data and observations have the same distribution and $\mathcal{M}_1$ the model in which they don't, Gutman showed that when we use the decision rule
$$\frac{1}{L}\log\Lambda(\mathbf{r}) \;\mathop{\gtrless}_{\mathcal{M}_0}^{\mathcal{M}_1}\; \eta,$$
its false-alarm probability has an exponential rate at least as large as the threshold and its miss probability is the smallest among all decision rules based on training data:
$$\lim_{L\to\infty} -\frac{1}{L}\log P_F \geq \eta \quad\text{and}\quad P_M = \text{minimum}.$$
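Continuing the sketch above, a usage example of this decision rule might look as follows; the data, the alphabet size, and the threshold value are purely illustrative.

```python
rng = np.random.default_rng(0)
alphabet_size = 8

# Illustrative data: both records drawn from the same (here, uniform) distribution,
# so the normalized statistic should fall below a modest threshold and M0 be chosen.
r_train = rng.integers(0, alphabet_size, size=10000)   # training record, length L_T
r_obs = rng.integers(0, alphabet_size, size=1000)      # observations, length L

eta = 0.05  # illustrative threshold; it lower-bounds the false-alarm exponent
statistic = type_log_likelihood_ratio(r_train, r_obs, alphabet_size) / len(r_obs)
model = 1 if statistic > eta else 0   # 1: distributions differ, 0: same distribution
```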

We can extend these results to the $K$-model case if we have training data $\mathbf{r}_i$ (each of length $L_T$) that represent model $\mathcal{M}_i$, $i = 0, \ldots, K-1$. Given observed data $\mathbf{r}$ (length $L$), we calculate the log likelihood function given above for each model to determine whether the observations closely resemble the tested training data or not. More precisely, define the sufficient statistics $\Lambda_i$ according to
$$\Lambda_i = \mathcal{D}\bigl(\hat{P}_{\mathbf{r}}\,\|\,\hat{P}_{\mathbf{r},\mathbf{r}_i}\bigr) + \frac{L_T}{L}\,\mathcal{D}\bigl(\hat{P}_{\mathbf{r}_i}\,\|\,\hat{P}_{\mathbf{r},\mathbf{r}_i}\bigr) - \eta$$
Ideally, this statistic would be negative for one of the training sets (matching it) and positive for all of the others (not matching them). However, we could also have the observations matching more than one training set. In all such cases, we define a rejection region $\Re_?$ similar to what we defined in sequential model evaluation. Thus, we define the $i$th decision region $\Re_i$ according to $\Lambda_i < 0$ and $\Lambda_j > 0$, $j \neq i$, and the rejection region as the complement of $\bigcup_{i=0}^{K-1}\Re_i$. Note that all decision regions depend on the value of $\eta$, a number we must choose. Regardless of the value chosen, the probability of confusing models - choosing some model other than the true one - has an exponential rate that is at least $\eta$ for all models. Because of the presence of a rejection region, another kind of "error" is to not choose any model. This decision rule is optimal in the sense that no other training-data-based decision rule has a smaller rejection region than the type-based one.
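A sketch of this $K$-model rule, reusing the hypothetical helpers defined earlier: it evaluates $\Lambda_i$ for every training set and returns the index of the unique matching model, or signals a rejection otherwise. The return convention (None for the rejection region) is an illustrative choice.

```python
def classify(training_sets, r_obs, alphabet_size, eta):
    """Type-based K-model classifier with a rejection region.

    training_sets: list of K training records, one per model.
    Returns the unique index i with Lambda_i < 0 (and Lambda_j >= 0 for j != i),
    or None to signal a rejection when no single model matches.
    """
    L = len(r_obs)
    P_obs = type_estimate(r_obs, alphabet_size)
    lambdas = []
    for r_i in training_sets:
        L_T = len(r_i)
        P_i = type_estimate(r_i, alphabet_size)
        # Pooled type of the observations and the i-th training record.
        P_pooled = (L_T * P_i + L * P_obs) / (L_T + L)
        lam = kl_distance(P_obs, P_pooled) + (L_T / L) * kl_distance(P_i, P_pooled) - eta
        lambdas.append(lam)
    matches = [i for i, lam in enumerate(lambdas) if lam < 0]
    return matches[0] if len(matches) == 1 else None
```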

Because it controls the exponential rate of confusing models, we would like $\eta$ to be as large as possible. However, the rejection region grows as $\eta$ increases; choosing too large a value could make virtually all decisions rejections. What we want to ensure is that $\lim_{L\to\infty}\Pr[\mathbf{r}\in\Re_?] = 0$. Obtaining this behavior requires that $\lim_{L\to\infty} L/L_T = 0$: as the length of the observations increases, so must the size of the training set. In summary, for some $i$,
$$\Bigl(\lim_{L\to\infty}\tfrac{L}{L_T} = 0\Bigr)\ \text{and}\ (\eta > \eta_0) \;\Longrightarrow\; \lim_{L\to\infty}\Pr[\mathbf{r}\in\Re_?\mid\mathcal{M}_i] = 1,$$
while for all $i$,
$$\Bigl(\lim_{L\to\infty}\tfrac{L}{L_T} = 0\Bigr)\ \text{and}\ (\eta < \eta_0) \;\Longrightarrow\; \lim_{L\to\infty} -\frac{1}{L}\log\Pr[\mathbf{r}\in\Re_?\mid\mathcal{M}_i] > 0.$$
The critical value $\eta_0$ depends on the true distributions underlying the models. The exponential rate of the rejection probability also depends on the true distributions. These results mean that if sufficient training data are available and the decision threshold is not too large, then we can perform optimal detection based entirely on data! As the number of observations increases (and the amount of training data as well), the critical threshold $\eta_0$ becomes the Kullback-Leibler distance between the unknown models. In other words, the type-based detector becomes optimal!

Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1