
Introduction

Sufficient statistics arise in nearly every aspect of statistical inference. It is important to understand them before progressing to areas such as hypothesis testing and parameter estimation.

Suppose we observe an $N$-dimensional random vector $X$, characterized by the density or mass function $f_\theta(x)$, where $\theta$ is a $p$-dimensional vector of parameters to be estimated. The functional form of $f_\theta(x)$ is assumed known. The parameter $\theta$ completely determines the distribution of $X$. Conversely, a measurement $x$ of $X$ provides information about $\theta$ through the probability law $f_\theta(x)$.

Suppose $X = (X_1, X_2)^T$, where the $X_i \sim \mathcal{N}(\theta, 1)$ are IID. Here $\theta$ is a scalar parameter specifying the mean. The distribution of $X$ is determined by $\theta$ through the density

$$f_\theta(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(x_1 - \theta)^2}{2}} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(x_2 - \theta)^2}{2}}$$

On the other hand, if we observe $x = (100, 102)^T$, then we may safely assume that $\theta = 0$ is highly unlikely.
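As a quick numerical check (a sketch of my own, not part of the original example), the following Python snippet evaluates the log-likelihood $\log f_\theta(x)$ at $x = (100, 102)^T$ for $\theta = 0$ and for $\theta = 101$; the enormous gap shows why $\theta = 0$ can be ruled out:

    import numpy as np
    from scipy.stats import norm

    x = np.array([100.0, 102.0])

    def log_likelihood(theta):
        # log f_theta(x): sum of independent N(theta, 1) log-densities
        return norm.logpdf(x, loc=theta, scale=1.0).sum()

    print(log_likelihood(0.0))    # about -10204: theta = 0 is effectively ruled out
    print(log_likelihood(101.0))  # about -2.8: theta near the sample mean fits well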

The $N$-dimensional observation $X$ carries information about the $p$-dimensional parameter vector $\theta$. If $p < N$, one may ask the following question: Can we compress $x$ into a low-dimensional statistic without any loss of information? Does there exist some function $t = T(x)$, where the dimension of $t$ is $M < N$, such that $t$ carries all the useful information about $\theta$?

If so, for the purpose of studying $\theta$ we could discard the raw measurements $x$ and retain only the low-dimensional statistic $t$. We call $t$ a sufficient statistic. The following definition captures this notion precisely:

Let $X_1, \ldots, X_M$ be a random sample, governed by the density or probability mass function $f_\theta(x)$. The statistic $T(x)$ is sufficient for $\theta$ if the conditional distribution of $x$, given $T(x) = t$, is independent of $\theta$. Equivalently, the functional form of $f_\theta(x \mid t)$ does not involve $\theta$.
How should we interpret this definition? Here are some possibilities:

1. Let $f_\theta(x, t)$ denote the joint density or probability mass function on $(X, T(X))$. If $T(X)$ is a sufficient statistic for $\theta$, then

$$f_\theta(x) = f_\theta(x, T(x)) = f_\theta(x \mid t)\, f_\theta(t) = f(x \mid t)\, f_\theta(t)$$

where the last equality holds because, by sufficiency, the conditional density of $x$ given $t$ does not depend on $\theta$. Therefore, the parametrization of the probability law for the measurement $x$ is manifested in the parametrization of the probability law for the statistic $T(x)$.

2. Given $t = T(x)$, full knowledge of the measurement $x$ brings no additional information about $\theta$. Thus, we may discard $x$ and retain only the compressed statistic $t$.

3. Any inference strategy based on $f_\theta(x)$ may be replaced by a strategy based on $f_\theta(t)$.

Binary information source

(Scharf, pp. 78) Suppose a binary information source emits a sequence of binary (0 or 1) valued, independent variables $x_1, \ldots, x_N$. Each binary symbol may be viewed as a realization of a Bernoulli trial: $x_n \sim \text{Bernoulli}(\theta)$, IID. The parameter $\theta \in (0, 1)$ is to be estimated.

The probability mass function for the random sample $x = (x_1, \ldots, x_N)^T$ is

$$f_\theta(x) = \prod_{n=1}^N f_\theta(x_n) = \prod_{n=1}^N \theta^{x_n} (1 - \theta)^{1 - x_n} = \theta^k (1 - \theta)^{N - k}$$

where $k = \sum_{n=1}^N x_n$ is the number of 1's in the sample.
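The algebra above can be spot-checked numerically. The following sketch (an illustration assumed here, not from the source) draws one Bernoulli sample and confirms that the product form of $f_\theta(x)$ equals the compact form $\theta^k (1 - \theta)^{N - k}$:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, N = 0.3, 10
    x = rng.binomial(1, theta, size=N)   # N Bernoulli(theta) draws
    k = x.sum()                          # number of 1's: the candidate statistic

    product_form = np.prod(theta**x * (1 - theta)**(1 - x))
    compact_form = theta**k * (1 - theta)**(N - k)
    print(product_form, compact_form)    # equal up to floating-point rounding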

We will show that $k$ is a sufficient statistic for $\theta$. This will entail showing that the conditional probability mass function $f_\theta(x \mid k)$ does not depend on $\theta$.

The distribution of the number of ones in $N$ independent Bernoulli trials is binomial:

$$f_\theta(k) = \binom{N}{k}\, \theta^k (1 - \theta)^{N - k}$$

Next, consider the joint distribution of $(x, k)$. Since $k = \sum_{n=1}^N x_n$ is a deterministic function of $x$, we have $f_\theta(x, k) = f_\theta(x)$. Thus, the conditional probability may be written

$$f_\theta(x \mid k) = \frac{f_\theta(x, k)}{f_\theta(k)} = \frac{f_\theta(x)}{f_\theta(k)} = \frac{\theta^k (1 - \theta)^{N - k}}{\binom{N}{k}\, \theta^k (1 - \theta)^{N - k}} = \frac{1}{\binom{N}{k}}$$
This shows that $k$ is indeed a sufficient statistic for $\theta$. The $N$ values $x_1, \ldots, x_N$ can be replaced by the quantity $k$ without losing information about $\theta$.
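The result $f_\theta(x \mid k) = 1 / \binom{N}{k}$ can also be seen empirically. The Monte Carlo sketch below (my own illustration, with arbitrary parameter choices) shows that, conditioned on $k = 2$ ones out of $N = 5$ trials, every binary sequence occurs with probability close to $1 / \binom{5}{2} = 0.1$, no matter what $\theta$ is:

    from collections import Counter
    from math import comb
    import numpy as np

    N, k_target = 5, 2
    print("1 / C(N, k) =", 1 / comb(N, k_target))       # 0.1

    for theta in (0.2, 0.7):                            # two very different thetas
        rng = np.random.default_rng(1)
        counts = Counter()
        for _ in range(100_000):
            x = tuple(rng.binomial(1, theta, size=N))
            if sum(x) == k_target:
                counts[x] += 1
        total = sum(counts.values())
        # empirical conditional probabilities: all near 0.1 for both thetas
        print(theta, sorted(round(c / total, 3) for c in counts.values()))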

In the previous example, suppose we wish to store in memory the information we possess about $\theta$. Compare the savings, in terms of bits, that we gain by storing the sufficient statistic $k$ instead of the full sample $x_1, \ldots, x_N$.
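One way to set up this comparison (a sketch under the assumption that each $x_n$ costs one bit and $k$ is stored as a plain unsigned integer) is to weigh $N$ bits for the full sample against $\lceil \log_2(N + 1) \rceil$ bits for $k$, since $k$ takes one of the $N + 1$ values $\{0, \ldots, N\}$:

    from math import ceil, log2

    for N in (8, 100, 10_000):
        bits_full = N                   # one bit per binary observation x_n
        bits_stat = ceil(log2(N + 1))   # k takes one of N + 1 values {0, ..., N}
        print(f"N = {N}: sample needs {bits_full} bits, k needs {bits_stat} bits")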

Determining sufficient statistics

In the example above, we had to guess the sufficient statistic and work out the conditional probability by hand. In general, this is a tedious way to go about finding sufficient statistics. Fortunately, spotting sufficient statistics can be made much easier by the Fisher-Neyman Factorization Theorem.

Uses of sufficient statistics

Sufficient statistics have many uses in statistical inference problems. In hypothesis testing, the Likelihood Ratio Test can often be reduced to a sufficient statistic of the data. In parameter estimation, the Minimum Variance Unbiased Estimator of a parameter can be characterized by sufficient statistics and the Rao-Blackwell Theorem .

Minimality and completeness

Minimal sufficient statistics are, roughly speaking, sufficient statistics that cannot be compressed any more without losing information about the unknown parameter. Completeness is a technical characterization of sufficient statistics that allows one to prove minimality. These topics are covered in detail in this module.

Further examples of sufficient statistics may be found in the module on the Fisher-Neyman Factorization Theorem .





Source:  OpenStax, Signal and information processing for sonar. OpenStax CNX. Dec 04, 2007 Download for free at http://cnx.org/content/col10422/1.5
