Coverings of K by balls of radius ϵ .

Given ϵ > 0 and the compact set K, consider all coverings of K by balls of radius ϵ, as illustrated in the figure. In other words,

K ⊆ ⋃_{i=1}^{N} b(f_i, ϵ).
Let N_ϵ := inf{N : K admits such a cover by N balls}. N_ϵ(K) is called the covering number of K. Since it depends on both X and K, we write it as N_ϵ = N_ϵ(K, X).
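As a concrete sketch of this definition, the following code greedily builds an ϵ-cover of a finite point set on the real line. The set K, the greedy strategy, and the function name are illustrative assumptions, not from the source; a greedy cover only gives an upper bound on the true covering number N_ϵ, which is the infimum over all covers.

```python
def greedy_cover(points, eps):
    """Greedily build an eps-cover of a finite set of reals.

    Each uncovered point in turn becomes the center of a new ball of
    radius eps.  The number of centers returned is an UPPER BOUND on
    the covering number N_eps(K); the true N_eps minimizes over all
    possible covers.
    """
    uncovered = sorted(points)
    centers = []
    while uncovered:
        c = uncovered[0]           # pick an uncovered point as a new center
        centers.append(c)
        # discard every point now covered by the ball b(c, eps)
        uncovered = [p for p in uncovered if abs(p - c) > eps]
    return centers

# Illustrative finite "K": three balls of radius 0.25 suffice here.
K = [0.0, 0.1, 0.2, 0.5, 0.9, 1.0]
centers = greedy_cover(K, 0.25)    # → [0.0, 0.5, 0.9]
```

Every point of K lies within ϵ of some returned center, so the centers do form a valid cover.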

Kolmogorov entropy

The Kolmogorov entropy, denoted by H_ϵ(K, X), of the compact set K in X is defined as the logarithm of the covering number:

H_ϵ(K, X) = log N_ϵ(K, X).

The Kolmogorov entropy solves our problem of optimal encoding in the sense of the following theorem.

For any compact set K ⊂ X, we have n_ϵ(K, X) = ⌈H_ϵ(K, X)⌉, where ⌈·⌉ is the ceiling function.

Sketch: We can define an encoder-decoder pair as follows. To encode: given f ∈ K, simply specify a ball of the cover that contains it. Because the number of balls is N_ϵ(K, X), we need at most ⌈log N_ϵ(K, X)⌉ bits to specify any such ball.

To decode: Just take the center of the ball specified by the bitstream.

It is now easy to see that this encoder-decoder pair is optimal in either of the senses given above.
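The encoder-decoder sketch above can be illustrated as follows, assuming a list of ball centers is already available (the centers, ϵ, and test point below are hypothetical, and the logarithm is taken base 2 since we count bits). Encoding records a ball index; decoding returns that ball's center, which lies within ϵ of the original point.

```python
import math

def encode(f, centers):
    """Encode f as the index of the nearest ball center.

    If the centers form an eps-cover of K, the nearest center is
    within eps of f, so this index identifies a ball containing f.
    """
    return min(range(len(centers)), key=lambda i: abs(f - centers[i]))

def decode(index, centers):
    """Decode a received index back to the corresponding ball center."""
    return centers[index]

# Illustrative eps-cover of some compact K (assumed, not from the source).
centers = [0.0, 0.5, 0.9]
eps = 0.25

# ceil(log2 N) bits suffice to transmit any index in {0, ..., N-1}.
bits_needed = math.ceil(math.log2(len(centers)))   # → 2

f = 0.45
f_hat = decode(encode(f, centers), centers)
assert abs(f - f_hat) <= eps   # reconstruction error is at most eps
```

The distortion of this scheme is at most ϵ by construction, and the bit budget matches the ⌈log N_ϵ⌉ count in the sketch above.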

The above encoder is not practical. However, the Kolmogorov entropy tells us the best performance we can expect from any encoder-decoder pair. Note that the Kolmogorov entropy is defined in a deterministic setting; it is the analogue of the Shannon entropy, which is defined in a stochastic setting.

Source:  OpenStax, Compressive sensing. OpenStax CNX. Sep 21, 2007 Download for free at http://cnx.org/content/col10458/1.1
