Describes the Noisy Channel Coding Theorem.

As the block length becomes larger, more error correction will be needed. Do codes exist that can correct all errors? Perhaps the crowning achievement of Claude Shannon's creation of information theory answers this question. His result comes in two complementary forms: the Noisy Channel Coding Theorem and its converse.

Noisy channel coding theorem

Let E denote the efficiency of an error-correcting code: the ratio of the number of data bits to the total number of bits used to represent them. If the efficiency is less than the capacity of the digital channel, an error-correcting code exists that has the property that, as the length of the code increases, the probability of an error occurring in the decoded block approaches zero.

$$E < C \;\Rightarrow\; \lim_{N \rightarrow \infty} \Pr[\text{block error}] = 0$$
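
To make the theorem's setting concrete, here is a minimal Monte Carlo sketch in Python of the (7,4) Hamming code (discussed later in this section) operating over a binary symmetric channel. The generator and parity-check matrices are one common systematic convention, and the crossover probability pe = 0.05 is an illustrative choice, not a value from the text; a fixed-length code like this only reduces block errors, while the theorem's vanishing error probability requires growing the block length with E held below C.

import numpy as np

# (7,4) Hamming code in one common systematic form (an assumed convention):
# codeword = [4 data bits | 3 parity bits].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

rng = np.random.default_rng(0)
pe = 0.05                  # illustrative BSC crossover probability
trials = 50_000

# Each nonzero syndrome equals the column of H at the errored position.
syndrome_to_bit = {tuple(H[:, j]): j for j in range(7)}

errors = 0
for _ in range(trials):
    d = rng.integers(0, 2, 4)                 # 4 data bits
    c = d @ G % 2                             # encode into 7 bits
    r = (c + (rng.random(7) < pe)) % 2        # BSC flips each bit w.p. pe
    s = tuple(H @ r % 2)                      # syndrome of the received word
    if any(s):
        r[syndrome_to_bit[s]] ^= 1            # flip the implicated bit
    errors += int(not np.array_equal(r[:4], d))
print(errors / trials)                        # block error rate, about 0.044

For pe = 0.05 the decoded block error rate comes out near 0.044, versus roughly 0.186 for sending the four data bits uncoded, so the code helps but cannot by itself drive the error probability to zero.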

Converse to the noisy channel coding theorem

If $E > C$, the probability of an error in a decoded block must approach one regardless of the code that might be chosen.

$$\lim_{N \rightarrow \infty} \Pr[\text{block error}] = 1$$
These results mean that it is possible to transmit digital information over a noisy channel (one that introduces errors) and receive the information without error if the code is sufficiently inefficient compared to the channel's characteristics. Generally, a channel's capacity changes with the signal-to-noise ratio: as one increases or decreases, so does the other. The capacity measures the overall error characteristics of a channel (the smaller the capacity, the more frequently errors occur), and an overly efficient error-correcting code will not build in enough error-correction capability to counteract channel errors.

This result astounded communication engineers when Shannon published it in 1948. Analog communication always yields a noisy version of the transmitted signal; in digital communication, error correction can be powerful enough to correct all errors as the block length increases. The key for this capability to exist is that the code's efficiency be less than the channel's capacity. For a binary symmetric channel, the capacity is given by

$$C = 1 + p_e \log_2 p_e + (1 - p_e) \log_2 (1 - p_e) \quad \text{bits/transmission}$$
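
The formula is simple to evaluate numerically. Here is a short Python sketch (the function name is mine, not from the text) that implements it and reproduces the expected endpoints: a noiseless channel (pe = 0) carries one bit per transmission, while a channel with pe = 1/2 carries none.

import numpy as np

def bsc_capacity(pe: float) -> float:
    """Capacity in bits/transmission of a binary symmetric channel
    with crossover probability pe, per the formula above."""
    if pe in (0.0, 1.0):       # avoid log2(0); both endpoints give C = 1
        return 1.0
    return 1.0 + pe * np.log2(pe) + (1.0 - pe) * np.log2(1.0 - pe)

print(bsc_capacity(0.0))   # 1.0: a noiseless channel
print(bsc_capacity(0.5))   # 0.0: pure noise carries no information
print(bsc_capacity(0.1))   # about 0.531 bits/transmission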
[link] shows how capacity varies with error probability. For example, our (7,4) Hamming code has an efficiency of $4/7 \approx 0.57$, and codes having the same efficiency but longer block sizes can be used on additive noise channels where the signal-to-noise ratio exceeds 0 dB.
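
As a hedged numerical check on these figures, the snippet below (reusing bsc_capacity and numpy from the sketch above) bisects for the crossover probability at which capacity falls to the Hamming code's efficiency of 4/7, then evaluates the error probability of antipodal (BPSK) signaling at 0 dB under the common assumption pe = Q(sqrt(2 SNR)); the text's SNR convention may differ.

from scipy.stats import norm

E = 4 / 7                    # efficiency of the (7,4) Hamming code

# Capacity decreases monotonically in pe on [0, 1/2], so bisect for the
# threshold crossover probability where C(pe) = E.
lo, hi = 1e-6, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if bsc_capacity(mid) > E:
        lo = mid
    else:
        hi = mid
print(lo)                    # about 0.088

# Assumed BPSK error probability in white Gaussian noise: pe = Q(sqrt(2 SNR)).
snr = 10 ** (0 / 10)                  # 0 dB as a linear ratio
pe_bpsk = norm.sf(np.sqrt(2 * snr))   # Q(x) is the Gaussian tail norm.sf(x)
print(pe_bpsk)                        # about 0.079

Since 0.079 falls below the threshold of roughly 0.088, capacity indeed exceeds 4/7 at 0 dB under these assumptions, consistent with the claim above.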

Capacity of a channel

The capacity per transmission through a binary symmetric channel is plotted as a function of the digital channel's error probability (upper) and as a function of the signal-to-noise ratio for a BPSK signal set (lower).

Source: OpenStax, Fundamentals of Electrical Engineering I. OpenStax CNX. Aug 06, 2008. Download for free at http://legacy.cnx.org/content/col10040/1.9