An introduction to the concept of typical sequences, which lie at the heart of source coding. The idea of typical sequences leads to Shannon's source-coding theorem.

As mentioned earlier, how much a source can be compressed should be related to its entropy. In 1948, Claude E. Shannon introduced three theorems and developed very rigorous mathematics for digital communications. In one of the three theorems, Shannon relates entropy to the minimum number of bits per second required to represent a source without much loss (or distortion).

Consider a source that is modeled by a discrete-time and discrete-valued random process $X_1, X_2, \dots, X_n, \dots$ where $X_i \in \{a_1, a_2, \dots, a_N\}$, and define $p_{X_i}(x_i = a_j) = p_j$ for $j = 1, 2, \dots, N$, where it is assumed that $X_1, X_2, \dots, X_n$ are mutually independent and identically distributed.

Consider a sequence of length $n$:

$$\mathbf{X} = (X_1, X_2, \dots, X_n)$$
The symbol $a_1$ occurs with probability $p_1$. Therefore, in a sequence of length $n$, $a_1$ will, on average, appear $np_1$ times with high probability if $n$ is very large.

Therefore,

$$P(\mathbf{X} = \mathbf{x}) = p_{X_1}(x_1)\, p_{X_2}(x_2) \cdots p_{X_n}(x_n)$$

$$P(\mathbf{X} = \mathbf{x}) \approx p_1^{np_1}\, p_2^{np_2} \cdots p_N^{np_N} = \prod_{i=1}^{N} p_i^{np_i}$$

where $p_i = P(X_j = a_i)$ for all $j$ and for all $i$.

A typical sequence $\mathbf{X}$ may look like

$$\mathbf{X} = (a_2, a_1, \dots, a_N, \dots, a_2, a_5, \dots, a_1, a_N, a_6)$$

where $a_i$ appears $np_i$ times with large probability. This is referred to as a typical sequence. The probability of $\mathbf{X}$ being a typical sequence is

$$P(\mathbf{X} = \mathbf{x}) \approx \prod_{i=1}^{N} p_i^{np_i} = \prod_{i=1}^{N} 2^{\log_2 p_i^{np_i}} = \prod_{i=1}^{N} 2^{np_i \log_2 p_i} = 2^{n \sum_{i=1}^{N} p_i \log_2 p_i} = 2^{-nH(X)}$$

where $H(X)$ is the entropy of the random variables $X_1, X_2, \dots, X_n$.

For large $n$, almost all the output sequences of length $n$ of the source are equally probable, each with probability approximately $2^{-nH(X)}$. These are the typical sequences. The probability of the nontypical sequences is negligible. There are $N^n$ different sequences of length $n$ with an alphabet of size $N$, and the total probability of the typical sequences is almost 1.

$$\sum_{k=1}^{\#\text{ of typical sequences}} 2^{-nH(X)} \approx 1$$

Therefore, the number of typical sequences is approximately $2^{nH(X)}$.
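To see these quantities concretely, here is a minimal numerical sketch in Python; the distribution p and length n are assumed example values, not taken from the text. It computes $H(X)$ and checks that a sequence containing each $a_i$ exactly $np_i$ times has probability $2^{-nH(X)}$.

import math

# Assumed example distribution and sequence length (illustrative only)
p = [0.5, 0.25, 0.125, 0.125]
n = 100

# Entropy H(X) = -sum_i p_i log2 p_i
H = -sum(pi * math.log2(pi) for pi in p)

# A typical sequence contains a_i about n*p_i times, so its probability is
# prod_i p_i^(n p_i) = 2^(-n H)
p_typical = math.prod(pi ** (n * pi) for pi in p)

print(H)                           # 1.75 bits
print(p_typical, 2 ** (-n * H))    # both approximately 2^-175, about 2.1e-53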

Consider a source with alphabet $\{A, B, C, D\}$ with probabilities $\{\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{8}\}$. Assume $X_1, X_2, \dots, X_8$ is an independent and identically distributed sequence with $X_i \in \{A, B, C, D\}$ with the above probabilities.

$$H(X) = -\frac{1}{2}\log_2\frac{1}{2} - \frac{1}{4}\log_2\frac{1}{4} - \frac{1}{8}\log_2\frac{1}{8} - \frac{1}{8}\log_2\frac{1}{8} = \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{3}{8} = \frac{4 + 4 + 6}{8} = \frac{14}{8} \text{ bits}$$

The number of typical sequences of length 8 is

$$2^{8 \cdot \frac{14}{8}} = 2^{14}$$

The number of nontypical sequences is $4^8 - 2^{14} = 2^{16} - 2^{14} = 2^{14}(4 - 1) = 3 \cdot 2^{14}$.

Examples of typical sequences include those with $A$ appearing $8 \cdot \frac{1}{2} = 4$ times, $B$ appearing $8 \cdot \frac{1}{4} = 2$ times, etc.: $\{A, D, B, B, A, A, C, A\}$, $\{A, A, A, A, C, D, B, B\}$, and many more.

Examples of nontypical sequences of length 8: $\{D, D, B, C, C, A, B, D\}$, $\{C, C, C, C, C, B, C, C\}$, and many more. Indeed, these definitions and arguments are valid only when $n$ is very large. The probability that a source output lies in the set of typical sequences approaches 1 as $n \to \infty$, and the probability that it lies in the set of nontypical sequences approaches 0 as $n \to \infty$.
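For this small example one can actually enumerate all $4^8 = 65536$ sequences and test each against the typicality condition. The Python sketch below is illustrative; the tolerance eps is an assumed parameter (the text's argument is asymptotic and fixes no tolerance). For $n = 8$ the typical set's total probability is still well below 1, consistent with the caveat that these arguments hold for very large $n$.

import itertools, math

probs = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8}
n, H = 8, 14 / 8                   # sequence length and source entropy in bits

eps = 0.25                         # assumed tolerance on the per-symbol rate
count, total_prob = 0, 0.0

for seq in itertools.product(probs, repeat=n):
    p = math.prod(probs[s] for s in seq)      # P(X = x) for the i.i.d. source
    if abs(-math.log2(p) / n - H) <= eps:     # weak-typicality test
        count += 1
        total_prob += p

print(count, total_prob)           # total_prob is noticeably below 1 at n = 8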


The essence of source coding or data compression is that, as $n \to \infty$, nontypical sequences never appear as the output of the source. Therefore, one only needs to be able to represent typical sequences as binary codes and can ignore nontypical sequences. Since there are only $2^{nH(X)}$ typical sequences of length $n$, it takes $nH(X)$ bits on average to represent them. Thus, on average, it takes $H(X)$ bits per source output to represent a simple source that produces independent and identically distributed outputs.

Shannon's source-coding theorem

A source that produces independent and identically distributed random variables with entropy $H$ can be encoded with arbitrarily small error probability at any rate $R$ (in bits per source output) if $R > H$. Conversely, if $R < H$, the error probability will be bounded away from zero, independent of the complexity of the coder and decoder.

The source coding theorem proves the existence of source coding techniques that achieve rates close to the entropy, but it does not provide any algorithms or ways to construct such codes.

If the source is not i.i.d. (independent and identically distributed), but is stationary with memory, then a similar theorem applies with the entropy $H(X)$ replaced by the entropy rate

$$H = \lim_{n \to \infty} H(X_n \mid X_1 X_2 \cdots X_{n-1})$$

In the case of a source with memory, the more outputs the source produces, the more one knows about the source and the more one can compress.
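For a stationary first-order Markov source, for instance, the entropy rate reduces to $H = \sum_i \pi_i H(X_n \mid X_{n-1} = a_i)$, where $\pi$ is the stationary distribution. A minimal Python sketch with an assumed two-symbol transition matrix (chosen only for illustration):

import math

# Assumed transition matrix: P[i][j] = P(next symbol = j | current symbol = i)
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution (closed form for two states): pi solves pi = pi P
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1 - pi0]

# Entropy rate: stationary-weighted conditional entropy of each row
H_rate = -sum(pi[i] * sum(p * math.log2(p) for p in P[i] if p > 0)
              for i in range(2))

# Entropy of the marginal distribution, i.e., a memoryless model of the same source
H_iid = -sum(p * math.log2(p) for p in pi)

print(H_rate, H_iid)   # about 0.56 vs 0.65 bits: memory lowers the rate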

The English language has 26 letters; with the space character it becomes an alphabet of size 27. If modeled as a memoryless source (no dependency between the letters in a word), the entropy is $H(X) \approx 4.03$ bits/letter.

If the dependency between letters in a text is captured in a model, the entropy rate can be derived to be $H \approx 1.3$ bits/letter. Note that a non-information-theoretic representation of a text may require 5 bits/letter, since $2^5 = 32$ is the closest power of 2 to 27. Shannon's results indicate that there may be a compression algorithm with a rate of 1.3 bits/letter.
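A quick arithmetic check of these figures (a sketch; the 1.3 bits/letter value is the modeled entropy rate quoted above):

import math

alphabet_size = 27                                # 26 letters plus the space
fixed_bits = math.ceil(math.log2(alphabet_size))  # 5 bits for a fixed-length code
entropy_rate = 1.3                                # modeled entropy rate, bits/letter

# Potential compression factor relative to the plain 5-bit representation
print(fixed_bits, fixed_bits / entropy_rate)      # 5, roughly 3.8x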


Although Shannon's results are not constructive, there are a number of source coding algorithms for discrete-time, discrete-valued sources that come close to Shannon's bound. One such algorithm is the Huffman source coding algorithm. Another is the Lempel-Ziv algorithm.

Huffman codes and Lempel-Ziv codes apply to compression problems where the source produces discrete-time and discrete-valued outputs. For cases where the source is analog, there are powerful compression algorithms that specify all the steps from sampling and quantization to binary representation. These are referred to as waveform coders. JPEG, MPEG, and vocoders are a few examples for image, video, and voice, respectively.
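Returning to the four-symbol example above: because its probabilities are negative powers of 2, a Huffman code achieves the entropy exactly. The sketch below hard-codes one valid Huffman codebook (codeword lengths 1, 2, 3, 3; other equally good assignments exist) rather than running the full algorithm:

import math

probs = {'A': 1/2, 'B': 1/4, 'C': 1/8, 'D': 1/8}

# One valid prefix-free Huffman code for this source
code = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}

# Average codeword length: sum_i p_i * length(codeword_i)
avg_len = sum(probs[s] * len(code[s]) for s in probs)

# Source entropy H(X)
H = -sum(p * math.log2(p) for p in probs.values())

print(avg_len, H)   # both equal 1.75 = 14/8 bits per symbol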

Source: OpenStax, Digital communication systems. OpenStax CNX. Jan 22, 2004. Download for free at http://cnx.org/content/col10134/1.3