[Figure: PAM system diagram.]

There are several ways to measure the quality of the system. For instance, the “symbol recovery error”

$$e(kT) = w((k-\delta)T) - m(kT)$$

measures the difference between the message and the soft decision. The average squared error

$$\frac{1}{M}\sum_{k=1}^{M} e^2(kT),$$

gives a measure of the performance of the system. This can be used as in [link] to adjust the parameters of an equalizer when the source message is known. Alternatively, the difference between the message $w(\cdot)$ and the quantized output of the receiver $Q\{m(\cdot)\}$ can be used to measure the “hard decision error”

$$e(kT) = w((k-\delta)T) - Q\{m(kT)\}.$$

The “decision-directed error” replaces this with

$$e(kT) = Q\{m(kT)\} - m(kT),$$

the error between the soft decisions and the associated hard decisions. This error is used in [link] as a way to adjust the parameters in an equalizer when the source message is unknown, as a way of adjusting the phase of the carrier in [link], and as a way of adjusting the symbol timing in [link].
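All three error measures are easy to form once the soft decisions are available. The sketch below is a hypothetical illustration, assuming a 4-PAM alphabet $\{-3,-1,1,3\}$, zero delay $\delta$, and additive Gaussian noise standing in for the whole channel; none of these specifics are fixed by the text.

```python
import numpy as np

# Hypothetical 4-PAM setup: alphabet {-3, -1, 1, 3}, delay delta = 0.
alphabet = np.array([-3, -1, 1, 3])

def quantize(m):
    """Hard decision Q{m}: nearest alphabet symbol to each soft value."""
    return alphabet[np.abs(m[:, None] - alphabet).argmin(axis=1)]

rng = np.random.default_rng(0)
w = rng.choice(alphabet, size=1000)          # transmitted symbols w(kT)
m = w + 0.2 * rng.standard_normal(1000)      # noisy soft decisions m(kT)

e_soft = w - m               # symbol recovery error  w((k-d)T) - m(kT)
e_hard = w - quantize(m)     # hard decision error    w((k-d)T) - Q{m(kT)}
e_dd   = quantize(m) - m     # decision-directed error Q{m(kT)} - m(kT)

avg_sq_err = np.mean(e_soft ** 2)   # average squared error over M symbols
```

With the noise this small, the hard decisions are almost always correct, so the decision-directed error closely tracks the symbol recovery error even though it uses no knowledge of the transmitted message.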

There are other useful indicators of the performance of digital communication receivers. Let $T_b$ be the bit duration (when there are two bits per symbol, $T_b = T/2$). The indicator

$$c(kT_b) = \begin{cases} 1 & \text{if } b((k-\delta)T_b) \neq \hat{b}(kT_b) \\ 0 & \text{if } b((k-\delta)T_b) = \hat{b}(kT_b) \end{cases}$$

counts how many bits have been incorrectly received, and the bit error rate is

$$\mathrm{BER} = \frac{1}{M}\sum_{k=1}^{M} c(kT_b).$$

Similarly, the symbol error rate sums the indicators

$$c(kT) = \begin{cases} 1 & \text{if } w((k-\delta)T) \neq Q\{m(kT)\} \\ 0 & \text{if } w((k-\delta)T) = Q\{m(kT)\}, \end{cases}$$

counting the number of alphabet symbols that were transmitted incorrectly. More subjective or context-dependent measures are also possible, such as the percentage of “typical” listeners who can accurately decipher the output of the receiver.
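As a rough illustration of the two rates, the sketch below assumes a 4-PAM constellation with a Gray bit mapping and zero delay (none of which the text fixes) and simply averages the two indicators:

```python
import numpy as np

# Hypothetical Gray mapping for 4-PAM: symbol -> bit pair (2 bits per symbol).
gray = {-3: (0, 0), -1: (0, 1), 1: (1, 1), 3: (1, 0)}
alphabet = np.array(sorted(gray))

rng = np.random.default_rng(1)
w = rng.choice(alphabet, size=500)              # transmitted symbols w(kT)
m = w + 0.6 * rng.standard_normal(500)          # noisy soft decisions m(kT)
w_hat = alphabet[np.abs(m[:, None] - alphabet).argmin(axis=1)]  # Q{m(kT)}

# Symbol error rate: average of the indicator c(kT)
ser = np.mean(w != w_hat)

# Bit error rate: average of c(kT_b) over the bits behind each symbol
bits = np.array([gray[s] for s in w]).ravel()
bits_hat = np.array([gray[s] for s in w_hat]).ravel()
ber = np.mean(bits != bits_hat)
```

With a Gray mapping, most symbol errors land on an adjacent symbol and flip only one of the two bits, so the BER is roughly half the symbol error rate; it can never exceed it.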

No matter what the exact form of the error measure, the ultimate goal is the accurate and efficient transmission of the message.

Coding and decoding

What is information? How much can move across a particular channel in a given amount of time? Claude Shannon proposed a method of measuring information in terms of bits, and a measure of the capacity of the channel in terms of the bit rate, the number of bits transmitted per second (recall the quote at the beginning of the first chapter). This is defined quantitatively by the channel capacity, which depends on the bandwidth of the channel and on the power of the noise in comparison to the power of the signal. For most receivers, however, the reality is far from this capacity, and this is caused by two factors. First, the data to be transmitted are often redundant, and the redundancy squanders the capacity of the channel. Second, the noise can be unevenly distributed among the symbols; when large bursts of noise disrupt the signal, excessive errors occur.
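For the additive white Gaussian noise channel, Shannon's result takes the well-known closed form $C = B \log_2(1 + S/N)$ bits per second, with bandwidth $B$ in Hz and signal-to-noise power ratio $S/N$. The numbers below are standard ballpark values for a voice-grade telephone line, used only for illustration (they are not taken from the text):

```python
import math

def capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Voice-grade telephone channel: roughly 3 kHz bandwidth, 30 dB SNR.
snr = 10 ** (30 / 10)       # 30 dB -> linear power ratio of 1000
c = capacity(3000.0, snr)   # on the order of 30 kbit/s
```

Doubling the bandwidth doubles the capacity, but doubling the signal power only adds one more bit per two-dimensional use, which is why the capacity grows only logarithmically with SNR.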

The problem of redundancy is addressed in [link] by source coding, which strives to represent the data in the most concise manner possible. After demonstrating the redundancy and correlation of English text, [link] introduces the Huffman code, a variable-length code that assigns short bit strings to frequent symbols and longer bit strings to infrequent symbols. Like Morse code, this encodes the letter “e” with a short code word and the letter “z” with a long code word. The Huffman procedure can be applied to any symbol set (not just the letters of the alphabet), and is “nearly” optimal in the sense that it approaches the limits set by Shannon.
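A compact implementation of the Huffman procedure can make the idea concrete. The sketch below is an assumed implementation, not the text's own: it repeatedly merges the two least frequent groups, prefixing a 0 to one group's codewords and a 1 to the other's.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Map each symbol to a bit string, short for frequent symbols."""
    # Heap entries: (total frequency, unique tiebreaker, {symbol: codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # two least frequent groups
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, tick, merged))
        tick += 1
    return heap[0][2]

text = "this is an example of huffman coding"
code = huffman_code(Counter(text))
```

In this sample the space character, the most frequent symbol, receives a codeword no longer than that of the rare letter “x”, and the codeword lengths satisfy the Kraft equality, so the code is a complete prefix code.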





Source:  OpenStax, Software receiver design. OpenStax CNX. Aug 13, 2013 Download for free at http://cnx.org/content/col11510/1.3