This module explains in simple terms the functions of a convolutional encoder and a Viterbi decoder.

Viterbi convolutional decoder

A convolutional code is not decoded in short blocks as a block code is. To simplify decoding, however, the continuous message stream is artificially divided into very long blocks by periodically flushing the encoder with a string of zeros, as in the example discussed here.

For illustration only, this example uses an unrealistically short block length of 5 data bits, with the last two fixed at 0 to flush the encoder. (Remember that such a short block is very inefficient; practical block lengths are very much longer, typically 1,000 to 10,000 bits.)
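To make the flushing step concrete, here is a minimal Python sketch of a rate-½ encoder that appends two flushing zeros to each short data block. The generator taps used (1 + D² and 1 + D + D²) are an assumption chosen so that the branch outputs agree with those quoted later in this section; the encoder figure referenced in this module remains the authority on the actual connections.

```python
def conv_encode(data_bits, flush=2):
    """Rate-1/2 convolutional encoder with 2-bit memory (a sketch).

    Generator taps are an assumption (g1 = 1 + D^2, g2 = 1 + D + D^2),
    chosen to reproduce the branch outputs quoted in this section.
    """
    g1 = (1, 0, 1)                         # taps on [current, previous, one-before]
    g2 = (1, 1, 1)
    bits = list(data_bits) + [0] * flush   # flushing zeros reset the encoder state
    s1, s0 = 0, 0                          # shift-register contents (encoder state)
    out = []
    for u in bits:
        reg = (u, s1, s0)
        out1 = sum(r & g for r, g in zip(reg, g1)) % 2
        out2 = sum(r & g for r, g in zip(reg, g2)) % 2
        out.append(f"{out1}{out2}")
        s1, s0 = u, s1                     # shift the new bit in
    return out

# Example: 3 information bits + 2 flushing zeros -> 5 encoded bit pairs
print(conv_encode([1, 0, 1]))   # under these assumed taps: ['11', '01', '00', '01', '11']
```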

Convolutional codes are normally decoded using the Viterbi algorithm, as this simplifies the decoding operation. The algorithm is based on the nearest-neighbour decoding principle and, like the other algorithms we have looked at, it relies on the assumption that the probability of t errors is much greater than the probability of t + 1 errors; it therefore retains only the paths with the fewest accumulated errors.
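The "fewer errors are more likely" assumption can be checked with a few lines of arithmetic. The sketch below uses an assumed, purely illustrative channel bit-error probability of 1% and shows how quickly the probability of exactly t errors in a ten-bit received block falls as t increases, which is why retaining only the lowest-distance paths is safe.

```python
from math import comb

p = 0.01        # assumed channel bit-error probability (illustrative only)
n = 10          # number of received coded bits in our short example

for t in range(4):
    prob = comb(n, t) * p**t * (1 - p)**(n - t)
    print(f"P(exactly {t} errors in {n} bits) = {prob:.2e}")
# Each additional error is roughly 100 times less likely, so the path with
# the smallest Hamming distance is overwhelmingly the most probable one.
```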

The decoding process is based on the decoding trellis introduced previously. We will use the earlier ½ rate encoder example and assume that the received message is: 10 10 00 10 10. This represents five (unknown) transmitted data bits, each encoded into a bit pair, i.e. a total of ten encoded bits. We further assume in this simplified example that the last 2 of the 5 data bits were flushing zeros, inserted to reset the encoder and decoder.
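Before working through the trellis by hand, the following sketch performs the whole decoding in one go, so the hand calculation below can be checked against it. It is only a sketch: the encoder generator taps are the same assumption as in the encoder sketch above (chosen to reproduce the branch outputs quoted in this section), and the helper names are invented for illustration.

```python
def encode_step(u, state):
    """One encoder step: returns (expected output pair, next state).
    state = (s1, s0), the two most recent input bits; taps g1 = 1 + D^2,
    g2 = 1 + D + D^2 are an assumption (see the encoder sketch above)."""
    s1, s0 = state
    return f"{u ^ s0}{u ^ s1 ^ s0}", (u, s1)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi_decode(received_pairs, flush=2):
    n = len(received_pairs)
    # survivors: state -> (cumulative Hamming distance, decoded bits so far)
    survivors = {(0, 0): (0, [])}
    for i, rx in enumerate(received_pairs):
        inputs = (0,) if i >= n - flush else (0, 1)   # flushing stages allow only 0
        new_survivors = {}
        for state, (dist, bits) in survivors.items():
            for u in inputs:
                out, nxt = encode_step(u, state)
                cand = (dist + hamming(out, rx), bits + [u])
                # keep only the lower-distance path into each state (ties arbitrary)
                if nxt not in new_survivors or cand[0] < new_survivors[nxt][0]:
                    new_survivors[nxt] = cand
        survivors = new_survivors
    dist, bits = survivors[(0, 0)]        # a flushed block always ends in the all-zero state
    return bits[:n - flush], dist

received = ["10", "10", "00", "10", "10"]
data, dist = viterbi_decode(received)
print("decoded data bits:", data, "  total Hamming distance:", dist)
# Under the assumed taps this prints: decoded data bits: [1, 1, 1]  total Hamming distance: 3
```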

Starting (after flushing) with the first received bit pair at position A in the trellis, we know from the encoder figure that if a 1 had been input (lower path) the output should have been 11 as we moved to state C, while if a 0 had been input (upper path) we should have received 00 and moved to state B, see the upper part of [link] .

What was actually received was 10, a Hamming distance of 1 from both of these possibilities, so we record this on the first stage of our decoding trellis in the lower part of [link] .

First stage of trellis after decoding first two received data bits

Instead of recording the expected outputs, we next annotate the lower part of [link] with the individual Hamming distances between the received pair and the expected encoder output on each path. We then add the cumulative Hamming distance to each state, shown in square brackets above states B and C.
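In code, this first-stage annotation is just two Hamming-distance calculations, starting from a cumulative distance of zero at A; the expected branch outputs 00 and 11 are taken directly from the text.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

received_1 = "10"                 # first received bit pair
branch_to_B = "00"                # expected output for a 0 input from A
branch_to_C = "11"                # expected output for a 1 input from A

metric_B = 0 + hamming(received_1, branch_to_B)   # cumulative distance at B
metric_C = 0 + hamming(received_1, branch_to_C)   # cumulative distance at C
print(metric_B, metric_C)         # -> 1 1, as annotated in the figure
```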

Now consider the second pair of received data bits, and consider state B first. As before, we should have received 00 for a 0 input and 11 for a 1 input, see the left-hand side of [link] . What we actually received was 10, which is a Hamming distance of 1 from both possibilities, so the right-hand part of [link] is annotated with the individual and cumulative distances to states D and E.

Then consider state C. For a 0 input (upper path) we should have received 01, but what was actually received was 10, a Hamming distance of 2. For a 1 input (lower path) we should have received 10, which is exactly what was received, corresponding to a Hamming distance of 0! Again, the right-hand part of [link] is annotated with the individual distances on each path and the new cumulative (summed) distances to states F and G.
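Repeating the same bookkeeping for the second received pair reproduces the four cumulative distances shown at D, E, F and G; again, the expected branch outputs are those quoted in the text.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

received_2 = "10"
metric_B, metric_C = 1, 1                  # cumulative distances after the first stage

# (cumulative distance at the from-state, expected branch output), as in the text
branches = {
    "D": (metric_B, "00"),   # B with a 0 input
    "E": (metric_B, "11"),   # B with a 1 input
    "F": (metric_C, "01"),   # C with a 0 input
    "G": (metric_C, "10"),   # C with a 1 input
}
for node, (prev, expected) in branches.items():
    print(node, prev + hamming(received_2, expected))
# -> D 2, E 2, F 3, G 1: the branch into G matches the received pair exactly
```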

Source: OpenStax, Communications source and channel coding with examples. OpenStax CNX, May 7, 2009. Download for free at http://cnx.org/content/col10601/1.3