Block codes are one example of the forward error correcting coding (FECC) technique, where we encode the signal by adding bits or digits of redundant data so that the decoder is able to correct most of the errors introduced by transmission through a noisy channel. FECC was developed for deep space probes, where the extremely long propagation path loss results in received data with a particularly low signal to noise ratio, since the modest transmitter power is limited by the solar panel output.
Because the information data bits are included directly within the codewords, this is a systematic encoder. The additional bits required to transmit the redundant information increase the data rate, which consumes more bandwidth if we wish to maintain the same throughput; but if we seek low error rates, this trade-off is usually acceptable.
[link] shows the typical error rate performance for uncoded data compared with FECC data. It plots the bit error probability, P_b, against E_b/N_0. E_b/N_0 is the energy per bit to noise power spectral density ratio and is the signal to noise ratio measure normally used on these error rate plots.
FECC is used widely in compact discs (CD), computer storage and data transmission, all manner of radio links, data modems, video, TV and cellphone transmissions, space communications etc. Note in [link] the ability of FECC to achieve a much lower error rate than the uncoded data transmissions at low bit error probability, P_b.
In some computer communication systems, information is sent as 7-bit ASCII codes with a parity check bit added on the end. Using even parity, the 7-bit all-zero ASCII code 0000000 expands into 00000000, while 0000001 encodes to 00000011. This pair (and every other pair) thus has a binary digit difference, or Hamming distance, of 2. [link] shows that we can use this to detect 1 (or an odd number of) errors.
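The parity-bit expansion described above can be sketched in a few lines of Python; this is a minimal illustration, assuming the parity bit is appended as the last bit of the 8-bit word.

```python
def add_even_parity(bits7: str) -> str:
    """Append a parity bit so the 8-bit word has an even number of 1s."""
    parity = bits7.count("1") % 2
    return bits7 + str(parity)

def hamming_distance(a: str, b: str) -> int:
    """Count the bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

w0 = add_even_parity("0000000")   # -> "00000000"
w1 = add_even_parity("0000001")   # -> "00000011"
print(w0, w1, hamming_distance(w0, w1))  # prints: 00000000 00000011 2
```

A single flipped bit makes the number of 1s odd, which the receiver detects by recomputing the parity; two flipped bits cancel out and pass unnoticed.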
The block length is then n = 8 and the number of information bits is k = 7. This generally assists with error detection, but it is insufficiently robust or redundant to achieve an error correction capability, as the coding rate is only 7/8.
The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance, d_min, which is 2 for the case of odd or even parity check in ASCII data transmission. We can then calculate the error detecting and correcting power of a code from the minimum distance in bits between error free blocks or codewords, see error correction capability module.
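As a quick sketch of how this minimum distance is measured, the snippet below computes it by brute force over a small stand-in codebook (the even-parity extension of all 3-bit words) and applies the standard bounds: a code is guaranteed to detect up to d_min - 1 errors and correct up to floor((d_min - 1)/2); the specific codebook here is an illustrative assumption, not one from the text.

```python
from itertools import combinations

def min_hamming_distance(codewords):
    """Minimum pairwise Hamming distance over a set of codewords."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))

# Even-parity extension of all 3-bit words (a small stand-in codebook):
codebook = [f"{i:03b}" + str(bin(i).count("1") % 2) for i in range(8)]

d_min = min_hamming_distance(codebook)   # 2 for a single parity check code
detects = d_min - 1                      # guaranteed error detection
corrects = (d_min - 1) // 2              # guaranteed error correction
print(d_min, detects, corrects)          # prints: 2 1 0
```

A d_min of 2 therefore detects single errors but corrects none, matching the parity check discussion above.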
Although we shall look exclusively at coding schemes for binary systems, error correcting and detecting coding is not confined to binary digits. For example, the ISBN numbers used on books have a check digit appended to them, calculated via modulo 11 arithmetic.
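As a non-binary illustration of the same idea, the ISBN-10 check works as follows: the ten digits are weighted 10 down to 1, and the weighted sum must be divisible by 11 (the check digit 'X' stands for the value 10). A minimal validator:

```python
def isbn10_is_valid(isbn: str) -> bool:
    """Validate an ISBN-10 via its modulo-11 weighted checksum.
    Digits are weighted 10 down to 1; check digit 'X' stands for 10."""
    digits = [10 if c in "Xx" else int(c) for c in isbn if c not in "- "]
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

print(isbn10_is_valid("0-306-40615-2"))  # prints: True
print(isbn10_is_valid("0-306-40615-3"))  # prints: False (corrupted digit)
```

Because 11 is prime, this checksum catches every single-digit error and every transposition of two adjacent digits.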
Block codes collect or arrange incoming information carrying data into groups of k binary digits and add coding (i.e. parity) bits to increase the coded block length up to n bits, where n>k.
The coding rate R is simply the ratio of data or information carrying bits to the overall block length, k/n. The number of parity check (redundant) bits is therefore n - k, [link]. This block code is usually described as an (n, k) code.
Suppose we want to code k = 4 information bits into an n = 7 bit codeword, giving a coding rate of 4/7. Code design is performed by using finite field algebra to achieve linear codes. We can achieve this (7, 4) block code using 3-input exclusive-or (EX-OR) gates to form the three even parity check bits, p_1, p_2 and p_3, from the 4 information carrying bits, d_1 ... d_4, as shown in [link].
This circuitry can be represented by the logic gates in [link] or written either as a set of parity check equations or the corresponding parity check matrix H, as in [link].
Remember here that the “cross-in-the-circle” symbol indicates a bitwise exclusive-or (EX-OR) operation. This H matrix can also be used to directly generate codewords from the information bits via a closely related G matrix.
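The encoder and its parity check matrix can be sketched as below. Note the three parity equations used here are one common Hamming (7, 4) choice, assumed for illustration; the text's own equations and H matrix are given in its referenced figures.

```python
def encode_74(d):
    """Systematic (7,4) encoder: d = [d1, d2, d3, d4] bits,
    returns the codeword [d1, d2, d3, d4, p1, p2, p3]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # each parity bit is an EX-OR (mod-2 sum)
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

# Parity check matrix H matching the equations above (columns are
# d1..d4 then p1..p3); H * c^T = 0 over GF(2) for every valid codeword.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(c):
    """Multiply H by the codeword over GF(2); all-zero means no error
    was detected."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

c = encode_74([1, 0, 1, 1])
print(c, syndrome(c))   # syndrome is [0, 0, 0] for a valid codeword
```

Flipping any single bit of c makes the syndrome non-zero, and its pattern identifies which bit to correct, which is precisely the error correction capability the text develops.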
This is an example of a systematic code, where the data is included directly within the codeword. Convolutional FECC, see later module, is an example of a non-systematic coder where we do not explicitly include the information carrying data within the transmissions, although the transmitted coded data is derived from the information data.