This module is part of the collection, A First Course in Electrical and Computer Engineering . The LaTeX source files for this collection were created using an optical character recognition technology, and because of this process there may be more errors than usual. Please contact us if you discover any errors.

Acknowledgment: Richard Hamming's book, Information Theory and Coding, Prentice-Hall, New York (1985), and C. T. Mullis's unpublished notes have influenced our treatment of binary codes. The numerical experiment was developed by Mullis.

We use this chapter to introduce students to the communication paradigm and to show how arbitrary symbols may be represented by binary codes. These symbols and their corresponding binary codes may be computer instructions, integer data, approximations to real data, and so on.

We develop some ad hoc tree codes for representing information and then develop Huffman codes for optimizing the use of bits. Hamming codes add check bits to a binary word so that errors may be detected and corrected. The numerical experiment has the students design a Huffman code for coding Lincoln's Gettysburg Address.
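To preview the flavor of the Huffman construction used in the numerical experiment, here is a minimal sketch in Python (not the chapter's own development): it repeatedly merges the two least frequent subtrees, so frequent symbols end up with short codewords. The sample text is the opening phrase of the Gettysburg Address.

import heapq
from collections import Counter

def huffman_code(text):
    """Return a prefix code: frequent symbols get short binary codewords."""
    # Heap entries are (frequency, tiebreaker, {symbol: codeword so far}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)   # the two least frequent subtrees
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}        # left branch
        merged.update({s: "1" + w for s, w in c1.items()})  # right branch
        heapq.heappush(heap, (n0 + n1, next_id, merged))
        next_id += 1
    return heap[0][2]

code = huffman_code("four score and seven years ago")
for symbol, word in sorted(code.items(), key=lambda kv: len(kv[1])):
    print(repr(symbol), word)

Running this prints the most frequent symbols (such as the space) with the shortest codewords, which is exactly the economy of bits the chapter is after.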

Introduction

It would be stretching our imagination to suggest that Sir Francis had digital audio on his minde (sic) when he wrote the prophetic words

...a man may express and signifie the intentions of his minde, at any distance... by... objects... capable of a twofold difference onely.

Sir Francis Bacon, 1623

Nonetheless, this basic idea forms the basis of everything we do in digital computing, digital communications, and digital audio/video. In 1832, Samuel F. B. Morse used the very same idea to propose that telegram words be coded into binary addresses or binary codes that could be transmitted over telegraph lines and decoded at the receiving end to unravel the telegram. Morse abandoned his scheme, illustrated in Figure 1 , as too complicated and, in 1838, proposed his fabled Morse code for coding letters (instead of words) into objects (dots, dashes, spaces) capable of a threefold difference onely (sic).

Figure 1: Generalized Coder-Decoder. A word w_i enters an address generator, or codebook (an associative memory, the coder), which produces a binary address a_i (for example, a_i = 011010). The address is transmitted to a word generator, or inverse codebook (a memory, the decoder), which recovers the word w_i.

The basic idea of Figure 1 is used today in cryptographic systems, where the “address a_i” is an encyphered version of a message w_i; in vector quantizers, where the “address a_i” is the address of a close approximation to data w_i; in coded satellite transmissions, where the “address a_i” is a data word w_i plus parity check bits for detecting and correcting errors; in digital audio systems, where the “address a_i” is a stretch of digitized and coded music; and in computer memories, where a_i is an address (a coded version of a word of memory) and w_i is a word in memory.
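To fix the idea, here is a minimal sketch of Figure 1 in Python. The words and most of the 6-bit addresses are our own hypothetical choices; only the address 011010 appears in the figure.

# Address generator (codebook): an associative memory from words to addresses.
codebook = {
    "ship":   "011010",   # the address a_i shown in Figure 1
    "arrive": "011011",   # hypothetical
    "today":  "100100",   # hypothetical
}

# Word generator (inverse codebook): a memory from addresses back to words.
inverse_codebook = {a: w for w, a in codebook.items()}

def code(words):
    """Coder: map each word w_i to its binary address a_i."""
    return [codebook[w] for w in words]

def decode(addresses):
    """Decoder: map each address a_i back to its word w_i."""
    return [inverse_codebook[a] for a in addresses]

message = ["ship", "arrive", "today"]
assert decode(code(message)) == message   # the decoder unravels the message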

In this chapter we study three fundamental questions in the construction of binary addresses or binary codes. First, what are plausible schemes for mapping symbols (such as words, letters, computer instructions, voltages, pressures, etc.) into binary codes? Second, what are plausible schemes for coding likely symbols with short binary words and unlikely symbols with long words in order to minimize the number of binary digits (bits) required to represent a message? Third, what are plausible schemes for “coding” binary words into longer binary words that contain “redundant bits” that may be used to detect and correct errors? These are not new questions. They have occupied the minds of many great thinkers. Sir Francis recognized that arbitrary messages had binary representations. Alan Turing, Alonzo Church, and Kurt Gödel studied binary codes for computations in their study of computable numbers and algorithms. Claude Shannon, R. C. Bose, Irving Reed, Richard Hamming, and many others have studied error control codes. Shannon, David Huffman, and many others have studied the problem of efficiently coding information.
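As a first taste of the third question, here is a minimal sketch of a single even-parity check bit, a far simpler scheme than the Hamming codes developed later in this chapter. One redundant bit makes the total number of 1s even, so any single-bit error is detected, though not located or corrected.

def add_parity(word):
    """Append one even-parity check bit so the total number of 1s is even."""
    return word + str(word.count("1") % 2)

def parity_ok(received):
    """True if the received word still has even parity."""
    return received.count("1") % 2 == 0

sent = add_parity("0110101")            # '01101010': four 1s, check bit '0'
assert parity_ok(sent)
corrupted = sent[:3] + ("1" if sent[3] == "0" else "0") + sent[4:]
assert not parity_ok(corrupted)         # the single flipped bit is detected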

In this chapter we outline the main ideas in binary coding and illustrate the role that binary coding plays in digital communications. In your subsequent courses in electrical and computer engineering you will study integrated circuits for building coders and decoders and mathematical models for designing good codes.





Source: OpenStax, A First Course in Electrical and Computer Engineering. OpenStax CNX. Sep 14, 2009. Download for free at http://cnx.org/content/col10685/1.2