
Run-length codes (RLCs) are a simple and effective way of improving the efficiency of Huffman coding when one event is much more probable than all of the others combined. They operate as follows (a minimal sketch of the run-length conversion is given after the list):

  • The pels of the subimage are scanned sequentially (usually in columns or rows) to form a long 1-dimensional vector.
  • Each run of consecutive zero samples (the most probable events) in the vector is coded as a single event.
  • Each non-zero sample is coded as a single event in the normal way.
  • The two types of event (runs-of-zeros and non-zero samples) are allocated separate sets of codewords in the same Huffman code, which may be designed from a histogram showing the frequencies of all events.
  • To limit the number of run events, the maximum run length may be limited to a certain value (we have used 128) and runs longer than this may be represented by two or more run codes in sequence, with negligible loss of efficiency.
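
The list above can be summarised by a minimal MATLAB sketch of the run-length conversion. The variable names (x for the scanned sample vector, events for the event vector, runoffset for a hypothetical numbering offset that keeps run events separate from sample events) are assumptions for illustration and are not part of the original M-files.

% Sketch: convert a vector x of quantised samples into RLC events.
% Runs of zeros become single events (numbered as runoffset + run length);
% non-zero samples are passed through unchanged.  The event numbering
% (runoffset) is an assumption for illustration, not the original scheme.
maxrun = 128;             % Maximum run length, as used in the text.
runoffset = 1000;         % Hypothetical offset separating run events
                          % from sample-amplitude events.
events = [];
run = 0;
for k = 1:length(x)
  if x(k) == 0
    run = run + 1;
    if run == maxrun      % Split over-long runs into several run events.
      events = [events; runoffset + run];
      run = 0;
    end
  else
    if run > 0            % Terminate any current run of zeros.
      events = [events; runoffset + run];
      run = 0;
    end
    events = [events; x(k)];   % Non-zero sample coded as a normal event.
  end
end
if run > 0                % Flush a final run of zeros, if any.
  events = [events; runoffset + run];
end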
Hence RLC may be added before Huffman coding as an extra processing step, which converts the most probable event into many separate events, each of which has $p_i \le 0.5$ and may therefore be coded efficiently. The figure below shows the new probability histograms and entropies for level 1 of the Haar transform when RLC is applied to the zero event of the three bandpass subimages. Comparing this with the corresponding figure before RLC, note the absence of the high-probability zero events and the new states to the right of the original histograms, corresponding to the run lengths.

Probability histograms (dashed) and entropies (solid) of the four subimages of the Level 1 Haar transform of Lenna (see figure ) after RLC.

The total entropy per event for an RLC subimage is calculated as before from the entropy histogram. However, to get the entropy per pel we scale the entropy per event by the ratio of the number of events (runs and non-zero samples) in the subimage to the number of pels in the subimage (note that with RLC this ratio will no longer equal one; it will hopefully be much less).
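
As a minimal sketch of this calculation (assuming eventhist holds the histogram of event counts for the RLC'd subimage and npels is the number of pels in that subimage; both names are introduced here for illustration):

% Sketch: entropy per event and per pel for a subimage after RLC.
% eventhist - histogram of event counts (runs and non-zero samples);
% npels     - number of pels in the subimage.  Both names are assumptions.
p = eventhist(:) / sum(eventhist(:));    % Event probabilities.
p = p(p > 0);                            % Ignore empty histogram bins.
entpevent = -sum(p .* log2(p));          % Entropy per event (bits).
nevents = sum(eventhist(:));             % Total number of events.
entppel = entpevent * nevents / npels;   % Scale by the events-per-pel ratio.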

The entropies per pel after RLC, given for each subimage in the results table referred to below, are now less than the entropies before RLC. This is because RLC takes advantage of spatial clustering of the zero samples in a subimage, rather than just depending on the histogram of amplitudes.

Clearly, if all the zeros were clustered into a single run, this could be coded much more efficiently than if they were distributed into many runs. The entropy of the zero event tells us the mean number of bits to code each zero pel if the zero pels are distributed randomly, i.e. if the probability of a given pel being zero does not depend on the amplitudes of any nearby pels.
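
As a rough numerical illustration (the figures below are hypothetical and are not taken from the Lenna results): suppose a fraction p0 of the pels are zero. If the zeros occur independently, coding them costs about -log2(p0) bits per zero pel; if instead they are grouped into runs of average length R, each run needs only one codeword, shared by R pels.

% Rough illustration of the benefit of clustering (hypothetical numbers).
p0 = 0.8;                      % Assumed fraction of zero pels in the subimage.
bits_random = -log2(p0);       % Bits per zero pel if zeros occur independently.
R = 20;                        % Assumed average run length of the zeros.
nrunev = p0 / R;               % Run events per pel.
nnzev  = 1 - p0;               % Non-zero sample events per pel.
prun = nrunev / (nrunev + nnzev);   % Probability of a run event among all events.
bits_clustered = -log2(prun) / R;   % One run codeword shared by R zero pels.
fprintf('random: %.2f bits/zero pel, clustered: %.2f bits/zero pel\n', ...
        bits_random, bits_clustered);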

In typical bandpass subimages, non-zero samples tend to be clustered around key features such as object boundaries and areas of high texture. Hence RLC usually reduces the entropy of the data to be coded. There are many other ways to take advantage of clustering (correlation) of the data; RLC is just one of the simplest.

Comparing column 5 with column 3 of the results table, we see the modest (7%) reduction in entropy per pel achieved by RLC, due to clustering in the Lenna image. The main advantage of RLC is apparent in column 6, which shows the mean bit rate per pel when we use a real Huffman code on the RLC histograms above. The increase in bit rate over the RLC entropy is only $\frac{1.5071}{1.4977} - 1 = 0.63\%$, compared with 14.4% when RLC is not used (columns 3 and 4).

Finally, comparing column 6 with column 3, we see that, relative to the simple entropy measure, combined RLC and Huffman coding can reduce the bit rate by $1 - \frac{1.5071}{1.6103} = 6.4\%$. The closeness of this ratio to unity justifies our use of simple entropy as a tool for assessing the information compression properties of the Haar transform, and of other energy compression techniques as we meet them.
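
These percentages can be checked directly from the per-pel values quoted above (a quick sketch; the variable names are introduced here for illustration):

% Check of the quoted percentages (per-pel values taken from the text above).
ent_norlc = 1.6103;    % Column 3: entropy per pel without RLC.
ent_rlc   = 1.4977;    % Column 5: entropy per pel with RLC.
bits_huff = 1.5071;    % Column 6: Huffman bit rate per pel with RLC.
rlc_saving = 100 * (1 - ent_rlc/ent_norlc);   % approx 7.0% reduction from RLC alone.
huff_loss  = 100 * (bits_huff/ent_rlc - 1);   % approx 0.63% Huffman overhead with RLC.
net_saving = 100 * (1 - bits_huff/ent_norlc); % approx 6.4% saving relative to column 3.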

The following is the listing of the M-file to calculate the Huffman entropy from a given histogram.

% Find Huffman code sizes: JPEG fig K.1, procedure Code_size.
% huffhist contains the histogram of event counts (frequencies).
freq = huffhist(:);
codesize = zeros(size(freq));
others = -ones(size(freq));   % Pointers to next symbols in code tree.

% Find non-zero entries in freq, and loop until only 1 entry left.
nz = find(freq > 0);
while length(nz) > 1,
  % Find v1 for least value of freq(v1) > 0.
  [y,i] = min(freq(nz));
  v1 = nz(i);
  % Find v2 for next least value of freq(v2) > 0.
  nz = nz([1:(i-1) (i+1):length(nz)]);   % Remove v1 from nz.
  [y,i] = min(freq(nz));
  v2 = nz(i);
  % Combine frequency values.
  freq(v1) = freq(v1) + freq(v2);
  freq(v2) = 0;
  codesize(v1) = codesize(v1) + 1;
  % Increment code sizes for all codewords in this tree branch.
  while others(v1) > -1,
    v1 = others(v1);
    codesize(v1) = codesize(v1) + 1;
  end
  others(v1) = v2;
  codesize(v2) = codesize(v2) + 1;
  while others(v2) > -1,
    v2 = others(v2);
    codesize(v2) = codesize(v2) + 1;
  end
  nz = find(freq > 0);
end

% Generate Huffman entropies by multiplying probabilities by code sizes.
huffent = (huffhist(:)/sum(huffhist(:))) .* codesize;
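
As a usage sketch (the histogram values and variable names below are hypothetical), the mean Huffman bit rate per event follows directly from huffent once the listing above has been run:

% Usage sketch: mean Huffman bit rate per event from the listing above.
huffhist = [4000 1500 800 400 200 100];  % Hypothetical histogram of event counts.
% ... run the M-file listed above, which sets codesize and huffent ...
meanbits_event = sum(huffent);           % Mean Huffman bits per event.
% To convert to bits per pel, scale by the events-per-pel ratio as before.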

Source:  OpenStax, Image coding. OpenStax CNX. Jan 22, 2004 Download for free at http://cnx.org/content/col10206/1.3