
As a consequence of the structure of y, it is easy to see that it can be compressed with the following redundancy,

\[
\frac{|T|(r-1)}{2}\log\!\left(\frac{n}{|T|}\right) + O(1) + |T|\log(n),
\]

where the new $|T|\log(n)$ term arises from coding the locations of transitions between segments (states of the tree) in the BWT output. Not only is the BWT convenient for compression, it is also amenable to fast computation: both the BWT and its inverse can be implemented in $O(n)$ time. This combination of good compression and speed has made the BWT quite popular in compressors that have appeared since the late 1990s; for example, the bzip2 archiving package is very popular among network administrators.
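To make the transform concrete, here is a minimal Python sketch of the BWT and its inverse for a string terminated by a sentinel character. This naive version sorts all rotations and therefore does not achieve the linear running time mentioned above; linear-time implementations are built on suffix arrays or suffix trees. The function names are illustrative.

```python
# Minimal sketch of the BWT and its inverse (illustration only).
# This naive version sorts all rotations, so it runs in O(n^2 log n) time;
# linear-time implementations build a suffix array or suffix tree instead.

def bwt(s: str, sentinel: str = "$") -> str:
    """Return the Burrows-Wheeler transform of s (sentinel marks the end)."""
    s = s + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)      # last column of sorted rotations

def inverse_bwt(last: str, sentinel: str = "$") -> str:
    """Invert the BWT by repeatedly prepending the last column and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

if __name__ == "__main__":
    y = bwt("abracadabra")
    print(y)                  # symbols preceded by similar contexts cluster into segments
    print(inverse_bwt(y))     # recovers "abracadabra"
```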

That said, from a theoretical perspective the BWT suffers from an extraneous redundancy of $|T|\log(n)$ bits. Until this gap was resolved, the theoretical community preferred the semi-predictive method or other approaches based on mixtures.

Semi-predictive coding using the BWT

Another approach is to use the BWT output $y$ only for learning the MDL tree source $T^*$. To do so, note that while the BWT is being computed, it is possible to track the correspondence between contexts and segments of the BWT output. Therefore, per-segment symbol counts are available and can be applied directly in the tree pruning procedure that we have seen. Moreover, some BWT computation algorithms (e.g., suffix tree approaches) maintain this information for all context depths, not just up to a bounded depth $D$. In short, the BWT allows the minimizing tree $T^*$ to be computed in linear time [link].

Given the minimizing tree $T^*$, it is not obvious how to determine in linear time which state generated each character of $y$ (respectively, $x$). Martín et al. [link] showed that this step can also be performed in linear time by constructing a state machine whose states include the leaves of $T^*$. The result is a two-part code, where the first part computes the optimal $T^*$ via the BWT, and the second part actually compresses $x$ by tracking which state of $T^*$ generated each of its symbols. To summarize, we have a linear complexity algorithm for compressing and decompressing a source while achieving the redundancy bounds for the class of tree sources.
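The following Python sketch illustrates the tree-pruning half of this two-part scheme for a binary alphabet and maximum depth D. It assumes the per-context symbol counts have already been gathered from the segments of the BWT output, charges one bit per node for describing the tree structure, and omits the state-machine encoding step; the function names and the exact structure-cost bookkeeping are illustrative rather than taken from the references.

```python
# MDL tree pruning for the semi-predictive approach (binary alphabet, depth D).
# Assumes `counts` maps each depth-D context to its (n0, n1) next-symbol counts,
# as obtained from the segments of the BWT output.

from math import lgamma, log

def kt_code_length(n0: int, n1: int) -> float:
    """Code length in bits of the Krichevsky-Trofimov estimator for counts (n0, n1)."""
    log_p = (lgamma(n0 + 0.5) + lgamma(n1 + 0.5)
             - 2.0 * lgamma(0.5) - lgamma(n0 + n1 + 1.0))
    return -log_p / log(2.0)

def prune(counts: dict, context: str, D: int):
    """Return (n0, n1, cost in bits, pruned subtree) for the node `context`.

    A subtree equal to None marks a leaf of the MDL tree T*; children extend
    the context one symbol further into the past.
    """
    if len(context) == D:
        n0, n1 = counts.get(context, (0, 0))
        return n0, n1, kt_code_length(n0, n1), None
    a0, b0, cost0, tree0 = prune(counts, "0" + context, D)
    a1, b1, cost1, tree1 = prune(counts, "1" + context, D)
    n0, n1 = a0 + a1, b0 + b1
    leaf_cost = kt_code_length(n0, n1)
    # one extra bit per node encodes the structure decision (split vs. leaf)
    if cost0 + cost1 < leaf_cost:
        return n0, n1, cost0 + cost1 + 1.0, {"0": tree0, "1": tree1}
    return n0, n1, leaf_cost + 1.0, None

if __name__ == "__main__":
    # toy counts: context -> (number of 0s, number of 1s) that followed it
    counts = {"00": (40, 10), "10": (35, 15), "01": (5, 45), "11": (8, 42)}
    _, _, cost, tree = prune(counts, "", D=2)
    print(cost, tree)
```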

Context tree weighting

Recall from [link] that, for the problem of encoding a transition between two known i.i.d. distributions,

\[
\frac{1}{n}\sum_{i=1}^{n} p_{\theta_i}(x) > \frac{1}{n}\max_i \left\{ p_{\theta_i}(x) \right\}.
\]

Therefore, a mixture over all parameter values yields a greater probability (and thus a lower coding length) than the maximizing approach. Keep in mind that finding the optimal MDL tree source $T^*$ is analogous to the plug-in approach; the coding length would be reduced if we could assign to $x$ a probability that is a mixture over all possible trees, where trees with fewer leaves receive greater weight. That is, ideally we want to implement

\[
\Pr(x) = \sum_{T} 2^{-|\mathrm{code}(T)|} \cdot p_T(x),
\]

where $|\mathrm{code}(T)|$ is the length of the encoding of the tree structure $T$ that we discussed, and $p_T(x)$ is the probability assigned to the sequence $x$ under the model $T$.
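Since every term in the sum is nonnegative, retaining only the term of the minimizing tree $T^*$ immediately bounds the coding length of this mixture by that of the two-part code:

\[
-\log \Pr(x) = -\log \sum_{T} 2^{-|\mathrm{code}(T)|}\, p_T(x) \le |\mathrm{code}(T^*)| - \log p_{T^*}(x),
\]

so the mixture is never worse than the semi-predictive code, and it avoids having to describe $T^*$ explicitly.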

Willems et al. showed how to implement such a mixture in a simple way over the class of tree sources of bounded depth $D$. As before, the algorithm proceeds in a bottom-up manner from the leaves toward the root. At a leaf, the probability $p_s$ assigned to the symbols generated within that context $s$ is the Krichevsky-Trofimov probability $p_{KT}(s,x)$ [link]. For $s$ an internal node whose depth is less than $D$, the approach of Willems et al. [link] is to mix (i) the probability obtained by keeping the branches to the child contexts $0s$ and $1s$ with (ii) the probability obtained by pruning them to a leaf,

\[
p_s = \tfrac{1}{2}\, p_{KT}(s,x) + \tfrac{1}{2}\, p_{0s} \cdot p_{1s}.
\]

It can be shown that this simple formula implements a mixture over the class of bounded depth context tree sources, thus reducing the coding length with respect to the semi-predictive approach.
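To see the recursion in action, here is a small Python sketch that computes the CTW probability of a binary sequence in batch form: it gathers the symbol counts observed in each depth-$D$ context and then applies $p_s = \tfrac{1}{2} p_{KT}(s,x) + \tfrac{1}{2} p_{0s} p_{1s}$ bottom-up in log-space. Practical CTW coders update the tree sequentially so that an arithmetic coder can use the per-symbol probabilities; that machinery, and the handling of the first $D$ symbols, is omitted here, and the function names are illustrative.

```python
# Batch CTW weighted probability for a binary source of bounded depth D,
# computed in log-space for numerical stability.

from math import lgamma, log, exp
from collections import defaultdict

def log_kt(n0: int, n1: int) -> float:
    """Natural log of the Krichevsky-Trofimov probability for counts (n0, n1)."""
    return (lgamma(n0 + 0.5) + lgamma(n1 + 0.5)
            - 2.0 * lgamma(0.5) - lgamma(n0 + n1 + 1.0))

def gather_counts(x: str, D: int) -> dict:
    """Counts (n0, n1) of the symbol following each depth-D context in x."""
    counts = defaultdict(lambda: [0, 0])
    for i in range(D, len(x)):
        counts[x[i - D:i]][int(x[i])] += 1
    return counts

def ctw_log_prob(counts: dict, context: str, D: int):
    """Return (n0, n1, log weighted probability) for the node `context`."""
    if len(context) == D:
        n0, n1 = counts.get(context, [0, 0])
        return n0, n1, log_kt(n0, n1)
    a0, b0, lw0 = ctw_log_prob(counts, "0" + context, D)
    a1, b1, lw1 = ctw_log_prob(counts, "1" + context, D)
    n0, n1 = a0 + a1, b0 + b1
    lkt, lchild = log_kt(n0, n1), lw0 + lw1
    # log(0.5*exp(lkt) + 0.5*exp(lchild)), computed stably
    m = max(lkt, lchild)
    return n0, n1, log(0.5) + m + log(exp(lkt - m) + exp(lchild - m))

if __name__ == "__main__":
    x = "0110100110010110" * 8          # toy input
    D = 3
    _, _, logp = ctw_log_prob(gather_counts(x, D), "", D)
    print(-logp / log(2), "bits")       # CTW code length, ignoring the first D symbols
```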

In fact, Willems later showed how to extend the context tree weighting (CTW) approach to tree sources of unbounded depth [link]. Unfortunately, while the basic bounded depth CTW has complexity comparable to that of the BWT, the unbounded-depth CTW has potentially higher complexity.

Source:  OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013 Download for free at http://cnx.org/content/col11524/1.1