
Finally, trees can also be traversed in level-order, where we visit every node on a level before going to a lower level. This is also called breadth-first traversal.

Once the binary search tree has been created, its elements can be retrieved in order by recursively traversing the left subtree of the root node, accessing the node itself, then recursively traversing the right subtree, repeating this pattern at each node. The tree may also be traversed in pre-order or post-order. The following are implementations of these traversals (here written in Python, assuming each node has value, left, and right attributes):

def preorder(node):
    # Visit the node before either of its subtrees.
    print(node.value)
    if node.left is not None:
        preorder(node.left)
    if node.right is not None:
        preorder(node.right)

def inorder(node):
    # Visit the left subtree, then the node, then the right subtree;
    # on a binary search tree this prints the values in sorted order.
    if node.left is not None:
        inorder(node.left)
    print(node.value)
    if node.right is not None:
        inorder(node.right)

def postorder(node):
    # Visit both subtrees before the node itself.
    if node.left is not None:
        postorder(node.left)
    if node.right is not None:
        postorder(node.right)
    print(node.value)

All three sample implementations require stack space proportional to the height of the tree. In a poorly balanced tree, the height, and therefore the stack usage, can approach the number of nodes.
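Level-order (breadth-first) traversal, mentioned at the start of this section, is usually written with an explicit queue rather than recursion, so its extra space is proportional to the width of the widest level rather than to the height of the tree. A minimal sketch, assuming the same node interface (value, left, right) as above:

from collections import deque

def levelorder(node):
    # Visit nodes one level at a time, left to right, using a FIFO queue.
    if node is None:
        return
    queue = deque([node])
    while queue:
        current = queue.popleft()
        print(current.value)
        if current.left is not None:
            queue.append(current.left)
        if current.right is not None:
            queue.append(current.right)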

5.2.5. sort

(From Wikipedia, the free encyclopedia)

A binary search tree can be used to implement a simple but inefficient sorting algorithm. As in heapsort, we insert all the values we wish to sort into a new ordered data structure (in this case a binary search tree) and then traverse it in order, building our result:

def build_binary_tree(values):
    # Insert each value in turn, starting from an empty tree.
    tree = None
    for v in values:
        tree = binary_tree_insert(tree, v)
    return tree

def traverse_binary_tree(treenode):
    # In-order traversal of the (left, value, right) tuple representation,
    # returning the stored values as a sorted list.
    if treenode is None:
        return []
    else:
        left, value, right = treenode
        return (traverse_binary_tree(left) + [value] + traverse_binary_tree(right))
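The helper binary_tree_insert is not shown in this section. A minimal sketch consistent with the (left, value, right) tuple representation used by traverse_binary_tree might look like this (it builds a new tree on each insertion rather than mutating nodes, which keeps the tuple representation simple):

def binary_tree_insert(treenode, value):
    # Return a new tree with value inserted; smaller values go left,
    # equal or larger values go right.
    if treenode is None:
        return (None, value, None)
    left, node_value, right = treenode
    if value < node_value:
        return (binary_tree_insert(left, value), node_value, right)
    else:
        return (left, node_value, binary_tree_insert(right, value))

Sorting a list is then a matter of composing the two functions: traverse_binary_tree(build_binary_tree([3, 1, 2])) returns [1, 2, 3].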

The worst-case time of build_binary_tree is Θ(n²): if you feed it a sorted list of values, it chains them into a linked list with no left subtrees. For example, build_binary_tree([1, 2, 3, 4, 5]) yields the tree (None, 1, (None, 2, (None, 3, (None, 4, (None, 5, None))))).

There are several schemes for overcoming this flaw with simple binary trees; the most common is the self-balancing binary search tree. If this same procedure is done using such a tree, the overall worst-case time is O(n log n), which is asymptotically optimal for a comparison sort. In practice, the poor cache performance and the added overhead in time and space of a tree-based sort (particularly for node allocation) make it inferior to other asymptotically optimal sorts such as quicksort and heapsort for sorting a static list. On the other hand, it is one of the most efficient methods of incremental sorting, adding items to a list over time while keeping the list sorted at all times.

5.3. types of binary search trees

(From Wikipedia, the free encyclopedia)

There are many types of binary search trees. AVL trees and red-black trees are both forms of self-balancing binary search trees. A splay tree is a binary search tree that automatically moves frequently accessed elements nearer to the root. In a treap ("tree heap"), each node also holds a priority, and the parent node has a higher priority than its children.
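As an aside, the treap invariant described above can be made concrete with a small validity check. The sketch below assumes a hypothetical TreapNode with key, priority, left, and right fields (not part of the text above); it verifies both the binary-search-tree ordering of keys and the heap ordering of priorities:

class TreapNode:
    # Hypothetical node type: a search key plus a heap-ordered priority.
    def __init__(self, key, priority, left=None, right=None):
        self.key = key
        self.priority = priority
        self.left = left
        self.right = right

def is_valid_treap(node, lo=float("-inf"), hi=float("inf")):
    # An empty subtree is trivially valid.
    if node is None:
        return True
    # Keys must respect the binary-search-tree ordering...
    if not (lo < node.key < hi):
        return False
    # ...and no child may have a higher priority than its parent.
    for child in (node.left, node.right):
        if child is not None and child.priority > node.priority:
            return False
    return (is_valid_treap(node.left, lo, node.key) and
            is_valid_treap(node.right, node.key, hi))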

5.3.1. performance comparisons

D. A. Heger (2004) presented a performance comparison of binary search trees. The treap was found to have the best average performance, while the red-black tree was found to have the smallest performance fluctuations.

5.3.2. optimal binary search trees

If we don't plan on modifying a search tree, and we know exactly how often each item will be accessed, we can construct an optimal binary search tree, which is a search tree where the average cost of looking up an item (the expected search cost) is minimized.

Assume that we know the elements and, for each element, the proportion of future lookups that will be searching for it. We can then use a dynamic programming solution, detailed in Section 15.5 of Introduction to Algorithms (Second Edition) by Thomas H. Cormen et al., to construct the tree with the least possible expected search cost.
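As an illustration, the following sketch is a simplified version of that idea: it assumes we are given only the access probability p[i] of the i-th smallest key (ignoring the probabilities of unsuccessful searches handled in the full algorithm) and returns the minimum expected number of comparisons, trying every key as the root of every subrange:

def optimal_bst_cost(p):
    # p[i] is the access probability of the i-th smallest key.
    # cost[i][j] is the minimum expected search cost over keys i .. j-1.
    n = len(p)
    # prefix[i] = p[0] + ... + p[i-1], so the total weight of keys
    # i .. j-1 is prefix[j] - prefix[i].
    prefix = [0.0] * (n + 1)
    for i in range(n):
        prefix[i + 1] = prefix[i] + p[i]
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):              # size of the key range
        for i in range(n - length + 1):
            j = i + length
            # Every key in the range sits one level deeper than the root,
            # so the whole range's weight is added once per level.
            weight = prefix[j] - prefix[i]
            cost[i][j] = weight + min(
                cost[i][r] + cost[r + 1][j]     # key r as the root
                for r in range(i, j))
    return cost[0][n]

For example, with access probabilities [0.5, 0.3, 0.2] the minimum expected cost works out to 1.7 comparisons per lookup.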

Even if we only have estimates of the search costs, such a system can speed up lookups considerably on average. For example, if you have a BST of English words used in a spell checker, you might balance the tree based on word frequency in text corpora, placing words like "the" near the root and words like "agerasia" near the leaves. Such a tree might be compared with Huffman trees, which similarly seek to place frequently used items near the root in order to produce a dense information encoding; however, Huffman trees store data elements only in their leaves, and those elements need not be ordered.

If we do not know in advance the sequence in which the elements in the tree will be accessed, we can use splay trees, which are asymptotically as good as any static search tree we can construct for any particular sequence of lookup operations.

Alphabetic trees are Huffman trees with the additional constraint on order or, equivalently, search trees with the modification that all elements are stored in the leaves. Faster algorithms than the general dynamic programming solution exist for constructing optimal alphabetic binary trees (OABTs).

Source:  OpenStax, Data structures and algorithms. OpenStax CNX. Jul 29, 2009 Download for free at http://cnx.org/content/col10765/1.1