Trie

[Image: Trie example.png — a trie whose keys are English words with associated integer values]

In computer science, a trie, or prefix tree, is an ordered tree data structure that is used to store an associative array where the keys are strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree shows what key it is associated with. All the descendants of any one node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that happen to correspond to keys of interest.
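
As a concrete illustration, the structure can be sketched in a few lines of Python (an illustrative sketch only; the TrieNode class and insert function are invented for this example and are not part of any library). Each node holds a dictionary mapping characters to child nodes, plus an optional value for nodes that correspond to complete keys:

 class TrieNode:
     def __init__(self):
         self.children = {}   # maps a character to the child node reached over that character
         self.value = None    # set only on nodes that correspond to a complete key

 def insert(root, key, value):
     """Walk down from the root, creating missing nodes, and store the value at the last one."""
     node = root
     for ch in key:
         node = node.children.setdefault(ch, TrieNode())
     node.value = value

 root = TrieNode()
 insert(root, "tea", 3)    # creates nodes for "t", "te" and "tea"; only "tea" gets a value
 insert(root, "ten", 12)   # reuses the "t" and "te" nodes created for "tea"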

The term trie comes from "retrieval". Due to this etymology it is pronounced "tree", although some encourage the use of "try" in order to distinguish it from the more general tree.

In the example shown, keys are listed in the nodes and values below them. Each complete English word has an integer value associated with it. A trie can be seen as a deterministic finite automaton, although the symbol on each edge is often implicit in the order of the branches.

Note that it is not necessary for keys to be explicitly stored in nodes. (In the figure, words are shown only to illustrate how the trie works. The sibling nodes of "i" and "t" are out of order, and the "te" and "to" nodes are also out of order.)

Advantages and drawbacks

The following are the main advantages of tries over binary search trees (BSTs):

  • Looking up keys is faster. Looking up a key of length m takes worst-case O(m) time. A BST takes O(log n) time, because lookups depend on the depth of the tree, which is logarithmic in the number of keys n. Also, the simple operations tries use during lookup, such as array indexing using a character, are fast on real machines.
  • Tries can require less space when they contain a large number of short strings, because the keys are not stored explicitly and nodes are shared between keys.
  • Tries help with longest-prefix matching, where we wish to efficiently find the key sharing the longest possible prefix with a given key. They also allow one to associate a value with an entire group of keys that have a common prefix.
  • There is no need to keep a trie balanced, which for BSTs typically involves a great deal of complexity (see self-balancing binary search tree).

The disadvantages of tries are the following:

  • Tries can give an ordering of the keys, but only in the lexicographic order of their string representation.
  • Tries can be considerably larger in certain circumstances, such as a trie containing a small number of very long strings (Patricia tries help to deal with this).
  • Trie algorithms are more complex than those for simple BSTs.
  • It is not always easy to represent data as strings, e.g. complex objects or floating-point numbers.

Although it seems restrictive to say a trie's key type must be a string, many common data types can be seen as strings; for example, an integer can be seen as a string of bits. Integers with common bit prefixes occur as map keys in many applications such as routing tables and address translation tables.
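
For example, an IPv4 address can be turned into a 32-character bit string and used as a trie key; a minimal Python sketch (the bits helper is invented for this illustration):

 def bits(n, width=32):
     """Spell out integer n as a fixed-width bit string, most significant bit first."""
     return format(n, '0{}b'.format(width))

 # 192.168.0.0 and 192.168.1.0 share a long common bit prefix, so in a bit trie
 # they sit on a long shared path, which is what routing-table lookups exploit.
 print(bits(0xC0A80000))   # 11000000101010000000000000000000
 print(bits(0xC0A80100))   # 11000000101010000000000100000000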

Tries are most useful when the keys are of varying lengths and we expect some key lookups to fail because the key is not present. If we have fixed-length keys and expect all lookups to succeed, then we can improve key lookup by combining every node that has a single child (such as "i" and "in" above) with that child, producing a Patricia trie. This saves space in maps where long paths down the trie do not branch, for example in maps where many keys share a long common prefix or where keys have long unique suffixes.
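
A rough sketch of that compression step, assuming a plain nested-dict trie in Python with a '$' marker for complete words (the function names and the marker are invented for this illustration): chains of single-child nodes with no marker are merged into one multi-character edge.

 def insert(trie, word):
     node = trie
     for ch in word:
         node = node.setdefault(ch, {})
     node['$'] = True                       # mark the end of a complete word

 def compress(node):
     """Merge chains of single-child, non-terminal nodes, as a Patricia trie does."""
     out = {}
     for label, child in node.items():
         if label == '$':                   # keep end-of-word markers as they are
             out[label] = child
             continue
         while len(child) == 1 and '$' not in child:
             (nxt, child), = child.items()  # absorb the lone child into the edge label
             label += nxt
         out[label] = compress(child)
     return out

 trie = {}
 for w in ["in", "inn", "tea", "ten", "to"]:
     insert(trie, w)
 print(compress(trie))   # the single-child chain under "i" collapses into one edge labelled "in"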

Clarification about performance

It is acceptable to consider trie search time O(1). However, this is not entirely correct, because it assumes that the length of the keys is constant. Given N distinct keys over an alphabet of size k, the longest key must have length at least log_k N. Strictly speaking, trie search time is therefore O(log N), which would appear to be the same as that of BST search.

This observation nonetheless does not take away from the benefits of tries, because the real advantage of tries is that they make each comparison operation cheaper: in a BST we perform whole-string comparisons, which take O(m) time in the worst case for keys of length m, while in a trie we compare single characters in constant time. This is not merely a theoretical difference, because as we descend close to the leaves of a BST, the strings we compare often have long common prefixes, making string comparisons slow in practice. Therefore, BST and binary search time is actually O(log² N).
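
Written out (using m for the key length, k for the alphabet size and N for the number of keys, as above), the comparison can be summarised as follows:

 \begin{align*}
   m &\ge \log_k N = \Theta(\log N)                           && \text{length of the longest of $N$ distinct keys}\\
   T_{\text{trie}} &= O(m) = O(\log N)                        && \text{one character inspected per level}\\
   T_{\text{BST}} &= O(\log N) \cdot O(\log N) = O(\log^2 N)  && \text{comparisons} \times \text{cost of each comparison}
 \end{align*}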

A similar argument applies to radix sort, which sorts n bitstrings of length k in O(kn) time; because sorting is more often applied to short keys than to long ones, this factor of k is often neglected.

Applications

As replacement of other data structures

As mentioned, a trie has a number of advantages over binary search trees. A trie can also be used to replace a hash table, over which it has the following advantages:

  • Average lookup speed is theoretically the same, but a trie is faster in practice.
  • Worst-case lookup speed in hash tables is O(N), whereas in a trie it is O(m) for a key of length m.
  • There are no key collisions.
  • Buckets are only necessary if a single key is mapped to more than one value.
  • There is no need to provide a hash function.
  • A trie can provide an alphabetical ordering of the entries by key.

Tries do have some drawbacks as well:

  • It is not easy to represent all keys as strings.
  • They are frequently less space-efficient than hash tables.
  • Unlike hash tables, they are generally not already available in programming language toolkits.

Dictionary representation

A common application of a trie is storing dictionary words, for example in a mobile phone dictionary. Tries allow very fast word lookup, insertion and deletion. Words also share nodes, so one might expect some space savings, although in practice the overhead of the nodes usually cancels this out. Tries are also well suited to implementing approximate-matching algorithms, for example in spell checking software. If only the dictionary words need to be stored, however, without any additional information attached to each word, minimal acyclic deterministic finite automata are much more compact than tries.
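
As an illustration of the mobile-phone use case, the following Python sketch (all names invented for this example) stores words in a nested-dict trie and lists every stored word beginning with a typed prefix:

 END = '$'                                  # marker stored in a node where a word ends

 def add_word(trie, word):
     node = trie
     for ch in word:
         node = node.setdefault(ch, {})
     node[END] = True

 def complete(trie, prefix):
     """Yield every stored word that starts with prefix."""
     node = trie
     for ch in prefix:                      # walk down to the node reached by the prefix
         if ch not in node:
             return                         # no stored word starts with this prefix
         node = node[ch]
     stack = [(node, prefix)]
     while stack:                           # depth-first walk of the subtrie below the prefix
         node, word = stack.pop()
         if END in node:
             yield word
         for ch, child in node.items():
             if ch != END:
                 stack.append((child, word + ch))

 trie = {}
 for w in ["tea", "ted", "ten", "to", "in", "inn"]:
     add_word(trie, w)
 print(sorted(complete(trie, "te")))        # ['tea', 'ted', 'ten']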

Algorithms

The following pseudo-code represents the general algorithm for finding a given string in a trie:

 node = root
 c = first character in key
 while(node != null)
   if(c == null)                                   // every character of the key has been consumed
     return (node has a value) ? found : not found // the key is present only if this node stores a value
   node = node.children[c]                         // follow the edge labelled c; null if there is no such child
   c = next character in key
 return not found                                  // fell off the trie before the key was consumed
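
The same lookup as a runnable Python sketch (assuming nodes that store a children dictionary and an optional value, as in the sketch near the top of the article; the names are invented here):

 class TrieNode:
     def __init__(self):
         self.children = {}
         self.value = None

 def find(root, key):
     """Return the value stored for key, or None if the key is not present."""
     node = root
     for ch in key:
         node = node.children.get(ch)
         if node is None:                   # fell off the trie: the key is absent
             return None
     return node.value                      # still None unless this node stores a value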

Sorting

Lexicographic sorting of a set of keys can be accomplished with a simple trie-based algorithm: insert all of the keys into a trie, then output them by a pre-order (depth-first) traversal of the trie, visiting the children of each node in alphabetical order; the keys are emitted in lexicographic order.

This algorithm is a form of radix sort.
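
A short Python sketch of this (illustrative only; it uses a nested-dict trie with a '$' end-of-key marker, and duplicate keys collapse into one):

 def trie_sort(keys):
     """Sort strings by inserting them into a trie and reading them back in pre-order."""
     trie = {}
     for key in keys:
         node = trie
         for ch in key:
             node = node.setdefault(ch, {})
         node['$'] = True                   # a key ends at this node

     out = []
     def walk(node, prefix):
         if '$' in node:
             out.append(prefix)             # a shorter key is emitted before its extensions
         for ch in sorted(c for c in node if c != '$'):
             walk(node[ch], prefix + ch)
     walk(trie, "")
     return out

 print(trie_sort(["ten", "tea", "to", "inn", "in", "A"]))   # ['A', 'in', 'inn', 'tea', 'ten', 'to']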

A parallel algorithm for sorting N keys based on tries is Ω(1) if there are N processors and the length of the keys has a constant upper bound. There is the potential that the keys might collide by having common prefixes or by being identical to one another, reducing or eliminating the speed advantage of having multiple processors operating in parallel. In general, having a finite number of parallel processors does not change the asymptotic time complexity of an algorithm.

Full text search

A special kind of trie, called a suffix tree, can be used to index all suffixes in a text in order to carry out fast full text searches.
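
A real suffix tree compresses its edges and can be built in linear time; the following naive Python sketch only illustrates the underlying idea by inserting every suffix of the text into an ordinary trie, which takes quadratic space and is practical only for short texts:

 def build_suffix_trie(text):
     """Insert every suffix of text into a nested-dict trie."""
     root = {}
     for i in range(len(text)):
         node = root
         for ch in text[i:]:
             node = node.setdefault(ch, {})
     return root

 def contains(root, pattern):
     """A pattern occurs in the text iff it is a prefix of some suffix."""
     node = root
     for ch in pattern:
         if ch not in node:
             return False
         node = node[ch]
     return True

 trie = build_suffix_trie("banana")
 print(contains(trie, "nan"), contains(trie, "nab"))   # True False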
