Think about a trie used to parse words. When we search for a word, we start from the root and walk down the tree, matching each character of the word in turn.
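As a sketch of that idea (the class and method names here are my own, just for illustration): every lookup begins at the root and follows one child edge per character, so the work is proportional to the word's length.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # maps a character to the child TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def search(self, word):
        # Start at the root; there is no way to jump to a node directly.
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word
```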
For a simple tree, it's the same. Say I want to know the pre and post values of the 'a' node vs. the 'b' node, while all you have is the tree and a reference to its root node. Then you have to walk down the tree to find where 'a' or 'b' is. Retrieving pre and post values in constant time doesn't mean you already have a reference to that particular node; only after you have found the node does retrieval become constant time.
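To make the two costs concrete, here is a minimal sketch (all names are mine, not from any particular assignment): assigning pre/post numbers by DFS, then an O(n) `find` that starts at the root. Once `find` hands you the node, reading `.pre` and `.post` is a constant-time field access.

```python
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.pre = self.post = None

def assign_pre_post(root):
    # DFS numbering: stamp pre on entry, post on exit.
    counter = [0]
    def dfs(node):
        counter[0] += 1
        node.pre = counter[0]
        for child in node.children:
            dfs(child)
        counter[0] += 1
        node.post = counter[0]
    dfs(root)

def find(root, label):
    # O(n): we only hold a reference to the root, so we must search.
    if root.label == label:
        return root
    for child in root.children:
        hit = find(child, label)
        if hit is not None:
            return hit
    return None
```

After `node = find(root, 'a')`, the expressions `node.pre` and `node.post` cost O(1); the O(n) part was locating the node.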
It's also like giving you a URL and asking for its indegree. You have to compare this URL against your graph structure, and only when you find a match can you read off the indegree.
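A hypothetical version of that scan, with the graph stored as an adjacency-list dictionary (my own representation, assumed for the example): without a direct handle on the node, computing the indegree means comparing the URL against every edge.

```python
def indegree(graph, url):
    # graph: dict mapping each page URL to the list of URLs it links to.
    # No node reference is given, so we scan every adjacency list
    # and count the edges that point at `url`.
    count = 0
    for src, targets in graph.items():
        count += sum(1 for t in targets if t == url)
    return count
```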
When you build a graph or a tree, only one node is known to the user, and that is where every search starts. I don't think references to every node are available. If they were, we wouldn't need to talk about O(f(n)) at all: if we had references to all objects, why would we use a BST instead of a linked list?
Also, let me mention the possibility of using a hashtable again. If a hashtable is used, then we can quickly get the reference to an object, so find becomes constant time. In all other cases, every search must start at the beginning of the data structure and walk through it to do the find.
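One way that could look (a sketch under my own assumptions, not part of the original question): build a hashtable index from label to node once, then later lookups skip the tree walk entirely and run in O(1) average time.

```python
class Node:
    def __init__(self, label):
        self.label = label
        self.children = []

def build_index(root):
    # One O(n) traversal up front; afterwards index[label] returns
    # the node reference directly, with no search through the tree.
    index = {}
    stack = [root]
    while stack:
        node = stack.pop()
        index[node.label] = node
        stack.extend(node.children)
    return index
```

The trade-off is extra memory for the table and the cost of keeping it in sync if the tree changes.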