Here are some big-oh values for typical algorithms.

Here is a table of typical search cases.

Type of Search | Big-Oh | Comments |
---|---|---|
Linear search array/vector | O(N) | |
Binary search sorted array/vector | O(log N) | Requires sorted data. |
Search balanced tree | O(log N) | |
Search linked list | O(N) | |
Search hash table | O(1) | Average case; degrades toward O(N) when many keys collide. |
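As a sketch of the first two table rows, the C++ standard library already provides both kinds of search. The wrapper names here are my own, just for illustration:

```cpp
#include <algorithm>
#include <vector>

// Linear search: O(N), works on unsorted data.
bool contains_linear(const std::vector<int>& v, int x) {
    return std::find(v.begin(), v.end(), x) != v.end();
}

// Binary search: O(log N), but the range must already be sorted.
bool contains_binary(const std::vector<int>& sorted, int x) {
    return std::binary_search(sorted.begin(), sorted.end(), x);
}
```

Note that `std::binary_search` silently gives wrong answers on unsorted input; the sortedness requirement from the table is a precondition, not something the library checks.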

Algorithm | array/vector | list |
---|---|---|
access front | O(1) | O(1) |
access back | O(1) | O(1) |
access middle | O(1) | O(N) |
insert at front | O(N) | O(1) |
insert at back | O(1) | O(1) |
insert in middle | O(N) | O(1) |

(List insertion is O(1) only once you already hold an iterator to the position; reaching the middle of a list is itself O(N).)
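The front-insertion row is the one that most often surprises people. A minimal sketch with `std::vector` and `std::list` (the wrapper functions are illustrative, not library names):

```cpp
#include <list>
#include <vector>

// Inserting at the front of a vector must shift every existing
// element one slot to the right: O(N).
std::vector<int> vec_push_front(std::vector<int> v, int x) {
    v.insert(v.begin(), x);   // O(N): shifts all elements
    return v;
}

// Inserting at the front of a list just links a new node at the
// head: O(1), no matter how long the list is.
std::list<int> list_push_front(std::list<int> l, int x) {
    l.push_front(x);          // O(1): relink the head
    return l;
}
```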

Some sorting algorithms show variability in their Big-Oh performance. It is therefore interesting to look at their best, worst, and average performance. For this description "average" means uniformly distributed values. The distribution of real values for any given application may be important in selecting a particular algorithm.

Type of Sort | Best | Worst | Average | Comments |
---|---|---|---|---|
BubbleSort | O(N) | O(N^{2}) | O(N^{2}) | Not a good sort, except with ideal data. |
Selection sort | O(N^{2}) | O(N^{2}) | O(N^{2}) | |
QuickSort | O(N log N) | O(N^{2}) | O(N log N) | Good, but its worst case is O(N^{2}). |
HeapSort | O(N log N) | O(N log N) | O(N log N) | Typically slower than QuickSort, but its worst case is much better. |
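In practice the library sort already resolves the QuickSort/HeapSort tradeoff for you: `std::sort` is typically implemented as introsort, a quicksort that switches to heapsort when recursion gets too deep, keeping quicksort's average speed while guaranteeing O(N log N) in the worst case. A minimal sketch (the helper name is mine):

```cpp
#include <algorithm>
#include <vector>

// Sort a copy using the library sort. std::sort is typically
// introsort: quicksort average behavior, heapsort worst-case
// guarantee of O(N log N).
std::vector<int> sorted_copy(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return v;
}
```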

Example. I once had to sort a large array of numbers. The values were almost always already in order, and even when they weren't, typically only one value was out of place; only rarely were the values completely disorganized. I used a bubble sort because it ran in roughly O(N) time on my "average" data. This was many years ago, when CPUs were about 1000 times slower. Today I would simply use the library sort for that amount of data, because the difference in execution time would probably go unnoticed. However, there are always data sets so large that the choice of algorithm really matters.
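The property that makes nearly-sorted input cheap is the early-exit check: if a full pass makes no swaps, the data is sorted and the loop stops. A sketch of that variant (not the original code):

```cpp
#include <utility>
#include <vector>

// Bubble sort with an early-exit flag. On already-sorted input it
// makes a single O(N) pass and stops; the worst case is still O(N^2).
void bubble_sort(std::vector<int>& v) {
    bool swapped = true;
    for (std::size_t n = v.size(); swapped && n > 1; --n) {
        swapped = false;
        for (std::size_t i = 0; i + 1 < n; ++i) {
            if (v[i] > v[i + 1]) {
                std::swap(v[i], v[i + 1]);
                swapped = true;  // at least one pair was out of order
            }
        }
    }
}
```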

- This should name the appropriate library structures and functions.
- This should address *reasons* for the particular performance.
- Although some of these algorithms have the same big-oh characteristics, they may differ by a factor of three (or more) in practical implementations. This should be addressed. Remember that big-oh notation ignores any constant overhead or proportionality factor; these can be substantial and can't be ignored in practical implementations.
- Discuss tradeoffs between memory and time.
- Other considerations, such as the effect on cache and virtual memory (locality of reference, modifications), maintainability, serializability, portability, external storage, ... should be discussed.

Well, I'll fill things in here as I have time, but don't hold your breath.