The Big-O Analysis for an Algorithm

Today, we will look at Big-O analysis, a much-discussed topic in the software programming realm.

How to do Big-O Analysis
The general process of Big-O run time analysis is as follows:
1. Figure out what the input is and what n represents
2. Express the number of operations the algorithm performs in terms of n
3. Eliminate all but the highest order terms
4. Remove all constant factors
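The four steps above can be walked through on a concrete algorithm. Here is a minimal sketch using a linear search (an illustrative example, not from the original text), with the analysis spelled out in comments:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):  # loop body runs up to n times
        if value == target:            # one comparison per iteration
            return i
    return -1

# Step 1: n is the length of items.
# Step 2: the loop performs roughly c * n operations in the worst case
#         (one comparison plus constant loop bookkeeping per element).
# Step 3: c * n has only one term, so there is nothing to eliminate.
# Step 4: dropping the constant factor c leaves O(n).
```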

Which Algorithm Is Better?

The fastest possible running time for any algorithm is O(1), commonly referred to as constant running time. An algorithm with constant running time always takes the same amount of time to execute, regardless of the input size. This is the ideal run time for an algorithm, but it’s rarely achievable.
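As a quick illustration of constant running time (a hypothetical example, not from the original text), returning the first element of a list takes the same single operation whether the list holds ten items or ten million:

```python
def get_first(items):
    """Return the first element of a non-empty list.
    One indexing operation regardless of len(items), so this is O(1)."""
    return items[0]
```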
The performance of most algorithms depends on n, the size of the input. The algorithms can be classified as follows from best-to-worst performance:
O(log n) — An algorithm is said to be logarithmic if its running time increases logarithmically in proportion to the input size.
O(n) — A linear algorithm’s running time increases in direct proportion to the input size.
O(n log n) — A superlinear algorithm is midway between a linear algorithm and a polynomial algorithm.
O(n^c) — A polynomial algorithm grows quickly based on the size of the input.
O(c^n) — An exponential algorithm grows even faster than a polynomial algorithm.
O(n!) — A factorial algorithm grows the fastest and becomes quickly unusable for even small values of n.
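To make the logarithmic class concrete, here is a minimal binary search sketch (an illustrative example, not from the original text). Each iteration halves the remaining search range, so at most about log2(n) + 1 iterations run, giving O(log n) time on a sorted input:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.
    The search range is halved every iteration: O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the current range
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1
```

Contrast this with the O(n) linear search: for a million sorted items, binary search needs at most about 20 comparisons instead of up to a million.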
The run times of different orders of algorithms separate rapidly as n gets larger. Consider the run time for each of these algorithm classes with n = 10:
log 10 = 1
10 = 10
10 log 10 = 10
10^2 = 100
2^10 = 1,024
10! = 3,628,800
Now double it to n = 20:
log 20 = 1.30
20 = 20
20 log 20 = 26.02
20^2 = 400
2^20 = 1,048,576
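The figures above can be reproduced with a short script (using base-10 logarithms, which is what the numbers shown assume):

```python
import math

def growth(n):
    """Evaluate each complexity class at a given n (log base 10)."""
    return {
        "log n":   math.log10(n),
        "n":       n,
        "n log n": n * math.log10(n),
        "n^2":     n ** 2,
        "2^n":     2 ** n,
        "n!":      math.factorial(n),
    }

for n in (10, 20):
    print(n, growth(n))
```

Note how doubling n barely moves the logarithmic figure but multiplies the exponential one by over a thousand.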

Algorithms that run in constant, logarithmic, linear, or superlinear time are preferred.
Memory Footprint Analysis
Runtime analysis is not the only relevant metric for performance. A common request from clients is to analyze how much memory a program uses. This is sometimes referred to as the memory footprint of the application. Memory use is sometimes as important as running time, particularly in constrained environments such as mobile devices.
In some cases, you will be asked about the memory usage of an algorithm. For this, the approach is to express the amount of memory required in terms of n, the size of the input, analogous to the preceding discussion of Big-O runtime analysis. The difference is that instead of determining how many operations are required for each item of input, you determine the amount of storage required for each item. This implies that you should at least know how many bytes each primitive data type uses; from there you can give a rough estimate of the memory footprint.
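As a minimal sketch of this kind of estimate, consider an array of n 64-bit integers. The byte sizes here are illustrative assumptions (typical fixed-width sizes), not measurements of any particular runtime:

```python
# Illustrative per-item sizes in bytes (assumed typical fixed-width
# sizes, not measured from a specific language runtime).
BYTES_PER_INT64 = 8

def array_footprint_bytes(n, bytes_per_item=BYTES_PER_INT64):
    """Estimate storage for an array of n fixed-size items.
    One slot per item, so the footprint grows as O(n)."""
    return n * bytes_per_item

# One million 64-bit integers is roughly 8 MB of raw storage.
print(array_footprint_bytes(1_000_000))  # 8000000
```

The same per-item reasoning extends to other structures: a structure storing two 64-bit fields per input item would need about 16n bytes, which is still O(n) memory.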