Runtime analysis is, in the simplest terms, a way of measuring how fast an algorithm executes. For this, we can use Big O notation, written O(f(n)), where f(n) is a function that describes how the running time grows in the worst case. f(n) may be one of several different functions, including c (constant), n (linear), n^2 (quadratic), log n, 2^n, and many more.
Having a runtime of O(c) (usually written O(1)) is the best possible outcome, since it means the time the algorithm takes to execute stays the same no matter what arguments are passed in. One example:
def constant(s):
    # A fixed number of steps, no matter how large s is
    if s is not None:
        print(s)
where the function's execution time does not depend on s.
Runtimes with n raised to some power are usually determined by the number of for loops nested within each other. In these cases, the running time can be directly proportional to the input size (O(n)), quadratic (O(n^2)), cubic (O(n^3)), or even worse, depending on what f(n) is; a sketch of the nested-loop pattern appears below. Of course, the higher the power, the slower and more inefficient the algorithm is. An interesting runtime is O(log n). In class, we discussed a search function for a binary search tree with runtime O(log2 n): it halves the size of the remaining input with each step, which reduces the impact of large inputs on execution time (see the second sketch below). In contrast to O(c) being the best outcome, O(n^n) is the worst of the runtimes we saw, since it grows faster than all of the others.
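Each extra level of nesting multiplies the work by another factor of n. As a rough illustration, here is a minimal sketch of the quadratic case (the print_pairs name and the pair-printing task are invented just for this example):

def print_pairs(items):
    # The outer loop runs n times, and the inner loop runs n times for
    # each outer pass, so the body executes n * n times in total: O(n^2)
    for a in items:
        for b in items:
            print(a, b)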
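And here is a minimal sketch of the kind of binary search tree lookup we discussed, assuming a simple Node class with value, left, and right attributes (the names are my own) and a reasonably balanced tree; on an unbalanced tree the bound would be worse:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bst_search(node, target):
    # Each comparison rules out roughly half of the remaining (balanced)
    # tree, so n nodes take about log2(n) comparisons: O(log n)
    if node is None:
        return None  # target is not in the tree
    if target == node.value:
        return node
    if target < node.value:
        return bst_search(node.left, target)
    return bst_search(node.right, target)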
As a final note, I found it weird (but understandable) that only the fastest-growing term of f(n) matters in determining the overall efficiency of the algorithm. For example, an algorithm with runtime 3n^4 + 7n^3 + 1 would be O(n^4); the constants and lower-order terms get dropped. My only question is whether there is some situation in which the other terms actually make a difference when comparing functions. I guess in the end only the fastest-growing term has a dramatic impact on execution speed, but I would like to know more about this topic in the future.
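Playing with actual numbers hints at an answer: for small n the lower-order terms do contribute a noticeable share of the total, but the dominant term takes over quickly. Here is a quick sanity check I sketched (the function name f is just made up for this example):

def f(n):
    # The full runtime expression from the example above
    return 3 * n**4 + 7 * n**3 + 1

for n in (10, 100, 1000):
    share = (3 * n**4) / f(n)  # fraction of the total due to the n^4 term
    print(f"n={n}: the n^4 term accounts for {share:.1%} of f(n)")

At n = 10 the n^4 term is only about 81% of the total, so the other terms still matter a little, but by n = 1000 it is already about 99.8%, which is why Big O can afford to drop everything else.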