Let's start with a fundamental question: what is Big-O notation? What was the first thing that popped into your head?
If you're like me and are just now diving deep into the world of Computer Science, you'd be surprised by how fast a computer can execute thousands, if not millions, of lines of code in a single second.
When you write an algorithm, even a simple one that sorts data, you are already using the concept of Big-O notation, whether you realise it or not. In simple terms, Big-O describes how hard the computer has to work to complete the set of instructions issued by you, the programmer.
Formally defining Big-O notation in mathematical terms looks like this:
f(N) = O(g(N)) if there exist positive constants c and N* such that f(N) <= c * g(N) for all N >= N*, where N is the number of inputs.
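To make the definition concrete, here is a minimal sketch with hand-picked constants (f, g, c, and N* below are illustrative choices, not anything from a library) checking that f(N) = 3N + 2 is O(N), using c = 4 and N* = 2:
# Minimal sketch: verify f(N) <= c * g(N) for all checked N >= N*
# f(N) = 3N + 2, g(N) = N, with hand-picked constants c = 4, N* = 2
f = lambda n: 3 * n + 2
g = lambda n: n
c, n_star = 4, 2
assert all(f(n) <= c * g(n) for n in range(n_star, 1000))
print("f(N) = O(N) holds over the checked range")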
As we increase the complexity of the program, we will see an increase in the time taken to finish (time complexity), an increase in memory utilisation (space complexity), or both.
Understanding Big-O Notation
O(1) — Constant Time
Often considered the best complexity class, since the running time does not vary with the number of inputs: a constant-time algorithm always takes the same amount of time to execute. Accessing a value in an indexable array is the classic example.
arr = [1,5,2,6,8,3,9]
val = arr[3]
print(val)
#Output: 6 - O(1)
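Dictionary lookups are another common constant-time operation; as a minimal sketch (the ages mapping below is a made-up example), fetching a value by key from a Python dict is average-case O(1):
ages = {"alice": 30, "bob": 25, "carol": 41}
print(ages["bob"])
# Output: 25 - average-case O(1), the dict hashes the key instead of scanning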
O(n) — Linear Time
An algorithm has linear complexity if its execution time grows linearly with the number of inputs: as the number of inputs increases, so does the time taken to complete. Scanning through an array element by element is a linear-time operation.
arr = [1,5,2,6,8,3,9]
find_num = 3
for num in arr:
    if find_num == num:
        print(f"Found the number {find_num} in the array")
# Output: Found the number 3 in the array - O(n)
O(log n) — Logarithmic Time
An algorithm has logarithmic time complexity if the time it takes to run is proportional to the logarithm of the input size n, typically because it halves the remaining work at every step. Binary search is the standard example.
def binarySearch(array, item):
    # Note: binary search requires `array` to be sorted
    first = 0
    last = len(array) - 1
    found = False
    while first <= last and not found:
        midpoint = (first + last) // 2  # halve the search space each pass
        if array[midpoint] == item:
            found = True
        elif item < array[midpoint]:
            last = midpoint - 1
        else:
            first = midpoint + 1
    return found
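As a quick usage sketch, reusing the array from the earlier examples (sorted first, since binary search needs sorted input):
arr = sorted([1, 5, 2, 6, 8, 3, 9])  # binary search requires sorted input
print(binarySearch(arr, 6))  # True  - found in O(log n) comparisons
print(binarySearch(arr, 4))  # False - not present in the array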
O(n^2) — Quadratic Time
An algorithm has quadratic time complexity if the time to execute is proportional to the square of the input size.
def quadratic(items):
    for item in items:
        for item2 in items:  # n inner iterations for each of the n outer iterations
            print(item, ' ', item2)

quadratic([4, 5, 6, 8])
# O(n^2): prints every pair of items, n * n lines in total
Simplifying Big-O
Often, our algorithms cannot be reduced to a single simple expression like O(n log n) and are instead a combination of several terms added together. To simplify the expression, we keep only the dominant term. For example, if a function has a complexity of O(2^n) + O(n), then passing in 10 items gives 2^10 + 10 = 1034. The O(2^n) term contributes the overwhelming majority of that total, so we can simplify the expression to just O(2^n).
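A small sketch makes the dominance concrete; as n grows, the 2^n term accounts for essentially all of the total:
# Sketch: how much of 2**n + n comes from the 2**n term alone
for n in (5, 10, 20):
    total = 2**n + n
    print(n, total, f"{2**n / total:.2%}")
# At n = 10: total is 1034, with 2^10 = 1024 contributing ~99% of it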
Terms related to Big-O
Here is a list of terms that are often associated with Big-O:
- Big O : (O) describes the upper bound of the complexity.
- Omega : (Ω) describes the lower bound of the complexity.
- Theta : (Θ) describes the exact bound of the complexity.
- Little O : (o) describes a strict upper bound, one that excludes the exact bound.
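As a quick illustration of how these relate: f(n) = 3n^2 + n is O(n^2) and Ω(n^2), and therefore Θ(n^2); it is also o(n^3), but not o(n^2), since the n^2 bound is exact.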
This was just a brief introduction to Big-O. There is much more to it than writing well-optimized code. Click here for a Big-O Cheat-Sheet.