
The Big O Notation


9 min read


Your code may be too slow simply because you’ve chosen an inadequate approach. When coding, you should understand how algorithms perform as you give them more and more data to work with.

Time and space complexity are two important metrics that help us analyze the efficiency of an algorithm. Time complexity measures how the algorithm’s running time increases with the input size, while space complexity measures how the algorithm’s memory usage increases with the input size.

That’s the Big O.

What exactly is this Big O Notation?

Big O notation is a mathematical representation that describes the upper bound of the growth rate of an algorithm’s time or space complexity with respect to its input size. It provides a simplified way to analyze the efficiency of an algorithm in the worst-case scenario. The “O” in Big O stands for “order of,” referring to the order of growth of the dominant term that describes the algorithm’s performance.

To put it simply: It’s an inexact indicator of how your code will perform in terms of time and memory usage as the input data increases.

Why inexact?

In Big O notation, we ignore constant factors and lower-order terms, focusing only on the dominant term that determines the growth rate. This notation allows us to understand and compare the performance of different algorithms by quantifying how their runtime or space usage grows in relation to the input size. While the Big O notation provides a useful approximation, it is important to recognize that it is inexact.
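As a quick illustration with a made-up operation count: if an algorithm performs roughly 3n^2 + 5n + 20 operations for an input of size n, the 3n^2 term dominates as n grows, so we simply write O(n^2), dropping the constant factor 3 and the lower-order terms 5n and 20.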

So the next question could be: Why does this inexactness matter in the analysis of algorithms?

Theoretical Analysis vs. Real-world Performance

One of the main reasons why the Big O notation is inexact is that it represents a theoretical analysis of an algorithm’s efficiency, rather than its real-world performance. The Big O notation provides an upper bound on the growth rate of an algorithm, indicating the worst-case scenario. However, real-world performance can be influenced by various factors, such as hardware limitations, compiler optimizations, and input patterns. As a result, the actual runtime or space usage of an algorithm may (and usually will) differ from its theoretical analysis.

Constant Factors and Lower Order Terms

Another aspect that contributes to the inexactness of the Big O notation is that it ignores constant factors and lower-order terms. When determining the Big O complexity of an algorithm, we focus on the dominant term that grows fastest as the input size increases. This approach simplifies the analysis and enables us to compare algorithms more easily. However, it also means that the Big O notation overlooks the impact of constant factors and lower-order terms, which can affect the actual performance of an algorithm.

There is an n for which an O(n) operation can be faster than an O(1) one. Go and compare Set and Array lookups on 30 items in JavaScript, then try it with 1,500 items or more; a sketch of such a comparison is shown below.
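Here is a rough sketch in TypeScript (the benchmarkLookups helper is made up for illustration, and the exact timings depend on your JavaScript engine and hardware):

// TypeScript: Array.prototype.includes is O(n), Set.prototype.has is O(1) on average
function benchmarkLookups(size: number, lookups = 100_000): void {
	const values = Array.from({ length: size }, (_, i) => i)
	const set = new Set(values)

	// Time repeated lookups against the plain array
	const arrayStart = performance.now()
	for (let i = 0; i < lookups; i++) {
		values.includes(i % size)
	}
	const arrayTime = performance.now() - arrayStart

	// Time the same lookups against the Set
	const setStart = performance.now()
	for (let i = 0; i < lookups; i++) {
		set.has(i % size)
	}
	const setTime = performance.now() - setStart

	console.log(`n=${size}: Array ${arrayTime.toFixed(2)}ms, Set ${setTime.toFixed(2)}ms`)
}

benchmarkLookups(30)   // on tiny inputs the plain array lookup can win
benchmarkLookups(1500) // on larger inputs the Set lookup should pull ahead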

Best-case, Average-case, and Worst-case Scenarios

The Big O notation typically represents the worst-case scenario for an algorithm, which is the largest possible growth rate of the performance measure. However, different algorithms may exhibit different performance characteristics in different scenarios. For example, an algorithm may have a different complexity in the best-case or average-case scenarios: quicksort runs in O(n log n) on average but degrades to O(n^2) in the worst case. By focusing solely on the worst-case scenario, the Big O notation fails to capture the full range of an algorithm’s performance and can be misleading in certain situations.

Practical Implications and Real-world Trade-offs

While the inexactness of the Big O notation might seem like a limitation, you should understand its significance in the analysis of algorithms. Real-world trade-offs often involve a balance between different performance measures, such as runtime, memory usage, and scalability. The Big O notation provides a high-level overview that lets your coworkers compare algorithms and rewrite your shitty code based on its expected growth rate.

However, in highly scalable projects (which most of us will not participate in), it can be crucial to complement Big O analysis with real-world benchmarks and performance testing to ensure optimal algorithm selection in practical scenarios.

Common types of complexities

(These complexities apply to both time and space; the explanations below are phrased in terms of time.)

  1. O(1) - Constant Complexity

    Algorithms with constant time complexity have execution times that remain constant, regardless of the input size. Accessing an element in an array or performing a simple arithmetic operation are examples of O(1) operations.

  2. O(log n) - Logarithmic Complexity

    Logarithmic time complexity indicates that the algorithm’s performance grows logarithmically with the input size. Efficient search algorithms like binary search often exhibit O(log n) complexity.

  3. O(n) - Linear Complexity

    Algorithms with linear time complexity have execution times that grow linearly with the input size. Iterating through an array is a common example of O(n) complexity.

  4. O(n log n) - Linearithmic Complexity

    This complexity is often found in efficient sorting algorithms like Merge Sort and Heap Sort. It combines linear and logarithmic growth.

  5. O(n^2) - Quadratic Complexity

    Quadratic time complexity signifies that the execution time grows with the square of the input size. Nested loops are a typical scenario leading to O(n^2) complexity.

  6. O(2^n) - Exponential Complexity

    Algorithms with exponential time complexity should generally be avoided, as their running time grows rapidly with even moderately sized inputs. The naive recursive computation of the Fibonacci sequence is one of the best-known examples of exponential growth (see the sketch after this list).
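To make that last entry concrete, here is a minimal sketch of the naive recursive Fibonacci computation, whose running time grows roughly as O(2^n) because each call spawns two more calls:

// TypeScript O(2^n)
// Naive recursion: fib(n) calls fib(n - 1) and fib(n - 2),
// so the number of calls roughly doubles with every increase in n.
function fib(n: number): number {
	if (n <= 1) {
		return n
	}
	return fib(n - 1) + fib(n - 2)
}

// fib(10) returns instantly, while fib(45) already takes a noticeable amount of time.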

Analyzing Complexity - examples

Check out these small examples; you’ll get a better grip on how to analyze algorithm complexity and see how performance changes in a few lines of code. That way, you can choose the right algorithm for different kinds of problems with confidence.

1. The Maximum Element

Let’s consider the problem of finding the maximum element in an array as an example to demonstrate the concept of analyzing complexity.

Algorithm

To solve this problem, we can use a simple algorithm that iterates through the array and keeps track of the maximum element found so far. Here’s the algorithm:

  1. Initialize a variable max to the first element of the array.
  2. Iterate through the remaining elements of the array.
  3. For each element, compare it with the current max value. If the element is greater than max, update max to the element. Otherwise, continue to the next element.
  4. After iterating through all the elements, the max variable will contain the maximum element in the array.
// TypeScript O(n)
function maximumElement(arr: number[]): number {
	let max = arr[0]
	for (const element of arr) {
		if (element > max) {
			max = element
		}
	}
	return max
}
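A quick usage example with a made-up input array:

console.log(maximumElement([3, 7, 2, 9, 4])) // 9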

Complexity Analysis

To analyze the complexity of this algorithm, we can consider the number of comparisons it makes.

  • In the worst-case scenario, where the maximum element is located at the last position of the array, the algorithm needs to compare each element with the current max value. This results in n-1 comparisons, where n is the size of the array.
  • Even in the best-case scenario, where the maximum element is located at the first position, the algorithm still performs n-1 comparisons, because every element has to be examined.

Therefore, we can conclude that the complexity of finding the maximum element in an array using this algorithm is O(n) in both the best and worst cases, where n is the size of the array.

Now, let’s consider the binary search algorithm as another example to illustrate complexity analysis.

Algorithm

Binary search is a divide and conquer algorithm used to find a specific element in a sorted array efficiently. Here’s the algorithm:

  1. Initialize low as the first index of the array and high as the last index.
  2. While low is less than or equal to high: Calculate the middle index as the floor of (low + high) / 2. If the middle element is equal to the target element, return its index. If the middle element is greater than the target element, update high to middle - 1. If the middle element is less than the target element, update low to middle + 1.
  3. If the target element is not found after the while loop, return -1.
// TypeScript O(log n)
function binarySearch(arr: number[], target: number): number {
	let low = 0
	let high = arr.length - 1
	while (low <= high) {
		const mid = Math.floor((low + high) / 2)
		if (arr[mid] === target) {
			return mid
		} else if (arr[mid] > target) {
			high = mid - 1
		} else {
			low = mid + 1
		}
	}
	return -1
}
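A quick usage example (note that the input array must already be sorted):

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)) // returns 3, the index of 7
console.log(binarySearch([1, 3, 5, 7, 9, 11], 4)) // returns -1, not found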

Complexity Analysis

To analyze the complexity of binary search, we can consider the number of comparisons it makes.

  • In each iteration, the algorithm compares the target element with the middle element of the subarray it is currently considering.
  • Since the size of the subarray being considered is halved after each iteration, the number of comparisons can be represented by log₂(n), where n is the size of the array.
  • Therefore, the complexity of binary search is O(log n).
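To put that in perspective: for a sorted array of 1,000,000 elements, binary search needs at most about 20 comparisons, since 2^20 = 1,048,576 already exceeds 1,000,000, while a linear scan could need up to 1,000,000 comparisons.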

Practical Exercises

Let’s put theory into practice through a few exercises that will help you build practical skills in the subject.

  • Pick a function or algorithm you have written, determine its time complexity, and explain step by step how you arrived at it.

  • Compare the time complexities of two different algorithms and explain why determining the exact time complexity is often difficult. Provide real-world examples where knowing the time complexity can make a significant difference.

  • Choose an algorithm from a problem-solving domain (such as searching, sorting, or graph traversal) and analyze its time and space complexity. Provide examples of worst-case, best-case, and average-case scenarios, and discuss the implications of the complexity analysis on the algorithm’s performance.

Send your solution to contact@exposecode.com with subject “The Big O Notation Solution”.

Wrap-up

  • Understanding time and space complexity is essential for analyzing the efficiency of algorithms. It allows us to evaluate how the input size affects the number of operations required and the amount of memory used. Additionally, it helps us identify bottlenecks and optimize our code for better performance.
  • The Big O notation provides a way to express the time and space complexity of an algorithm in terms of the input size. However, it’s important to note that it’s an inexact measure and only provides an upper bound on the growth rate. Nonetheless, it gives us a valuable tool for comparing and choosing algorithms based on their efficiency.
  • Analyzing complexity through examples helps us gain a practical understanding of the Big O notation. By breaking down algorithms and examining their individual steps, we can determine their time and space complexities. This enables us to make informed decisions when selecting algorithms for different problem scenarios.


