Big O Notation: A Comprehensive Guide

Master algorithm complexity analysis with this complete guide

What is Big O Notation?

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, we use it to classify algorithms according to how their run time or space requirements grow as the input size grows.

Think of Big O notation as a way to express the worst-case scenario for your algorithm's performance. It answers the question: "How does my algorithm's performance scale when I double, triple, or 10x my input size?"

Big O notation focuses on the rate of growth rather than exact numbers. We ignore constants and lower-order terms because they become insignificant as the input size grows very large.
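To see why constants and lower-order terms wash out, here is a small illustrative sketch (the numbers and function name are just for demonstration) comparing a 50n term against an n² term:

```javascript
// Illustrative only: count hypothetical "steps" for different growth rates.
// At small n the 50n term dominates; at large n the n² term dwarfs it,
// which is why O(n² + 50n) is simplified to O(n²).
function steps(n) {
  return { linear: 50 * n, quadratic: n * n };
}

const small = steps(10);   // linear: 500, quadratic: 100 — linear dominates
const large = steps(1000); // linear: 50000, quadratic: 1000000 — quadratic dominates
```

The crossover point depends on the constants, but for large enough n the higher-order term always wins.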

Common Time Complexities

Here are the most common time complexities you'll encounter, ordered from best to worst performance:

O(1) - Constant Time

O(1)

The algorithm takes the same amount of time regardless of input size. This is the best possible time complexity.

// Accessing an array element by index
function getFirstElement(arr) {
  return arr[0];  // Always takes the same time
}

// Hash table lookup
function getValue(map, key) {
  return map.get(key);  // O(1) on average; rare hash collisions can make it slower
}

Examples: Array access, hash table lookup, simple arithmetic operations

O(log n) - Logarithmic Time

O(log n)

The algorithm's runtime grows logarithmically with input size. This means doubling the input size only increases the runtime by a constant amount. Very efficient for large datasets.

// Binary search in a sorted array
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

Examples: Binary search, balanced tree operations, divide-and-conquer algorithms

O(n) - Linear Time

O(n)

The runtime grows linearly with the input size. If you double the input, the runtime doubles. This is common for algorithms that need to look at every element once.

// Finding the maximum value in an array
function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}

// Linear search
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

Examples: Linear search, iterating through an array, finding min/max

O(n log n) - Linearithmic Time

O(n log n)

This complexity appears in efficient sorting algorithms. It's worse than linear but much better than quadratic for large inputs.

// Merge sort
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  
  return merge(left, right);
}

function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  
  while (i < left.length && j < right.length) {
    if (left[i] < right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  
  return result.concat(left.slice(i)).concat(right.slice(j));
}

Examples: Merge sort, quick sort (average case), heap sort

O(n²) - Quadratic Time

O(n²)

The runtime is proportional to the square of the input size. Common in algorithms with nested loops. Performance degrades quickly with larger inputs.

// Bubble sort
function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}

// Finding all pairs in an array
function findAllPairs(arr) {
  const pairs = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      pairs.push([arr[i], arr[j]]);
    }
  }
  return pairs;
}

Examples: Bubble sort, selection sort, insertion sort, nested loops

O(2ⁿ) - Exponential Time

O(2ⁿ)

The runtime doubles with each unit increase in input size. This is very inefficient and should be avoided for large inputs. It commonly appears in recursive algorithms that branch into multiple subproblems at each step.

// Naive recursive Fibonacci
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
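For contrast, the exponential blow-up in the naive Fibonacci above comes from recomputing the same subproblems over and over. Caching results (memoization) brings the runtime down to O(n), at the cost of O(n) extra space; a minimal sketch (the function name is ours, not from the examples above):

```javascript
// Memoized Fibonacci: each value is computed once, so runtime is O(n)
// (with O(n) extra space for the cache and the call stack).
function fibonacciMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}
```

With the naive version, fibonacci(40) makes hundreds of millions of calls; the memoized version makes fewer than a hundred.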

// Generating all subsets of a set
function getAllSubsets(arr) {
  if (arr.length === 0) return [[]];
  
  const first = arr[0];
  const rest = arr.slice(1);
  const subsetsWithoutFirst = getAllSubsets(rest);
  const subsetsWithFirst = subsetsWithoutFirst.map(
    subset => [first, ...subset]
  );
  
  return [...subsetsWithoutFirst, ...subsetsWithFirst];
}

Examples: Recursive Fibonacci, generating all subsets, solving Tower of Hanoi

Space Complexity

While time complexity measures how long an algorithm takes to run, space complexity measures how much memory it uses. The same Big O notation applies.

When analyzing space complexity, we consider:

  • Auxiliary space: Extra space used by the algorithm (not including input)
  • Input space: Space required to store the input
  • Total space: Auxiliary space + Input space

O(1) Space - Constant Space

// Uses only a fixed number of variables
function sum(arr) {
  let total = 0;  // O(1) space
  for (let num of arr) {
    total += num;
  }
  return total;
}

O(n) Space - Linear Space

// Creates a new array of size n
function double(arr) {
  const result = [];  // O(n) space
  for (let num of arr) {
    result.push(num * 2);
  }
  return result;
}

O(n) Space - Recursive Call Stack

// Recursive calls use stack space
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1);  // O(n) stack space
}
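By contrast, an iterative version needs no growing call stack, so its auxiliary space is O(1); a minimal sketch (the function name is ours):

```javascript
// Iterative factorial: a single accumulator variable, O(1) auxiliary space
function factorialIterative(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}
```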

Best, Average, and Worst Case

Algorithms can have different performance characteristics depending on the input:

  • Best case: The input that makes the algorithm perform optimally
  • Average case: Expected performance over typical inputs
  • Worst case: The input that makes the algorithm perform most poorly

Strictly speaking, Ω (Omega), Θ (Theta), and O (Big O) denote lower, tight, and upper bounds on growth, and each can describe any case. Informally, though, Ω is often paired with the best case, Θ with the average case, and O with the worst case.

When we talk about Big O notation, we're usually referring to the worst case scenario, as it gives us a guarantee that the algorithm will never perform worse than this.

Example: Quick Sort

  • Best case: O(n log n) - pivot divides array evenly
  • Average case: O(n log n) - typical random input
  • Worst case: O(n²) - already sorted array with poor pivot selection
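To sketch how pivot choice drives these cases, here is a simple quick sort that always uses the last element as pivot (a common textbook choice, not a production implementation):

```javascript
// Quick sort using the last element as pivot.
// On an already sorted array this pivot choice degenerates to O(n²),
// because each partition peels off only one element.
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1];
  const left = [];
  const right = [];
  for (let i = 0; i < arr.length - 1; i++) {
    (arr[i] < pivot ? left : right).push(arr[i]);
  }
  return [...quickSort(left), pivot, ...quickSort(right)];
}
```

Choosing a random pivot (or the median of three elements) makes the worst case far less likely to occur on real inputs.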

Rules for Calculating Big O

  1. Drop constants: O(2n) becomes O(n), O(500) becomes O(1)
  2. Drop non-dominant terms: O(n² + n) becomes O(n²), O(n + log n) becomes O(n)
  3. Different inputs use different variables: If you have two inputs of different sizes, use different variables like O(a + b) or O(a * b)
  4. Sequential steps add: If you do A then B, it's O(A + B)
  5. Nested steps multiply: If you do B for each A, it's O(A * B)
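The "sequential steps add" and "nested steps multiply" rules can be seen side by side in a short sketch (function names are ours, for illustration):

```javascript
// Sequential loops add: O(n) + O(n) = O(2n), which simplifies to O(n)
function sumAndMax(arr) {
  let sum = 0;
  for (const x of arr) sum += x;              // O(n)
  let max = arr[0];
  for (const x of arr) if (x > max) max = x;  // O(n)
  return { sum, max };                        // total: O(n)
}

// Nested loops multiply: O(n) * O(n) = O(n²)
function countPairsSummingTo(arr, target) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {        // outer: O(n)
    for (let j = i + 1; j < arr.length; j++) {  // inner: O(n) per outer step
      if (arr[i] + arr[j] === target) count++;
    }
  }
  return count;                                 // total: O(n²)
}
```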

Practical Tips

  • ✅ Always consider the worst-case scenario when designing systems
  • ✅ For small inputs, a simpler O(n²) algorithm might be faster than a complex O(n log n) one due to lower constants
  • ✅ Space-time tradeoffs are common: you can often use more memory to make things faster
  • ✅ Premature optimization is the root of all evil - make it work first, then optimize if needed
  • ✅ Use our calculator to verify your complexity analysis
  • ✅ Practice with real examples to build intuition

Next Steps

Now that you understand Big O notation, try analyzing some real code! Visit our examples page to see common algorithms analyzed, or use our calculator to analyze your own code. If you have questions, check out our FAQ.