Decoding Big O: Analysing Time and Space Complexity with Examples in C#, JavaScript, and Python


Efficiency matters. Whether you’re optimising a search algorithm, crafting a game engine, or designing a web application, understanding Big O notation is the key to writing scalable, performant code. Big O analysis helps you quantify how your code behaves as the size of the input grows, both in terms of time and space (meaning memory usage).


Big O notation was introduced by the German mathematician Paul Bachmann in the late 19th century and later popularised by Edmund Landau. It was originally part of number theory and was later adopted into computer science for algorithm analysis. Big O gets its name from the letter "O," which stands for "Order" in mathematics: it describes the order of growth of a function as the input grows larger, focusing on the term that dominates all others. The "Big" emphasises that we are describing an upper bound. In algorithm analysis, Big O describes the upper bound of an algorithm's growth rate as a function of input size, telling you how the runtime or memory usage scales in the worst case. For example, an algorithm that performs 3n² + 5n + 2 operations is O(n²): as n grows, the n² term dominates, and constant factors are dropped.

Key Terminology:

  • Input Size (n): The size of the input data
  • Time Complexity: How the runtime grows with n
  • Space Complexity: How memory usage grows with n

Common Big O Classifications

These are common complexities, from most efficient to least efficient:

Big O          | Name                                | Description
O(1)           | Constant Complexity                 | Performance doesn’t depend on input size.
O(log n)       | Logarithmic Complexity              | Divides the problem size with each step.
O(n)           | Linear Complexity                   | Grows proportionally with the input size.
O(n log n)     | Log-Linear Complexity               | Growth is proportional to n times log n; often seen in divide-and-conquer algorithms that repeatedly divide a problem into smaller subproblems, solve them, and then combine the solutions.
O(n²) or O(nᵏ) | Quadratic or Polynomial Complexity  | Nested loops—performance scales with n² (or nᵏ).
O(2ⁿ)          | Exponential Complexity              | Grows exponentially—terrible for large inputs.
O(n!)          | Factorial Complexity                | Explores all arrangements or sequences.
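
To make these growth rates concrete, here is a rough Python sketch (illustrative only) that prints approximate step counts for a few input sizes:

import math

# Rough step counts for each complexity class at a few input sizes
for n in [10, 100, 1000]:
    print(
        f"n={n:>4}: "
        f"log n ~ {math.log2(n):.0f}, "
        f"n log n ~ {n * math.log2(n):.0f}, "
        f"n^2 = {n ** 2}, "
        f"2^n ~ 10^{n * math.log10(2):.0f}"
    )

By n = 1000, an O(2ⁿ) algorithm would need roughly 10³⁰¹ steps, while an O(n log n) one needs about ten thousand.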

Analysing Time Complexity

Constant Time

With a constant-time algorithm, the execution time remains the same regardless of input size.

Example: C# (Accessing an Array Element)

int[] numbers = { 10, 20, 30, 40 };
Console.WriteLine(numbers[2]); // Output: 30

Accessing an element by index is O(1), as it requires a single memory lookup.
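
Array indexing isn't the only O(1) operation: hash-based lookups, such as reading a key from a Python dictionary, are also constant time on average. A quick illustrative sketch:

ages = {"Ada": 36, "Alan": 41, "Grace": 85}
print(ages["Alan"])  # Average-case O(1) hash lookup; output: 41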

Logarithmic Time

The next most efficient case happens when the runtime grows logarithmically, typically in divide-and-conquer algorithms.

Example: JavaScript (Binary Search)

function binarySearch(arr, target) {
    let left = 0, right = arr.length - 1;
    while (left <= right) {
        let mid = Math.floor((left + right) / 2);
        if (arr[mid] === target) return mid;
        else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}
console.log(binarySearch([1, 2, 3, 4, 5], 3)); // Output: 2

Each iteration halves the search space, making this O(log n): even for an array of a million elements, at most around 20 comparisons are needed, since log₂ 1,000,000 ≈ 20.

Linear Time

Example: Python (Iterating Over a List)

numbers = [1, 2, 3, 4, 5]
for num in numbers:
    print(num)

The loop visits each element once, so the complexity is O(n).

Log-Linear Time

This one calls for a more elaborate example. First, let's break down where the two factors come from:

  • n : This represents the linear work required to handle the elements in each step
  • log n : This comes from the recursive division of the problem into smaller subproblems. For example, dividing an array into halves repeatedly results in a logarithmic number of divisions.

Example: JavaScript (Sorting an Array with Merge Sort)

function mergeSort(arr) {
    // Base case: an array with 1 or 0 elements is already sorted
    if (arr.length <= 1) {
        return arr;
    }

    // Divide the array into two halves
    const mid = Math.floor(arr.length / 2);
    const left = arr.slice(0, mid);
    const right = arr.slice(mid);

    // Recursively sort both halves and merge them
    return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
    let result = [];
    let leftIndex = 0;
    let rightIndex = 0;

    // Compare elements from left and right arrays, adding the smallest to the result
    while (leftIndex < left.length && rightIndex < right.length) {
        if (left[leftIndex] < right[rightIndex]) {
            result.push(left[leftIndex]);
            leftIndex++;
        } else {
            result.push(right[rightIndex]);
            rightIndex++;
        }
    }

    // Add any remaining elements from the left and right arrays
    return result.concat(left.slice(leftIndex)).concat(right.slice(rightIndex));
}

// Example usage:
const array = [38, 27, 43, 3, 9, 82, 10];
console.log("Unsorted Array:", array);
const sortedArray = mergeSort(array);
console.log("Sorted Array:", sortedArray);

The array is repeatedly divided into halves until the subarrays contain a single element (base case), which takes O(log n) levels of recursion. The merge function combines two sorted arrays into a single sorted array by comparing elements (O(n) work per level). This process repeats as the recursive calls return, merging larger and larger sorted subarrays until the entire array is sorted. Formally, the running time satisfies the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n).

Quadratic or Polynomial Time

In the simplest and most obvious examples, nested loops lead to quadratic growth.

Example: C# (Finding Duplicate Pairs)

int[] numbers = { 1, 2, 3, 1 };
for (int i = 0; i < numbers.Length; i++) {
    for (int j = i + 1; j < numbers.Length; j++) {
        if (numbers[i] == numbers[j]) {
            Console.WriteLine($"Duplicate: {numbers[i]}");
        }
    }
}

The outer loop runs n times, and for each iteration i, the inner loop runs n - i - 1 times. The total number of comparisons is (n - 1) + (n - 2) + … + 1 = n(n - 1)/2, which is O(n²).
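
For contrast, the same duplicates can be found in O(n) time by trading memory for speed. A rough Python sketch of the idea: a set offers average-case O(1) membership tests, so a single pass suffices.

numbers = [1, 2, 3, 1]
seen = set()
for num in numbers:
    if num in seen:  # average-case O(1) membership test
        print(f"Duplicate: {num}")
    seen.add(num)

The cost is O(n) extra space for the set, a trade-off the space-complexity sections below make explicit.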

Exponential Time

Generating all subsets (the power set) of a given set is a common example of exponential time complexity, as it involves exploring all combinations of a set’s elements.

Example: Python (Generating the Power Set)

def generate_subsets(nums):
    def helper(index, current):
        # Base case: if we've considered all elements
        if index == len(nums):
            result.append(current[:])  # Add a copy of the current subset
            return

        # Exclude the current element
        helper(index + 1, current)

        # Include the current element
        current.append(nums[index])
        helper(index + 1, current)
        current.pop()  # Backtrack to explore other combinations

    result = []
    helper(0, [])
    return result

# Example usage
input_set = [1, 2, 3]
subsets = generate_subsets(input_set)

print("Power Set:")
for subset in subsets:
    print(subset)

For a set of n elements, there are 2ⁿ subsets. Each subset corresponds to a unique path in the recursion tree, so the time complexity is O(2ⁿ).
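
The 2ⁿ count can also be seen directly with an iterative bitmask formulation, sketched here in Python for illustration: each of the 2ⁿ possible n-bit numbers selects exactly one subset.

def power_set_bitmask(nums):
    n = len(nums)
    # Each mask from 0 to 2^n - 1 encodes one subset: bit i set means nums[i] is included
    return [[nums[i] for i in range(n) if mask & (1 << i)] for mask in range(2 ** n)]

print(power_set_bitmask([1, 2, 3]))  # 8 subsets for n = 3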

Factorial Time

These algorithms typically involve problems where all possible permutations, combinations, or arrangements of a set are considered.

Example: JavaScript (Generating All Permutations)

function generatePermutations(arr) {
    const result = [];

    function permute(current, remaining) {
        if (remaining.length === 0) {
            result.push([...current]); // Store the complete permutation
            return;
        }

        for (let i = 0; i < remaining.length; i++) {
            const next = [...current, remaining[i]]; // Add the current element
            const rest = remaining.slice(0, i).concat(remaining.slice(i + 1)); // Remove the used element
            permute(next, rest); // Recurse
        }
    }

    permute([], arr);
    return result;
}

// Example usage
const input = [1, 2, 3];
const permutations = generatePermutations(input);

console.log("Permutations:");
permutations.forEach((p) => console.log(p));

For n elements, the algorithm explores all possible arrangements, leading to n! recursive paths; even a modest n = 10 means 3,628,800 permutations.

Analysing Space Complexity

Space complexity evaluates how much additional memory an algorithm requires as the input grows.

Constant Space

An algorithm that uses the same amount of memory, regardless of the input size.

Example: Python (Finding the Maximum in an Array)

def find_max(arr):
    max_val = arr[0]
    for num in arr:
        if num > max_val:
            max_val = num
    return max_val

Only a fixed amount of memory is needed (the single variable max_val), regardless of the size of the input, n.

Logarithmic Space

Typically found in recursive algorithms that reduce the input size by a factor (e.g., dividing by 2) at each step. Memory usage grows slowly as the input size increases.

Example: C# (Recursive Binary Search)

static void SearchIn(int target)
{
    int[] array = { 1, 3, 5, 7, 9, 11 };
    int result = BinarySearch(array, target, 0, array.Length - 1);
    if (result != -1)
    {
        Console.WriteLine($"Target {target} found at index {result}.");
    }
    else
    {
        Console.WriteLine($"Target {target} not found.");
    }
}

static int BinarySearch(int[] arr, int target, int low, int high)
{
    // Base case: target not found
    if (low > high)
    {
        return -1;
    }
    // Find the middle index
    int mid = (low + high) / 2;
    // Check if the target is at the midpoint
    if (arr[mid] == target)
    {
        return mid;
    }
    // If the target is smaller, search in the left half
    else if (arr[mid] > target)
    {
        return BinarySearch(arr, target, low, mid - 1);
    }
    // If the target is larger, search in the right half
    else
    {
        return BinarySearch(arr, target, mid + 1, high);
    }
}

Binary search halves the search space at each step, so the recursion depth is log n. Each recursive call adds one stack frame of constant size, and the total space usage therefore grows logarithmically, resulting in O(log n).
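
The logarithmic cost here comes entirely from the recursion stack. For comparison, an iterative version (sketched in Python) keeps only three index variables, so its space complexity drops to O(1) while the time complexity remains O(log n):

def binary_search_iterative(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # only low, high, and mid are stored: O(1) space
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            high = mid - 1  # search the left half
        else:
            low = mid + 1  # search the right half
    return -1

print(binary_search_iterative([1, 3, 5, 7, 9, 11], 7))  # Output: 3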

Linear Space

Memory usage grows proportionally with the input size.

Example: JavaScript (Reversing an Array)

function reverseArray(arr) {
    let reversed = [];
    for (let i = arr.length - 1; i >= 0; i--) {
        reversed.push(arr[i]);
    }
    return reversed;
}

console.log(reverseArray([1, 2, 3])); // Output: [3, 2, 1]

The new reversed array requires space proportional to the input size.
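
If mutating the input is acceptable, the copy can be avoided altogether. A brief Python sketch, assuming the caller doesn't need the original order preserved, swaps elements in place using O(1) extra space:

def reverse_in_place(arr):
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]  # swap the ends, move inward
        left += 1
        right -= 1
    return arr

print(reverse_in_place([1, 2, 3]))  # Output: [3, 2, 1]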

Log-Linear Space

This type of algorithm requires memory proportional to n log n, often due to operations that recursively split the input into smaller parts while using additional memory to store intermediate results.

Example: Python (Merge Sort)

def merge_sort(arr):
    if len(arr) > 1:
        # Find the middle point
        mid = len(arr) // 2

        # Split the array into two halves
        left_half = arr[:mid]
        right_half = arr[mid:]

        # Recursively sort both halves
        merge_sort(left_half)
        merge_sort(right_half)

        # Merge the sorted halves
        merge(arr, left_half, right_half)

def merge(arr, left_half, right_half):
    i = j = k = 0

    # Merge elements from left_half and right_half into arr
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:
            arr[k] = left_half[i]
            i += 1
        else:
            arr[k] = right_half[j]
            j += 1
        k += 1

    # Copy any remaining elements from left_half
    while i < len(left_half):
        arr[k] = left_half[i]
        i += 1
        k += 1

    # Copy any remaining elements from right_half
    while j < len(right_half):
        arr[k] = right_half[j]
        j += 1
        k += 1

# Example usage
if __name__ == "__main__":
    array = [38, 27, 43, 3, 9, 82, 10]
    print("Unsorted Array:", array)
    merge_sort(array)
    print("Sorted Array:", array)

The depth of recursion corresponds to the number of times the array is halved: for an array of size n, the recursion depth is log n. At each level, temporary arrays (left_half and right_half) are created for merging, and the slices at a single level add up to O(n) space. Counting the temporary arrays created across all levels gives O(n) per level × O(log n) levels = O(n log n) total allocations, which dominates the O(log n) recursion stack. (Strictly, if each level's slices are garbage-collected before the next level's are built, the peak memory in use at any instant is O(n); the O(n log n) figure counts everything allocated over the whole run.)

Quadratic or Polynomial Space

This case encompasses algorithms that require memory proportional to the square or another polynomial function of the input size.

Example: Python (Longest Common Subsequence)

def longest_common_subsequence(s1, s2):
n, m = len(s1), len(s2)
dp = [[0] * (m + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
for j in range(1, m + 1):
if s1[i - 1] == s2[j - 1]:
dp[i][j] = dp[i - 1][j - 1] + 1
else:
dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
return dp[n][m]

This algorithm builds a two-dimensional table that stores solutions for all pairs of prefixes, so the space complexity is O(n × m), which becomes quadratic, O(n²), when the two strings are of similar length.
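
Because each row of the table depends only on the row above it, the full table isn't needed when only the length is required. A sketch of that standard optimisation in Python keeps just two rows, reducing space to O(m):

def lcs_length_two_rows(s1, s2):
    n, m = len(s1), len(s2)
    prev = [0] * (m + 1)  # row i - 1 of the DP table
    curr = [0] * (m + 1)  # row i, overwritten column by column
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev, curr = curr, prev  # the finished row becomes "previous"
    return prev[m]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # Output: 4

The trade-off: with only two rows, the subsequence itself can no longer be reconstructed from the table.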

Exponential Space

An algorithm with exponential space complexity typically consumes memory that grows exponentially with the input size. 

Example: JavaScript (Generating the Power Set)

function generatePowerSet(inputSet) {
    const powerSet = [];

    function helper(index, currentSubset) {
        if (index === inputSet.length) {
            // Store a copy of the current subset
            powerSet.push([...currentSubset]);
            return;
        }
        // Exclude the current element
        helper(index + 1, currentSubset);

        // Include the current element
        currentSubset.push(inputSet[index]);
        helper(index + 1, currentSubset);
        currentSubset.pop(); // Backtrack
    }

    helper(0, []);
    return powerSet;
}

// Example usage
const inputSet = [1, 2, 3];
const result = generatePowerSet(inputSet);

console.log("Power Set:");
console.log(result);

The recursion stack consumes O(n) space (the depth of the recursion). The memory for storing the power set grows as O(2ⁿ) (the number of subsets), which dominates the overall space complexity.

Factorial Space

These are algorithms found in problems that involve generating all permutations of a set.

Example: C# (Generating All Permutations)

static void Main(string[] args)
{
    var input = new List<int> { 1, 2, 3 };
    var permutations = GeneratePermutations(input);
    Console.WriteLine("Permutations:");
    foreach (var permutation in permutations)
    {
        Console.WriteLine(string.Join(", ", permutation));
    }
}

static List<List<int>> GeneratePermutations(List<int> nums)
{
    var result = new List<List<int>>();
    Permute(nums, 0, result);
    return result;
}

static void Permute(List<int> nums, int start, List<List<int>> result)
{
    if (start == nums.Count)
    {
        // Add a copy of the current permutation to the result
        result.Add(new List<int>(nums));
        return;
    }
    for (int i = start; i < nums.Count; i++)
    {
        Swap(nums, start, i);
        Permute(nums, start + 1, result);
        Swap(nums, start, i); // Backtrack
    }
}

static void Swap(List<int> nums, int i, int j)
{
    int temp = nums[i];
    nums[i] = nums[j];
    nums[j] = temp;
}

The algorithm generates n! permutations, each of length n, and stores them all in the result list. For n elements this requires O(n × n!) memory, i.e., factorial growth.

Wrapping It Up

Big O notation is a cornerstone of writing efficient, scalable algorithms. By analysing time and space complexity, you gain insights into how your code behaves and identify opportunities for optimisation. Whether you’re a seasoned developer or just starting, mastering Big O equips you to write smarter, faster, and leaner code.

With this knowledge in your arsenal, you’re ready to tackle algorithm design and optimisation challenges with confidence.