C# Arrays Explained: Types, Features, and Operations


Arrays are the workhorses of programming and, in C#, they play a pivotal role in managing collections of data efficiently. Understanding how to declare, manipulate, and apply them is key to unlocking the full potential of the language.


What is an Array in C#?

In C#, an array is a collection of elements of the same type, stored in contiguous memory locations. Arrays allow you to store and manage multiple values under a single variable name, making data handling more efficient.

Arrays have the following key characteristics:

  • Fixed Size: Once an array is created, its size cannot be changed.
  • Zero-Based Indexing: Elements are accessed using zero-based indices.
  • Type Safety: All elements must be of the same type.

Declaring Arrays

Single-Dimensional Arrays

The simplest form of an array can be declared and initialised as follows:

// Declaration
int[] numbers;

// Initialisation
numbers = new int[5]; // Array of size 5

// Declaration and Initialisation
int[] primes = new int[] { 2, 3, 5, 7, 11 };

// Implicit Typing
var fruits = new[] { "Apple", "Banana", "Cherry" }; 
// Type inferred as string[]

// shorthand
int[] evenNumbers = { 2, 4, 6, 8, 10 }; // no need for new int[]

Multi-Dimensional Arrays

A rectangular array is a multidimensional array in which every row has the same number of columns. The two-dimensional form is often referred to as a 2D array and can be declared as below:

int[,] matrix = new int[3, 3]; // 3x3 matrix

// Initialisation
int[,] grid = {
    { 1, 2, 3 },
    { 4, 5, 6 },
    { 7, 8, 9 }
};

We can also declare arrays with more than 2 dimensions:

int[,,] threeDArray =
{
    {
        { 1, 2, 3, 4 },
        { 5, 6, 7, 8 },
        { 9, 10, 11, 12 }
    },
    {
        { 13, 14, 15, 16 },
        { 17, 18, 19, 20 },
        { 21, 22, 23, 24 }
    }
};

Jagged Arrays are arrays of arrays where rows can have different sizes:

int[][] jaggedArray = new int[3][];
jaggedArray[0] = new int[] { 1, 2 };
jaggedArray[1] = new int[] { 3, 4, 5 };
jaggedArray[2] = new int[] { 6 };

A few key points:

| Feature | int[,] (Rectangular Array) | int[][] (Jagged Array) |
|---|---|---|
| Structure | Rectangular (fixed rows & columns) | Jagged (rows can have varying lengths) |
| Memory Allocation | Single contiguous block of memory | Separate memory for each row |
| Access Syntax | array[row, column] | array[row][column] |
| Initialisation | Fixed size for all dimensions | Flexible size for each row |
| Performance | Slightly faster due to contiguous memory | May have slight overhead due to multiple references |
| Flexibility | Less flexible; fixed dimensions | Highly flexible; varying row lengths |

Accessing and Iterating Over Arrays

We access the elements of an array using their index:

int[] arr = new int[] { 2, 3, 5, 7, 11 };
Console.WriteLine(arr[0]); // Output: 2
arr[2] = 13; // Update the value at index 2

int[,] multiArray = new int[2, 3]; // 2 rows, 3 columns

// Assign values
multiArray[0, 0] = 1;
multiArray[0, 1] = 2;
multiArray[1, 2] = 3;

// Access values
Console.WriteLine(multiArray[0, 1]); // Output: 2

int[][] jaggedArray = new int[2][]; // Array with 2 rows

// Initialize rows with different lengths
jaggedArray[0] = new int[3]; // First row has 3 elements
jaggedArray[1] = new int[2]; // Second row has 2 elements

// Assign values
jaggedArray[0][0] = 1;
jaggedArray[1][1] = 5;

// Access values
Console.WriteLine(jaggedArray[0][0]); // Output: 1
Console.WriteLine(jaggedArray[1][1]); // Output: 5

We can iterate over the elements of an array using a for loop:

for (int i = 0; i < arr.Length; i++) {
    Console.WriteLine(arr[i]);
}

We can also use a foreach loop:

foreach (var element in arr) {
    Console.WriteLine(element);
}

We can iterate over a multidimensional array with a few nested loops:

int[,,] threeDArray = new int[2, 2, 3]
{
    { { 1, 2, 3 }, { 4, 5, 6 } },
    { { 7, 8, 9 }, { 10, 11, 12 } }
};

for (int layer = 0; layer < threeDArray.GetLength(0); layer++)
{
    for (int row = 0; row < threeDArray.GetLength(1); row++)
    {
        for (int col = 0; col < threeDArray.GetLength(2); col++)
        {
            Console.WriteLine($"Element at [{layer}, {row}, {col}] = {threeDArray[layer, row, col]}");
        }
    }
}

Getting Information About an Array

In C#, arrays have several built-in properties and methods that allow you to query and interact with arrays effectively.

Length returns the total number of elements in the array, regardless of its dimensions.

int[] numbers = { 1, 2, 3, 4, 5 };
Console.WriteLine($"Length: {numbers.Length}"); // Output: 5

int[,] matrix = { { 1, 2, 3 }, { 4, 5, 6 } };
Console.WriteLine($"Total Elements: {matrix.Length}"); // Output: 6

Rank returns the number of dimensions (or ranks) in the array.

int[,] matrix = { { 1, 2 }, { 3, 4 } };

Console.WriteLine($"Rank: {matrix.Rank}"); // Output: 2

int[] singleDimensional = { 1, 2, 3 };

Console.WriteLine($"Rank: {singleDimensional.Rank}"); // Output: 1

GetLength returns the size of a specific dimension in the array.

int[,] matrix = { { 1, 2, 3 }, { 4, 5, 6 } };
Console.WriteLine($"Number of Rows: {matrix.GetLength(0)}"); // Output: 2
Console.WriteLine($"Number of Columns: {matrix.GetLength(1)}"); // Output: 3

GetLowerBound returns the starting index of a specific dimension. For standard arrays in C#, this is always 0. However, it has its uses in scenarios with custom collections or systems where non-zero-based arrays are present (e.g., interop with other languages).

int[,] matrix = { { 1, 2 }, { 3, 4 } };

Console.WriteLine($"Lower Bound of Dimension 0: {matrix.GetLowerBound(0)}"); // Output: 0

Console.WriteLine($"Lower Bound of Dimension 1: {matrix.GetLowerBound(1)}"); // Output: 0

GetUpperBound returns the highest index (upper bound) of a specific dimension, which is useful for iterating through specific dimensions of multidimensional arrays.

int[,] matrix = { { 1, 2 }, { 3, 4 } };

Console.WriteLine($"Upper Bound of Rows: {matrix.GetUpperBound(0)}"); // Output: 1

Console.WriteLine($"Upper Bound of Columns: {matrix.GetUpperBound(1)}"); // Output: 1

Common Array Operations

C# is a fully object-oriented language and, despite being fundamental data structures, arrays are reference types: every array derives from System.Array, which in turn inherits from System.Object. We’ll explore this in future posts. For now, let’s address some of the most common operations with arrays using the functionality provided by the System.Array class.

Array.Sort sorts the elements of an array in ascending order.

int[] numbers = { 5, 2, 8, 1, 3 };
Array.Sort(numbers);
Console.WriteLine("Sorted Array:");
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 5, 8

Array.Reverse reverses the order of elements in an array.

int[] numbers = { 1, 2, 3, 4, 5 };
Array.Reverse(numbers);
Console.WriteLine("Reversed Array:");
Console.WriteLine(string.Join(", ", numbers)); // Output: 5, 4, 3, 2, 1

Array.IndexOf finds the index of the first occurrence of a specific value in the array.

string[] fruits = { "apple", "banana", "cherry", "date" };
int index = Array.IndexOf(fruits, "cherry");
Console.WriteLine($"Index of 'cherry': {index}"); // Output: 2

Array.LastIndexOf finds the index of the last occurrence of a specific value in the array.

int[] numbers = { 1, 2, 3, 2, 5 };
int index = Array.LastIndexOf(numbers, 2);
Console.WriteLine($"Last Index of 2: {index}"); // Output: 3

Array.Exists determines if an array contains elements that match a condition.

int[] numbers = { 10, 20, 30, 40 };
bool exists = Array.Exists(numbers, x => x > 25);
Console.WriteLine($"Exists element > 25: {exists}"); // Output: True

Array.Copy copies elements from one array to another.

int[] source = { 1, 2, 3, 4, 5 };
int[] destination = new int[3];
Array.Copy(source, 1, destination, 0, 3);
Console.WriteLine("Copied Array:");
Console.WriteLine(string.Join(", ", destination)); // Output: 2, 3, 4

Array.Clear clears the elements of an array, setting them to their default value.

int[] numbers = { 1, 2, 3, 4, 5 };
Array.Clear(numbers, 1, 3);
Console.WriteLine("Cleared Array:");
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 0, 0, 0, 5

Array.Resize changes the size of an array.

int[] numbers = { 1, 2, 3 };
Array.Resize(ref numbers, 5);
numbers[3] = 4;
numbers[4] = 5;
Console.WriteLine("Resized Array:");
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Array.Find returns the first element that matches a condition.

int[] numbers = { 5, 15, 25, 35 };
int result = Array.Find(numbers, x => x > 20);
Console.WriteLine($"First element > 20: {result}"); // Output: 25

Array.FindAll returns all elements that match a condition.

int[] numbers = { 1, 2, 3, 4, 5, 6 };
int[] evens = Array.FindAll(numbers, x => x % 2 == 0);
Console.WriteLine("Even Numbers:");
Console.WriteLine(string.Join(", ", evens)); // Output: 2, 4, 6

Array.ForEach performs an action on each element of the array.

int[] numbers = { 1, 2, 3 };
Array.ForEach(numbers, x => Console.WriteLine(x * x)); // Output: 1, 4, 9

Best Practices for Working with Arrays in C#

A few good practices for working with arrays:

  • Use Arrays when the size of the collection is fixed or predictable and you need a fast, index-based structure with minimal memory overhead.
  • Use Descriptive Variable Names. Name arrays based on their purpose or contents for better code readability.
  • Initialise arrays properly to prevent runtime errors.
  • Always validate array indexes to prevent runtime errors.
  • Leverage the Array class methods to simplify common tasks like sorting and searching.
  • Arrays are fixed in size, so resizing involves creating a new array and copying elements, which is inefficient. Use Array.Resize only when absolutely necessary.
  • For efficiency, choose the type that fits your data structure and access patterns.
  • For simple iterations, prefer foreach as it’s cleaner and avoids index-related bugs.
  • Leverage array properties like Length, Rank, GetLowerBound, and GetUpperBound for safer and dynamic programming.
  • Keep array operations simple and avoid deeply nested loops when possible. If logic becomes too complex, encapsulate it in methods.
  • For large arrays, explicitly nullify references that are no longer needed to free memory earlier (helpful in long-running applications).

int[] largeArray = new int[1_000_000];
// Use the array
largeArray = null; // Make the array eligible for garbage collection

Wrapping It Up

Arrays are foundational to programming in C#, offering a simple yet powerful way to handle collections of data. From basic declarations to advanced manipulations, mastering these concepts will make you a more effective C# developer.

Whether you’re building applications that require performance optimization or just managing data efficiently, arrays are a tool you’ll return to time and again.

Decoding Big O: Analysing Time and Space Complexity with Examples in C#, JavaScript, and Python


Efficiency matters. Whether you’re optimising a search algorithm, crafting a game engine, or designing a web application, understanding Big O notation is the key to writing scalable, performant code. Big O analysis helps you quantify how your code behaves as the size of the input grows, both in terms of time and space (meaning memory usage).


Big O notation was introduced by German mathematician Paul Bachmann in the late 19th century and later popularised by Edmund Landau. Originally part of number theory, it was later adopted into computer science for algorithm analysis. The notation gets its name from the letter “O,” which stands for “Order” in mathematics: it describes the order of growth of a function as the input size grows larger, in terms of which factors dominate. The “Big” emphasises that we are describing an upper bound. In short, Big O notation describes the upper bound of an algorithm’s growth rate as a function of input size, telling you how runtime or memory usage scales in the worst case.
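For completeness, this informal description corresponds to the standard formal definition (a textbook statement, added here for reference):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N} : \forall n \ge n_0,\; 0 \le f(n) \le c \cdot g(n)
```

Informally: beyond some input size n₀, f never exceeds a constant multiple of g.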

Key Terminology:

  • Input Size (n): The size of the input data
  • Time Complexity: How the runtime grows with n
  • Space Complexity: How memory usage grows with n

Common Big O Classifications

These are common complexities, from most efficient to least efficient:

| Big O | Name | Description |
|---|---|---|
| O(1) | Constant Complexity | Performance doesn’t depend on input size. |
| O(log n) | Logarithmic Complexity | Divides the problem size with each step. |
| O(n) | Linear Complexity | Grows proportionally with the input size. |
| O(n log n) | Log-Linear Complexity | Growth proportional to n times the logarithm of n; often seen in divide-and-conquer algorithms that repeatedly divide a problem into smaller subproblems, solve them, and combine the solutions. |
| O(n²) or O(nᵏ) | Quadratic or Polynomial Complexity | Nested loops; performance scales with n². |
| O(2ⁿ) | Exponential Complexity | Grows exponentially; terrible for large inputs. |
| O(n!) | Factorial Complexity | Explores all arrangements or sequences. |
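To make these growth rates concrete, here is a quick sketch in Python that tabulates each function for a few input sizes (the `growth_table` helper is our own illustration, not part of the article's examples):

```python
import math

def growth_table(sizes=(1, 10, 20, 30)):
    """Evaluate common Big O growth functions at a few input sizes."""
    functions = [
        ("O(1)", lambda n: 1),
        ("O(log n)", lambda n: math.log2(n)),
        ("O(n)", lambda n: n),
        ("O(n log n)", lambda n: n * math.log2(n)),
        ("O(n^2)", lambda n: n ** 2),
        ("O(2^n)", lambda n: 2 ** n),
        ("O(n!)", lambda n: math.factorial(n)),
    ]
    return {name: [f(n) for n in sizes] for name, f in functions}

for name, values in growth_table().items():
    # .3g keeps the huge exponential/factorial values readable
    print(f"{name:>10}: " + "  ".join(f"{v:.3g}" for v in values))
```

Even at n = 30 the gulf is stark: the linear row reaches 30 while the factorial row is already around 2.65 × 10³².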

Analysing Time Complexity

Constant Time

With constant-time algorithms, the execution time remains the same regardless of input size.

Example: C# (Accessing an Array Element)

int[] numbers = { 10, 20, 30, 40 };
Console.WriteLine(numbers[2]); // Output: 30

Accessing an element by index is O(1), as it requires a single memory lookup.

Logarithmic Time

The next most efficient case happens when the runtime grows logarithmically, typically in divide-and-conquer algorithms.

Example: JavaScript (Binary Search)

function binarySearch(arr, target) {
    let left = 0, right = arr.length - 1;
    while (left <= right) {
        let mid = Math.floor((left + right) / 2);
        if (arr[mid] === target) return mid;
        else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}
console.log(binarySearch([1, 2, 3, 4, 5], 3)); // Output: 2

Each iteration halves the search space, making this O(log n).

Linear Time

Example: Python (Iterating Over a List)

numbers = [1, 2, 3, 4, 5]
for num in numbers:
    print(num)

The loop visits each element once, so the complexity is O(n).

Log-linear Time

This one calls for a more elaborate example. First, let’s break the term down:

  • n : This represents the linear work required to handle the elements in each step
  • log n : This comes from the recursive division of the problem into smaller subproblems. For example, dividing an array into halves repeatedly results in a logarithmic number of divisions.

Example: JavaScript (Sorting Arrays with Merge Sort)

function mergeSort(arr) {
    // Base case: an array with 1 or 0 elements is already sorted
    if (arr.length <= 1) {
        return arr;
    }

    // Divide the array into two halves
    const mid = Math.floor(arr.length / 2);
    const left = arr.slice(0, mid);
    const right = arr.slice(mid);

    // Recursively sort both halves and merge them
    return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
    let result = [];
    let leftIndex = 0;
    let rightIndex = 0;

    // Compare elements from left and right arrays, adding the smallest to the result
    while (leftIndex < left.length && rightIndex < right.length) {
        if (left[leftIndex] < right[rightIndex]) {
            result.push(left[leftIndex]);
            leftIndex++;
        } else {
            result.push(right[rightIndex]);
            rightIndex++;
        }
    }

    // Add any remaining elements from the left and right arrays
    return result.concat(left.slice(leftIndex)).concat(right.slice(rightIndex));
}

// Example usage:
const array = [38, 27, 43, 3, 9, 82, 10];
console.log("Unsorted Array:", array);
const sortedArray = mergeSort(array);
console.log("Sorted Array:", sortedArray);

The array is repeatedly divided into halves until the subarrays contain a single element (base case), with complexity O(log n). The merge function combines two sorted arrays into a single sorted array by comparing elements (O(n)). This process is repeated as the recursive calls return, merging larger and larger sorted subarrays until the entire array is sorted.

Quadratic or Polynomial Time

In the simplest and most obvious examples, nested loops lead to quadratic growth.

Example: C# (Finding Duplicate Pairs)

int[] numbers = { 1, 2, 3, 1 };
for (int i = 0; i < numbers.Length; i++) {
    for (int j = i + 1; j < numbers.Length; j++) {
        if (numbers[i] == numbers[j]) {
            Console.WriteLine($"Duplicate: {numbers[i]}");
        }
    }
}

The outer loop runs n times, and for each iteration, the inner loop runs n-i-1 times, for a total of n(n-1)/2 comparisons. This results in O(n²).
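We can confirm the comparison count empirically with a small Python sketch (the `count_pair_comparisons` helper is our own, mirroring the structure of the nested loops above):

```python
def count_pair_comparisons(n):
    """Count the comparisons made by the nested duplicate-finding loops."""
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1  # one comparison per (i, j) pair
    return comparisons

# For n elements the loops perform exactly n * (n - 1) / 2 comparisons,
# which grows on the order of n^2.
for n in (4, 10, 100):
    assert count_pair_comparisons(n) == n * (n - 1) // 2

print(count_pair_comparisons(100))  # 4950
```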

Exponential Time

Generating all subsets (the power set) of a given set is a common example of exponential time complexity, as it involves exploring all combinations of a set’s elements.

Example: Python (Generating the Power Set)

def generate_subsets(nums):
    def helper(index, current):
        # Base case: if we've considered all elements
        if index == len(nums):
            result.append(current[:])  # Add a copy of the current subset
            return

        # Exclude the current element
        helper(index + 1, current)

        # Include the current element
        current.append(nums[index])
        helper(index + 1, current)
        current.pop()  # Backtrack to explore other combinations

    result = []
    helper(0, [])
    return result

# Example usage
input_set = [1, 2, 3]
subsets = generate_subsets(input_set)

print("Power Set:")
for subset in subsets:
    print(subset)

For a set of n elements, there are 2ⁿ subsets. Each subset corresponds to a unique path in the recursion tree. Therefore, the time complexity is O(2ⁿ).
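As a quick cross-check using only Python's standard library (independent of the recursive function above), the number of subsets of an n-element set is indeed 2ⁿ:

```python
from itertools import combinations

def power_set_size(n):
    """Count all subsets of an n-element set by enumerating every subset size."""
    elements = range(n)
    return sum(1 for size in range(n + 1)
               for _ in combinations(elements, size))

# Subset count doubles with every extra element: 2^n growth.
for n in range(6):
    assert power_set_size(n) == 2 ** n

print(power_set_size(5))  # 32
```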

Factorial Time

These algorithms typically involve problems where all possible permutations, combinations, or arrangements of a set are considered.

Example: JavaScript (Generating All Permutations)

function generatePermutations(arr) {
    const result = [];

    function permute(current, remaining) {
        if (remaining.length === 0) {
            result.push([...current]); // Store the complete permutation
            return;
        }

        for (let i = 0; i < remaining.length; i++) {
            const next = [...current, remaining[i]]; // Add the current element
            const rest = remaining.slice(0, i).concat(remaining.slice(i + 1)); // Remove the used element
            permute(next, rest); // Recurse
        }
    }

    permute([], arr);
    return result;
}

// Example usage
const input = [1, 2, 3];
const permutations = generatePermutations(input);

console.log("Permutations:");
permutations.forEach((p) => console.log(p));

For n elements, the algorithm explores all possible arrangements, leading to n! recursive calls.
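The n! growth is easy to verify with Python's standard library (a cross-check we've added, separate from the JavaScript example above):

```python
import math
from itertools import permutations

# The number of arrangements of n distinct elements is n!
for n in range(1, 7):
    assert sum(1 for _ in permutations(range(n))) == math.factorial(n)

# Factorial growth outpaces even exponential growth:
# 10! is already 3,628,800, and 20! exceeds 2 * 10^18.
print(math.factorial(10))  # 3628800
```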

Analysing Space Complexity

Space complexity evaluates how much additional memory an algorithm requires as it grows.

Constant Space

An algorithm that uses the same amount of memory, regardless of the input size.

Example: Python (Finding the Maximum in an Array)

def find_max(arr):
    max_val = arr[0]
    for num in arr:
        if num > max_val:
            max_val = num
    return max_val

Only a fixed amount of memory is needed, regardless of the input size n.

Logarithmic Space

Typically found in recursive algorithms that reduce the input size by a factor (e.g., dividing by 2) at each step. Memory usage grows slowly as the input size increases.

Example: C# (Recursive Binary Search)

static void SearchIn(int target)
{
    int[] array = { 1, 3, 5, 7, 9, 11 };
    int result = BinarySearch(array, target, 0, array.Length - 1);
    if (result != -1)
    {
        Console.WriteLine($"Target {target} found at index {result}.");
    }
    else
    {
        Console.WriteLine($"Target {target} not found.");
    }
}

static int BinarySearch(int[] arr, int target, int low, int high)
{
    // Base case: target not found
    if (low > high)
    {
        return -1;
    }
    // Find the middle index
    int mid = (low + high) / 2;
    // Check if the target is at the midpoint
    if (arr[mid] == target)
    {
        return mid;
    }
    // If the target is smaller, search in the left half
    else if (arr[mid] > target)
    {
        return BinarySearch(arr, target, low, mid - 1);
    }
    // If the target is larger, search in the right half
    else
    {
        return BinarySearch(arr, target, mid + 1, high);
    }
}

Binary Search halves the search space at each step. Each recursive call adds a frame to the call stack, and the recursion depth is at most log₂ n, resulting in O(log n) space.
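For contrast, an iterative binary search keeps only a few index variables, bringing the auxiliary space down to O(1). Here is a sketch in Python (a variant we've added for comparison, not part of the original example):

```python
def binary_search_iterative(arr, target):
    """Binary search using a loop: O(log n) time, O(1) auxiliary space."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            high = mid - 1  # search the left half
        else:
            low = mid + 1   # search the right half
    return -1  # target not found

data = [1, 3, 5, 7, 9, 11]
print(binary_search_iterative(data, 7))  # 3
print(binary_search_iterative(data, 4))  # -1
```

The time complexity is unchanged; only the recursion stack disappears.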

Linear Space

Memory usage grows proportionally with the input size.

Example: JavaScript (Reversing an Array)

function reverseArray(arr) {
    let reversed = [];
    for (let i = arr.length - 1; i >= 0; i--) {
        reversed.push(arr[i]);
    }
    return reversed;
}

console.log(reverseArray([1, 2, 3])); // Output: [3, 2, 1]

The new reversed array requires space proportional to the input size.

Log-Linear Space

This type of algorithm requires memory proportional to n log n, often due to operations that recursively split the input into smaller parts while using additional memory to store intermediate results.

Example: Python (Merge Sort)

def merge_sort(arr):
    if len(arr) > 1:
        # Find the middle point
        mid = len(arr) // 2

        # Split the array into two halves
        left_half = arr[:mid]
        right_half = arr[mid:]

        # Recursively sort both halves
        merge_sort(left_half)
        merge_sort(right_half)

        # Merge the sorted halves
        merge(arr, left_half, right_half)

def merge(arr, left_half, right_half):
    i = j = k = 0

    # Merge elements from left_half and right_half into arr
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:
            arr[k] = left_half[i]
            i += 1
        else:
            arr[k] = right_half[j]
            j += 1
        k += 1

    # Copy any remaining elements from left_half
    while i < len(left_half):
        arr[k] = left_half[i]
        i += 1
        k += 1

    # Copy any remaining elements from right_half
    while j < len(right_half):
        arr[k] = right_half[j]
        j += 1
        k += 1

# Example usage
if __name__ == "__main__":
    array = [38, 27, 43, 3, 9, 82, 10]
    print("Unsorted Array:", array)
    merge_sort(array)
    print("Sorted Array:", array)

The depth of recursion corresponds to the number of times the array is halved; for an array of size n, the recursion depth is log n. At each level of recursion, temporary arrays (left_half and right_half) totalling O(n) elements are created for merging. Summed across all log n levels, this implementation allocates O(n log n) memory in total. (Peak memory at any one moment is lower, since temporaries are freed as the recursion unwinds, but the total allocation footprint is what makes this a log-linear space example.)

Quadratic or Polynomial Space

This case encompasses algorithms that require memory proportional to the square or another polynomial function of the input size.

Example: Python (Longest Common Subsequence)

def longest_common_subsequence(s1, s2):
    n, m = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

This algorithm requires a two-dimensional table storing solutions for all prefix pairs of the two strings, so the space complexity is O(n × m), which is quadratic when the strings are of similar length.

Exponential Space

An algorithm with exponential space complexity typically consumes memory that grows exponentially with the input size. 

Example: JavaScript (Generating the Power Set)

function generatePowerSet(inputSet) {
    const powerSet = [];

    function helper(index, currentSubset) {
        if (index === inputSet.length) {
            // Store a copy of the current subset
            powerSet.push([...currentSubset]);
            return;
        }
        // Exclude the current element
        helper(index + 1, currentSubset);

        // Include the current element
        currentSubset.push(inputSet[index]);
        helper(index + 1, currentSubset);
        currentSubset.pop(); // Backtrack
    }

    helper(0, []);
    return powerSet;
}

// Example usage
const inputSet = [1, 2, 3];
const result = generatePowerSet(inputSet);

console.log("Power Set:");
console.log(result);

The recursion stack consumes O(n) space (depth of recursion). The memory for storing the power set is O(2ⁿ), which dominates the overall space complexity.

Factorial Space

These are algorithms found in problems that involve generating all permutations of a set.

Example: C# (Generating All Permutations)

static void Main(string[] args)
{
    var input = new List<int> { 1, 2, 3 };
    var permutations = GeneratePermutations(input);
    Console.WriteLine("Permutations:");
    foreach (var permutation in permutations)
    {
        Console.WriteLine(string.Join(", ", permutation));
    }
}

static List<List<int>> GeneratePermutations(List<int> nums)
{
    var result = new List<List<int>>();
    Permute(nums, 0, result);
    return result;
}

static void Permute(List<int> nums, int start, List<List<int>> result)
{
    if (start == nums.Count)
    {
        // Add a copy of the current permutation to the result
        result.Add(new List<int>(nums));
        return;
    }
    for (int i = start; i < nums.Count; i++)
    {
        Swap(nums, start, i);
        Permute(nums, start + 1, result);
        Swap(nums, start, i); // Backtrack
    }
}

static void Swap(List<int> nums, int i, int j)
{
    int temp = nums[i];
    nums[i] = nums[j];
    nums[j] = temp;
}

The algorithm generates n! permutations, and each permutation is stored in the result list. For n elements, this requires O(n!) memory.

Wrapping It Up

Big O notation is a cornerstone of writing efficient, scalable algorithms. By analysing time and space complexity, you gain insights into how your code behaves and identify opportunities for optimisation. Whether you’re a seasoned developer or just starting, mastering Big O equips you to write smarter, faster, and leaner code.

With this knowledge in your arsenal, you’re ready to tackle algorithm design and optimisation challenges with confidence.