Master Algorithms: Improve Your Coding & Problem-Solving

May 27, 2025

Unlock algorithms! Learn core concepts like Big O notation, binary search, and how to write more efficient code. Perfect for beginner programmers.

Introduction

If you've got some coding basics down and are wondering about algorithms, you're in the right place. This post was inspired by the book Grokking Algorithms, which is a great read.

Perhaps you're trying to figure out a smart way to solve a coding challenge you're facing. Or maybe you just want a better idea of how algorithms can improve your code and solve common problems. Personally, I just really enjoy solving problems.

All coding examples will be in Python.

Enjoy!

Big O Notation

Big O notation describes how the running time of an algorithm grows as the input size grows. By convention, it describes the worst-case scenario.

Example notations are:

  • O(log n) - Logarithmic Time
  • O(n) - Linear Time
  • O(n log n) - Linearithmic Time
  • O(n²) - Quadratic Time
  • O(n!) - Factorial Time
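
To make the difference concrete, here's a tiny sketch comparing linear and quadratic work (the pair-counting loop is purely illustrative):

items = list(range(100))

# O(n): a single pass - the work grows linearly with the input
total = 0
for item in items:
    total += item

# O(n²): a nested pass - the work grows with the square of the input
pairs = 0
for a in items:
    for b in items:
        pairs += 1

print(pairs)  # 10000 iterations for just 100 items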

Binary Search

Analogy: You have a sorted phone book and you want to find a name. You flip to the middle, then discard half the book again and again until you find it. Super fast!

Binary search is an O(log n) algorithm because it cuts the problem in half with every step. So if you have 1,000 items, it only takes around 10 steps to find your answer.

def binary_search(arr, target):
    # Note: arr must be sorted for binary search to work
    start = 0
    end = len(arr) - 1

    while start <= end:
        mid = (start + end) // 2  # Middle of the current search range
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            start = mid + 1  # Target is in the upper half
        else:
            end = mid - 1  # Target is in the lower half

    return -1  # Target not found

my_numbers = [2, 5, 7, 8, 11, 12, 18, 25, 30, 33, 40, 45, 50]
answer_index = binary_search(my_numbers, 7) # Returns 2

Each time, binary search cuts the problem size in half. Let's compare O(n) with O(log n).

O(n) vs O(log n)

Input Size   O(n)        O(log n)
10           10          3
100          100         7
1,000        1,000       10
1,000,000    1,000,000   20

Even for a million items, binary search only needs about 20 steps. That's why O(log n) scales so much better than O(n) as n grows.
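
If you want to check these numbers yourself, they are just base-2 logarithms rounded to whole steps:

import math

for n in [10, 100, 1_000, 1_000_000]:
    print(n, round(math.log2(n)))

# 10 3
# 100 7
# 1000 10
# 1000000 20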

Selection Sort

Selection sort is a simple sorting algorithm, but it is not very efficient for large lists.

How it works:

  1. Start at the beginning of the list.
  2. Find the smallest element in the list.
  3. Swap it with the first element.
  4. Then look at the rest of the list (excluding the first), find the smallest element, and swap it with the second element.
  5. Repeat this until the whole list is sorted.

def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        # Find the smallest element in the unsorted part of the list
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it into place at position i
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

arr = [4, 2, 12, 5, 6, 21, 18, 7, 16, 25, 3]
arr_sorted = selection_sort(arr)
print(arr_sorted) # [2, 3, 4, 5, 6, 7, 12, 16, 18, 21, 25]

The time complexity of selection sort is O(n²): for each of the n elements, it scans the rest of the list to find the minimum, which adds up to about n(n-1)/2 comparisons.

Quicksort

Quicksort is a sorting algorithm that uses divide and conquer (D&C). It's recursive, and it works by:

  1. Picking a pivot element from the list
  2. Splitting the rest of the list into two parts: elements less than (or equal to) the pivot, and elements greater than it
  3. Recursively sorting those two parts
  4. Combining the results

def quicksort(arr):
    # Base case: lists with 0 or 1 elements are already sorted
    if len(arr) < 2:
        return arr
    pivot = arr[0]
    # Partition the rest of the list around the pivot
    less = [i for i in arr[1:] if i <= pivot]
    greater = [j for j in arr[1:] if j > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)


arr = [5, 2, 7, 1, 3, 8, 4, 5]
sorted_arr = quicksort(arr)
print(sorted_arr) # [1, 2, 3, 4, 5, 5, 7, 8]

The best case we can get is O(n log n). This happens when the pivot splits the list into two equal halves every time.

The worst case is O(n²). This happens when the pivot is always the worst possible choice: for example, sorting an already sorted list like [1, 2, 3, 4, 5] while always picking the first element as the pivot.

Even with random pivots the splits aren't perfectly even, but on average they're good enough, so you still get O(n log n) expected time. In practice, quicksort performs very well.

Quicksort Complexity

Case      Time Complexity
Best      O(n log n)
Average   O(n log n)
Worst     O(n²)

There are ways to make the worst case unlikely. For example, you can choose a random pivot, or use the median of the first, middle, and last items in the list as the pivot (the "median of three" rule).
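
As an illustration, here's one way to plug the median-of-three rule into the quicksort above (a sketch; median_of_three and quicksort_m3 are my own names). Partitioning into less/equal/greater guarantees the recursion shrinks even when the pivot value appears multiple times:

def median_of_three(arr):
    # Use the median of the first, middle, and last elements as the pivot
    first, middle, last = arr[0], arr[len(arr) // 2], arr[-1]
    return sorted([first, middle, last])[1]

def quicksort_m3(arr):
    if len(arr) < 2:
        return arr
    pivot = median_of_three(arr)
    less = [i for i in arr if i < pivot]
    equal = [i for i in arr if i == pivot]
    greater = [i for i in arr if i > pivot]
    return quicksort_m3(less) + equal + quicksort_m3(greater)

print(quicksort_m3([1, 2, 3, 4, 5]))  # Already sorted input is no longer the worst case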

Breadth-First Search

Breadth-First Search (BFS) is an algorithm for searching through a graph or tree. It explores the graph level by level, starting from a given node and visiting all its neighbors before moving on to their neighbors. It's used for finding the shortest path, crawling websites, networking, solving puzzles and more.

BFS uses:

  • A queue (FIFO - first in, first out) to keep track of what to visit next
  • A set of visited nodes to avoid duplicates

BFS Time Complexity: O(V + E), where V is the number of vertices (nodes) and E is the number of edges (connections between nodes).

from collections import deque

graph = {
    "Flo": ["Bob", "Claire"],
    "Bob": ["Dan", "Eva"],
    "Claire": ["Frank!"],
    "Dan": [],
    "Eva": [],
    "Frank!": []
}

def person_is_found(name):
    # In this toy example, the person we're looking for is marked with a '!'
    return name[-1] == '!'

def bfs():
    search_queue = deque()
    search_queue += graph['Flo']  # Start the queue with Flo's neighbors
    searched = set()  # Track who we've already checked, to avoid duplicates
    while search_queue:
        person = search_queue.popleft()  # FIFO: first in, first out
        if person not in searched:
            if person_is_found(person):
                print(person + " is found!")
                print(f'searched: {searched}')
                return True
            else:
                # Not the one: queue up this person's neighbors and mark them searched
                search_queue += graph[person]
                searched.add(person)
    print(searched)
    return False

print(bfs())
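
The example above only reports whether a matching person exists. Because BFS explores level by level, the first time it reaches a node is guaranteed to be along a path with the fewest edges. Here's a minimal sketch of a variant that also reconstructs that shortest path by remembering each node's parent (bfs_shortest_path is my own addition, reusing the graph and the deque import from above):

def bfs_shortest_path(graph, start, target):
    parents = {start: None}  # Doubles as the visited set
    search_queue = deque([start])

    while search_queue:
        node = search_queue.popleft()
        if node == target:
            # Walk back through the parents to rebuild the path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parents:
                parents[neighbor] = node
                search_queue.append(neighbor)
    return None  # No path exists

print(bfs_shortest_path(graph, "Flo", "Frank!"))  # ['Flo', 'Claire', 'Frank!']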

Depth-First Search

Depth-First Search (DFS) is also an algorithm for traversing a graph. Unlike Breadth-First Search, which explores all neighbors at the current depth before moving deeper, DFS goes as far as possible down a branch before backtracking.

graph = {
    "Flo": ["Bob", "Claire"],
    "Bob": ["Dan", "Eva"],
    "Claire": ["Frank"],
    "Dan": [],
    "Eva": [],
    "Frank": []
}

def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()  # Create the visited set on the first call

    if node not in visited:
        print(node)
        visited.add(node)
        # Go as deep as possible into each neighbor before backtracking
        for neighbor in graph[node]:
            dfs(graph, neighbor, visited)

dfs(graph, "Flo")

Dijkstra's Algorithm

Dijkstra's algorithm is a shortest-path algorithm: it finds the shortest path from a starting node to all other nodes in a weighted graph (with non-negative edge weights). I've added extra comments to almost every line because I found the concept hard to grasp at first; the comments walk through the first iteration of the while loop.

graph_data = {
    'A': [('B', 4), ('C', 2)],
    'B': [('A', 4), ('D', 5), ('E', 12)],
    'C': [('A', 2), ('D', 8), ('F', 10)],
    'D': [('B', 5), ('C', 8), ('E', 3), ('F', 6)],
    'E': [('B', 12), ('D', 3), ('F', 5)],
    'F': [('C', 10), ('D', 6), ('E', 5)]
}


def dijkstra(graph, start_node):
    # Create a dict and set all distances to infinity
    distances = { node: float('inf') for node in graph }

    # Set the start node distance to 0
    # distances['A'] = 0
    distances[start_node] = 0

    # Keep track of all visited Nodes
    visited = set()

    # As long as there are nodes to visit, loop through all nodes
    while len(visited) < len(graph):
        # Set new distance to infinity
        current_distance = float('inf')

        # Set current node as None
        current_node = None

        # Check if there's a node that is not visited yet
        for node in graph:
            # A = 0 which is less than 'infinity'
            if node not in visited and distances[node] < current_distance:
                # Set node (A) to current_node
                current_node = node
                # Set current distance to 0
                current_distance = distances[node]

        # There are no more nodes accessible, we break the loop
        if current_node is None:
            break

        # Loop through all neighbors of the current node
        # A has 2 neighbors -> B:4 and C:2
        for neighbor, weight in graph[current_node]:
            # current_distance = 0, weight of B = 4 ( 0 + 4 )
            new_distance = current_distance + weight
            # B: 4 < 'infinity' True -> C: 2 < 'infinity'? True
            # In our distances dict{} set B to 4
            if new_distance < distances[neighbor]:
                distances[neighbor] = new_distance
        
        # Add 'A' to visited nodes
        visited.add(current_node)
    
    return distances


shortest_path = dijkstra(graph_data, 'A')
print(shortest_path) 
# {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 12, 'F': 12}
# We can see the shortest path to each node in our graph
# The shortest path from A to F, for example, is 12

Here's another way to do it with Python's heapq module, which gives us a priority queue (min-heap). I've also added a dictionary to keep track of each node's parent, and a function that uses it to reconstruct the shortest path to a specific node.

import heapq

def dijkstra_heapq(graph, start_node):
    distances = { node: float('inf') for node in graph }
    distances[start_node] = 0
    previous_nodes = { node: None for node in graph }
    
    queue = [(0, start_node)]

    while queue:
        current_distance, current_node = heapq.heappop(queue)

        # Skip stale queue entries: a shorter path to this node was already found
        if current_distance > distances[current_node]:
            continue

        for neighbor, weight in graph[current_node]:
            new_distance = current_distance + weight
            if new_distance < distances[neighbor]:
                distances[neighbor] = new_distance
                previous_nodes[neighbor] = current_node
                heapq.heappush(queue, (new_distance, neighbor))

    return distances, previous_nodes


def create_path(parents, start_node, target_node):
    # Walk backwards from the target to the start using the parents dict
    path = []
    current_node = target_node

    while current_node is not None:
        path.append(current_node)
        if current_node == start_node:
            break
        current_node = parents.get(current_node)

    return path[::-1]  # Reverse so the path runs from start to target


shortest_path, parents = dijkstra_heapq(graph_data, 'A')
print(f'Shortest Paths: {shortest_path}')
print(parents)
print(create_path(parents, 'A', 'F'))
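# Shortest Paths: {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 12, 'F': 12}
# {'A': None, 'B': 'A', 'C': 'A', 'D': 'B', 'E': 'D', 'F': 'C'}
# ['A', 'C', 'F'] -> the shortest path from A to F runs A -> C -> F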

Greedy Algorithm

Greedy algorithms make decisions one step at a time, always taking the choice that looks best right now. They're simple, but they don't always guarantee the optimal solution. For some problems, finding the exact optimal answer means checking every possible subset of n items, and there are 2ⁿ of them, so the exact solution takes O(2ⁿ) time; a greedy approximation is often the practical alternative.

def greedy_coin_change(coins, amount):
    coins = sorted(coins, reverse=True) # coins = [25, 10, 5, 1]
    result = []

    total = 0

    for coin in coins:
        # Greedy choice: keep taking the largest coin that still fits
        while (total + coin) <= amount:
            total += coin
            result.append(coin)

    return result 

coins = [1, 5, 10, 25]
amount = 63
solution = greedy_coin_change(coins, amount)

print("Coins used:", solution)
print("Total coins:", len(solution))

# Coins used: [25, 25, 10, 1, 1, 1]
# Total coins: 6
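
Note that greedy coin change is only optimal for "nice" coin systems like the one above. With other denominations it can miss the best answer:

print(greedy_coin_change([1, 3, 4], 6))  # [4, 1, 1] -> 3 coins
# The optimal answer is [3, 3] -> only 2 coins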

Dynamic Programming

Dynamic programming is a technique for solving problems by breaking them down into smaller, simpler subproblems and reusing their results. This approach can turn problems that would take a very long time into problems that can be solved efficiently. I'll break it down with the Fibonacci sequence.

import time

# Naive recursive approach - O(2ⁿ) time complexity
# (each call spawns two more calls, so the work roughly doubles as n grows)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

# Tabulation (Bottom Up) approach - O(n) time complexity
# Much faster!
def dynamic_fib(n):
    if n < 2:
        return n

    x, y = 0, 1  # fib(0) and fib(1)
    for _ in range(2, n + 1):
        x, y = y, x + y  # Slide the pair one step up the sequence
    return y

# Measure the execution time of both functions
start = time.time()
print(fib(40))
end = time.time()

fib_time = end - start

start = time.time()
print(dynamic_fib(40))
end = time.time()

dynamic_fib_time = end - start

print(f'Time Recursive Fib: {fib_time:.2f}')
print(f'Time Dynamic Fib: {dynamic_fib_time:.2f}')

# 102334155
# 102334155
# Time Recursive Fib: 7.31
# Time Dynamic Fib: 0.00
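
Another common DP style is memoization (top down): keep the recursive structure, but cache each result so every subproblem is computed only once. A minimal sketch using Python's built-in functools.lru_cache:

from functools import lru_cache

# Memoization (Top Down) approach - also O(n) time complexity
@lru_cache(maxsize=None)
def memo_fib(n):
    if n < 2:
        return n
    return memo_fib(n - 1) + memo_fib(n - 2)

print(memo_fib(40))  # 102334155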

Fin.