Every step shows why the algorithm makes each choice — not just what it does. Built for true beginners.
01 Sliding Window · 02 Two Pointers · 03 Binary Search · 04 Fast & Slow · 05 Merge Intervals · 06 BFS · 07 Tree DFS · 08 Cyclic Sort · 09 List Reversal · 10 Top K Elements · 11 Subsets · 12 K-way Merge · 13 Topological Sort · 14 Dynamic Programming
Pattern 01
Sliding Window
Array / String · Easy → Medium · O(n) time · O(1) space
The problem: Find the smallest subarray whose sum ≥ target.
Brute force tries every pair of start/end indices — O(n²). The sliding window gets it to O(n) by maintaining a running sum.
Instead of recomputing from scratch, just add the new right element and subtract the leaving left element.
The aha moment
Watch the "operations" counter below. Brute force re-sums the same elements over and over.
Sliding window: every element enters the window once and leaves once.
That's why it's O(n) — not O(n²).
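The add-on-the-right, subtract-on-the-left logic can be sketched in Python (an illustrative sketch — the function name and inputs are assumptions, not the demo's actual code):

```python
def smallest_subarray_sum(target, nums):
    # Smallest window whose sum >= target; 0 if no such window exists.
    window_sum = 0
    left = 0
    best = float("inf")
    for right, value in enumerate(nums):
        window_sum += value           # element enters the window once
        while window_sum >= target:   # shrink while the window is still valid
            best = min(best, right - left + 1)
            window_sum -= nums[left]  # element leaves the window once
            left += 1
    return 0 if best == float("inf") else best
```

For `target=7` and `[2, 1, 5, 2, 3, 2]`, the answer is 2 (the window `[5, 2]`). Each element is added once and subtracted at most once — two operations per element, hence O(n).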
Try it yourself
Pattern 04
Fast & Slow Pointers
Linked List · Medium · O(n) time · O(1) space
The problem: Detect a cycle in a linked list using zero extra space.
A HashSet solution uses O(n) space. Floyd's algorithm uses two pointers at different speeds. The analogy: Two runners on a looped track. The faster runner always laps the slower one.
If there's no loop, fast simply reaches the end.
The aha moment
Inside a cycle of length L, fast gains exactly 1 step per iteration.
So they will always meet within L iterations of both entering the cycle.
Watch the "gap" counter close to zero — it's inevitable.
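Floyd's two-speed chase fits in a few lines of Python (a minimal sketch; the `Node` class is an assumed stand-in for whatever list representation you use):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next        # slow moves 1 step
        fast = fast.next.next   # fast moves 2 steps: gains 1 on slow per iteration
        if slow is fast:        # the gap closed to zero inside the cycle
            return True
    return False                # fast reached the end: no cycle
```

Only two pointers are stored, so the space cost is O(1) regardless of list length.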
Choose a scenario
Pattern 06
BFS — Level Order
Tree / Graph · Medium · O(V+E) time · O(V) space
The problem: Visit all nodes level by level, or find the shortest path in an unweighted graph.
DFS would go deep first and might find a long path before the short one.
BFS explores all nodes at distance 1 before any at distance 2 — the queue enforces this.
The aha moment
Watch the live queue in the demo. Notice that all nodes at level N are in the queue
before any node at level N+1. FIFO ordering is the only reason BFS guarantees shortest paths.
Swap the queue for a stack and you get DFS — which makes no such guarantee.
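A level-by-level BFS can be sketched in Python (an illustrative sketch — the adjacency-dict format and function name are assumptions):

```python
from collections import deque

def bfs_levels(graph, start):
    # graph: dict mapping each node to its neighbours.
    # Returns nodes grouped by distance from start.
    levels = []
    visited = {start}
    queue = deque([start])
    while queue:
        level = []
        for _ in range(len(queue)):   # drain exactly one level
            node = queue.popleft()
            level.append(node)
            for nb in graph[node]:
                if nb not in visited:
                    visited.add(nb)
                    queue.append(nb)  # FIFO: all of level N before any of level N+1
        levels.append(level)
    return levels
```

Swapping `popleft()` for `pop()` turns the queue into a stack — and the traversal into DFS, losing the shortest-path guarantee.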
Live queue visualization
Pattern 14
Dynamic Programming
Multiple types · Hard · O(n) vs O(2ⁿ)
DP solves problems by breaking them into overlapping subproblems and caching results — never recomputing the same thing twice.
Three examples below, each harder than the last. Switch between them to see that the same principle applies across very different problems.
The pattern to recognise
Can this problem be described as: "the answer to size n depends on answers to smaller sizes"?
If yes — DP applies. Watch each example: naive recursion recomputes subproblems exponentially. Bottom-up solves each subproblem exactly once.
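Fibonacci is the classic illustration of this contrast (a sketch of the general idea, not the demo's specific problems):

```python
def fib_naive(n):
    # Exponential: recomputes the same subproblems over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # Bottom-up: each subproblem solved exactly once — O(n) time, O(1) space.
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

Both return the same answers, but `fib_naive(40)` makes hundreds of millions of calls while `fib_dp(40)` makes 39 loop iterations.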
Pick a problem
Pattern 02
Two Pointers
Sorted Array · Easy → Medium · O(n) time · O(1) space
The problem: Find a pair in a sorted array that sums to a target.
Brute force checks every pair — O(n²). Two pointers exploits the sorted order: one pointer starts at the smallest value, one at the largest. If the sum is too big, shrink it by moving the right pointer left. If too small, grow it by moving the left pointer right.
When to use Two Pointers
The array must be sorted. The key insight: when the sum is too large, the only way to reduce it is to move the right pointer left (a smaller right value). When it is too small, move the left pointer right. Each move rules out a whole row of candidate pairs at once.
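The converging-pointer logic can be sketched in Python (a minimal sketch; the function name and return convention are assumptions):

```python
def pair_with_sum(nums, target):
    # nums must be sorted ascending; returns indices of one matching pair, or None.
    left, right = 0, len(nums) - 1
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return left, right
        if s > target:
            right -= 1   # sum too big: only a smaller right value can help
        else:
            left += 1    # sum too small: only a bigger left value can help
    return None
```

For `[1, 2, 3, 4, 6]` and target 6, the pointers converge on indices `(1, 3)` — the pair 2 + 4.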
Try it yourself
Pattern 03
Binary Search
Sorted Array · Easy → Hard · O(log n) time · O(1) space
The problem: Find an element in a sorted array. Linear scan is O(n). Binary search is O(log n) by eliminating half the remaining candidates every single step.
With 1,000,000 elements: linear scan = up to 1,000,000 checks. Binary search = at most 20 checks.
The key rule — never think about the whole array
Only think about your current search space [lo..hi]. Check the middle. If target is bigger, the left half is useless — throw it away (lo = mid+1). If target is smaller, throw away the right half (hi = mid-1). Repeat until found or space is empty.
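The halve-and-discard rule translates directly into Python (an illustrative sketch of the classic form):

```python
def binary_search(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:              # search space [lo..hi] is non-empty
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1         # left half (including mid) is useless
        else:
            hi = mid - 1         # right half (including mid) is useless
    return -1                    # search space emptied: not present
```

Each iteration halves `hi - lo`, which is why a million elements need at most ~20 iterations.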
Try it yourself
Pattern 05
Merge Intervals
Array / Sorting · Medium · O(n log n) time · O(n) space
The problem: Given a list of intervals (like meeting times), merge all overlapping ones.
The trick: sort by start time first. Then a single left-to-right pass can merge everything — you only ever need to compare the current interval against the last merged one.
The overlap test — one simple check
After sorting, two intervals [a,b] and [c,d] overlap if and only if c ≤ b (the next start ≤ current end). If they overlap, merge to [a, max(b,d)]. If not, the current interval is done — push it to results and start fresh with [c,d].
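Sort, then a single pass with the `c ≤ b` test — a minimal Python sketch (function name assumed):

```python
def merge_intervals(intervals):
    # Sort by start time, then compare each interval against the last merged one.
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:          # overlap: next start <= current end
            merged[-1][1] = max(merged[-1][1], end)    # extend to [a, max(b, d)]
        else:
            merged.append([start, end])                # no overlap: start fresh
    return merged
```

The sort dominates at O(n log n); the merging pass itself is O(n).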
Watch intervals merge step by step
Pattern 07
Tree DFS — Depth First Search
Tree / Graph · Medium · O(n) time · O(h) space
The problem: Visit every node in a tree, exploring each branch completely before moving to the next.
DFS dives deep — all the way to a leaf — then backtracks to try the next branch. The call stack IS the algorithm. When you return from a recursive call, you've backtracked automatically.
DFS vs BFS — the one-word difference
BFS uses a Queue (FIFO) → explores level by level. DFS uses a Stack (LIFO, or the call stack) → explores depth first. Swap Queue for Stack in BFS code and you get DFS. Watch the call stack panel on the right.
Pre-order DFS (Root → Left → Right)
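Pre-order DFS, with the call stack doing the backtracking, can be sketched in Python (the `TreeNode` class is an assumed stand-in for your tree representation):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def preorder(root, out=None):
    # Root -> Left -> Right. Returning from a call IS the backtrack.
    if out is None:
        out = []
    if root:
        out.append(root.val)         # visit root first
        preorder(root.left, out)     # then the whole left branch
        preorder(root.right, out)    # then the whole right branch
    return out
```

Space is O(h) because the deepest recursion equals the tree's height.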
Pattern 08
Cyclic Sort
Array · Easy · O(n) time · O(1) space
The problem: Sort an array where numbers are in range [1..n]. Each number has an exact home: value v belongs at index v−1.
Instead of a comparison sort, just place each element directly at its correct index. If arr[i] is already home, move on. If not, swap it with whoever is currently living at its home.
Why O(n) and not O(n²)
Each swap puts at least one element into its correct position permanently. So you can do at most n swaps total across the whole algorithm — even though the outer loop looks like it could be quadratic.
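The swap-to-home loop is short in Python (a minimal sketch; it sorts in place and assumes values 1..n with no duplicates):

```python
def cyclic_sort(nums):
    # Values are 1..n, so value v belongs at index v - 1.
    i = 0
    while i < len(nums):
        home = nums[i] - 1
        if nums[i] != nums[home]:
            nums[i], nums[home] = nums[home], nums[i]  # send it home
        else:
            i += 1                                     # already home: move on
    return nums
```

Note that `i` only advances when `nums[i]` is home — but since every swap homes at least one element permanently, the total work is still O(n).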
Watch each element find its home
Pattern 09
In-place List Reversal
Linked List · Medium · O(n) time · O(1) space
The problem: Reverse a linked list without using extra space.
The trick is three pointers: prev (the already-reversed portion), curr (current node being processed), and next (temporary save before we break the link). At each step: save next, flip the arrow, advance both pointers.
The critical step — save next before flipping
When we do curr.next = prev, we sever curr's link to the rest of the list. If we hadn't saved next = curr.next beforehand, we'd lose the rest of the list forever. That temporary save is the entire secret.
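The three-pointer dance in Python (a minimal sketch; the `Node` class is an assumed stand-in):

```python
class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    prev = None
    curr = head
    while curr:
        nxt = curr.next      # save next BEFORE breaking the link
        curr.next = prev     # flip the arrow
        prev = curr          # advance both pointers
        curr = nxt
    return prev              # prev ends up at the new head
```

Only three pointers are ever stored, so the reversal is O(1) space.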
Watch pointers flip one by one
Pattern 10
Top K Elements
Heap · Medium · O(n log k) time · O(k) space
The problem: Find the K largest elements from a stream or array without sorting everything.
Sorting costs O(n log n). A smarter approach: maintain a min-heap of size K. The heap's minimum is always at the top. When a new element beats the minimum, swap it in. Result: K largest elements in O(n log k).
Why a min-heap for the K largest?
Counter-intuitive but powerful: keeping a min-heap of size K means the top is always the smallest of the K largest. When a new element arrives, ask: "Is it bigger than the smallest in my top-K?" If yes, evict the smallest and add the new one. The heap does this comparison in O(log k).
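With Python's `heapq` (a min-heap), the whole pattern is a few lines (an illustrative sketch; the returned heap is the top K in no particular order):

```python
import heapq

def top_k_largest(nums, k):
    # Min-heap of size k: the root is the smallest of the current top k.
    heap = nums[:k]
    heapq.heapify(heap)
    for value in nums[k:]:
        if value > heap[0]:                 # beats the smallest of the top k?
            heapq.heapreplace(heap, value)  # evict smallest + insert, O(log k)
    return heap
```

Each of the n elements costs at most one O(log k) heap operation, giving O(n log k) — better than sorting when k is much smaller than n.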
Try it
Pattern 11
Subsets (Power Set)
BFS / Backtracking · Medium · O(2ⁿ) time · O(2ⁿ) space
The problem: Generate all possible subsets of a set. For [1,2,3] the answer is 8 subsets: [], [1], [2], [3], [1,2], [1,3], [2,3], [1,2,3].
The iterative approach: start with just the empty set. For each new number, duplicate every existing subset and append the new number to each duplicate. Every element exactly doubles the total count.
Why 2ⁿ subsets?
Each element has exactly two choices: in the subset or out. With 3 elements: 2 × 2 × 2 = 8. With n elements: 2ⁿ. Watch each new element literally double the count of subsets as we add it.
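The duplicate-and-append step is almost a one-liner in Python (a minimal sketch of the iterative approach described above):

```python
def subsets(nums):
    result = [[]]   # start with just the empty set
    for num in nums:
        # duplicate every existing subset and append num to each duplicate
        result += [subset + [num] for subset in result]
    return result
```

Each element doubles `result`, so `subsets([1, 2, 3])` ends with 2³ = 8 subsets.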
Watch subsets double with each new element
Pattern 12
K-way Merge
Heap / Linked List · Hard · O(n log k) time · O(k) space
The problem: Merge K sorted arrays into one sorted array.
Naive approach: merge one by one — O(nk). Smart approach: use a min-heap holding exactly K elements — one from each array. Always pop the global minimum, then push the next element from that same array. The heap handles all comparisons.
The heap as a priority router
Think of K sorted lists as K queues. The heap is a router that always directs you to whichever queue has the current global minimum. You never compare all K front elements manually — the heap does it in O(log k).
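The router idea with Python's `heapq` (an illustrative sketch for lists of numbers; heap entries carry indices so ties break deterministically):

```python
import heapq

def merge_k_sorted(arrays):
    # Heap holds (value, array index, element index) — one entry per array.
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)
    merged = []
    while heap:
        value, i, j = heapq.heappop(heap)   # current global minimum
        merged.append(value)
        if j + 1 < len(arrays[i]):          # push the next element from that same array
            heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
    return merged
```

The heap never holds more than K entries, so each of the n total elements costs O(log k) — O(n log k) overall versus O(nk) for merging one array at a time.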
3 sorted arrays merged via min-heap
Pattern 13
Topological Sort
Graph / DAG · Medium · O(V+E) time · O(V) space
The problem: Given tasks with dependencies (A must finish before B), find a valid order to complete all tasks.
Only works on a DAG (Directed Acyclic Graph — no circular dependencies). Kahn's algorithm: repeatedly pick any task with zero remaining prerequisites, complete it, then remove it as a prerequisite from its dependents.
In-degree is the key concept
In-degree = number of prerequisites a task still has. A task is ready when its in-degree hits zero. As we complete tasks, we decrement the in-degree of their dependents — possibly unlocking new tasks. If we finish with tasks left but none have zero in-degree, there's a cycle (impossible schedule).
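Kahn's algorithm with explicit in-degrees, in Python (a minimal sketch; tasks are numbered `0..num_tasks-1` and the edge format is an assumption):

```python
from collections import deque

def topological_order(num_tasks, edges):
    # edges: (a, b) means a must finish before b. Returns [] on a cycle.
    graph = {t: [] for t in range(num_tasks)}
    in_degree = {t: 0 for t in range(num_tasks)}
    for a, b in edges:
        graph[a].append(b)
        in_degree[b] += 1                    # b has one more prerequisite
    ready = deque(t for t in range(num_tasks) if in_degree[t] == 0)
    order = []
    while ready:
        task = ready.popleft()               # a task with zero prerequisites left
        order.append(task)
        for dep in graph[task]:
            in_degree[dep] -= 1              # one prerequisite done
            if in_degree[dep] == 0:
                ready.append(dep)            # dependent just got unlocked
    # Tasks left over with nonzero in-degree means a cycle: no valid schedule.
    return order if len(order) == num_tasks else []
```

For 0→1, 0→2, 1→3, 2→3 this yields `[0, 1, 2, 3]`; a two-task cycle yields `[]`.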