Greedy Algorithms: When Being Selfish Gives the Optimal Answer
The Greedy Gamble
A greedy algorithm is simple to describe: at each step, make the choice that looks best right now, without reconsidering past decisions or looking ahead at future consequences. No recursion tree, no table of subproblems — just a sequence of locally optimal decisions.
The gamble is that these local decisions add up to a globally optimal solution. Sometimes they do. Sometimes they catastrophically do not.
Coin change with coins [1, 5, 6], amount = 10:

```
Greedy (always pick largest coin that fits):
  Pick 6 → remaining 4
  Pick 1 → remaining 3
  Pick 1 → remaining 2
  Pick 1 → remaining 1
  Pick 1 → remaining 0
  Total: 5 coins

Optimal:
  Pick 5 → remaining 5
  Pick 5 → remaining 0
  Total: 2 coins
```

Greedy fails here because a locally large choice (6) forces many small ones later. The optimal solution required a "sacrifice" at step one — something greedy never does.
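The failure above is easy to reproduce in code. Here is a minimal sketch (the function names `greedy_coin_change` and `dp_coin_change` are illustrative) contrasting largest-coin-first greedy with a bottom-up DP that finds the true minimum:

```python
def greedy_coin_change(coins, amount):
    """Always take as many of the largest coin as fits (not always optimal)."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else -1

def dp_coin_change(coins, amount):
    """Bottom-up DP: true minimum number of coins for each sub-amount."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1

print(greedy_coin_change([1, 5, 6], 10))  # 5  (6 + 1 + 1 + 1 + 1)
print(dp_coin_change([1, 5, 6], 10))      # 2  (5 + 5)
```

With canonical denominations like [1, 5, 10] the two agree; the pair [5, 6] is exactly what breaks the greedy.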
The lesson: greedy is not always wrong, but you cannot assume it is right. You need to prove it, or at least convince an interviewer with a clear argument. The rest of this post gives you the tools to do both.
When Does Greedy Work? The Exchange Argument
The standard technique for proving a greedy algorithm correct is the exchange argument. The structure is always the same:
1. Assume there exists an optimal solution that differs from the greedy solution.
2. Find the first position where they differ — the greedy made choice A, optimal made choice B.
3. Show that you can swap B for A in the optimal solution without making it worse.
4. Repeat until the optimal solution looks identical to the greedy solution.
5. Conclude: the greedy solution is at least as good as any optimal solution, so it is optimal.
Example — Activity Selection (Interval Scheduling Maximization):
Problem: Given intervals, select the maximum number of non-overlapping intervals.
Greedy choice: Always pick the interval that ends earliest.
Exchange argument: Suppose optimal solution S picks interval X before it picks the interval E with the earliest end time. Since E ends no later than X (E has the globally earliest end time), swapping X for E in S produces a solution that is no worse — it keeps the same number of intervals and the remaining choices are at least as available as before (E ends earlier, so it conflicts with fewer future intervals). Therefore, always picking the earliest-ending interval is safe.
This structure — "swapping in the greedy choice doesn't hurt" — is the core of every greedy correctness proof.
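The exchange argument can also be sanity-checked empirically. Below is a small sketch (helper names are illustrative) that compares the earliest-end greedy against an exponential brute force on a tiny instance:

```python
from itertools import combinations

def greedy_max_intervals(intervals):
    """Earliest-end-time greedy for interval scheduling maximization."""
    count, last_end = 0, float('-inf')
    for start, end in sorted(intervals, key=lambda x: x[1]):
        if start >= last_end:
            count += 1
            last_end = end
    return count

def brute_force_max_intervals(intervals):
    """Try every subset (exponential) -- only viable for tiny inputs."""
    best = 0
    for r in range(len(intervals) + 1):
        for sub in combinations(intervals, r):
            s = sorted(sub, key=lambda x: x[1])
            if all(s[i][1] <= s[i + 1][0] for i in range(len(s) - 1)):
                best = max(best, r)
    return best

intervals = [(1, 3), (2, 4), (3, 5), (0, 7), (5, 6), (5, 9)]
assert greedy_max_intervals(intervals) == brute_force_max_intervals(intervals) == 3
```

The brute force is only a confidence check for small inputs; the exchange argument is what lets you trust the greedy for all inputs.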
The 5 Greedy Patterns
Pattern 1: Interval Scheduling
Interval problems are the most common greedy context in interviews. Three variants appear repeatedly, each with a different greedy choice:
Maximize non-overlapping intervals → sort by END time, greedily pick:
```python
def erase_overlap_intervals(intervals):
    """Minimum intervals to remove so remaining are non-overlapping."""
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[1])  # sort by end time
    count = 0
    last_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start < last_end:
            # Overlap: remove current (keep the one ending earlier = last_end unchanged)
            count += 1
        else:
            last_end = end  # no overlap: extend our selection
    return count
```
```python
def interval_scheduling_max(intervals):
    """Maximum number of non-overlapping intervals."""
    intervals.sort(key=lambda x: x[1])
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:
            count += 1
            last_end = end
    return count
```

Minimum meeting rooms → sort by START time, use min-heap of end times:
```python
import heapq

def min_meeting_rooms(intervals):
    intervals.sort(key=lambda x: x[0])
    heap = []  # min-heap of end times (one entry per active room)
    for start, end in intervals:
        if heap and heap[0] <= start:
            heapq.heapreplace(heap, end)  # reuse earliest-ending room
        else:
            heapq.heappush(heap, end)  # no free room: open a new one
    return len(heap)
```

Merge overlapping intervals → sort by START time, merge greedily:
```python
def merge_intervals(intervals):
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend
        else:
            merged.append([start, end])  # new interval
    return merged
```

The pattern behind all three: sort by the dimension that defines your greedy choice. End time for maximizing selections. Start time for merging or allocating resources.
Pattern 2: Greedy on Sorted Input
Many greedy problems become straightforward after sorting. Sorting imposes an order where the locally optimal choice at each step is provably safe.
```python
def assign_cookies(children, cookies):
    """
    Satisfy as many children as possible with the cookies available.
    Each child has a greed factor; each cookie has a size.
    Assign the smallest sufficient cookie to the least greedy child.
    """
    children.sort()
    cookies.sort()
    child = cookie = 0
    while child < len(children) and cookie < len(cookies):
        if cookies[cookie] >= children[child]:
            child += 1  # satisfied this child
        cookie += 1  # try next cookie regardless
    return child
```
```python
def two_city_scheduling(costs):
    """
    Send n people to city A and n to city B.
    costs[i] = [costA, costB]. Minimize total cost.
    Greedy: sort by (costA - costB). Negative = cheaper to send to A.
    Send first n to A, last n to B.
    """
    costs.sort(key=lambda x: x[0] - x[1])
    n = len(costs) // 2
    return sum(c[0] for c in costs[:n]) + sum(c[1] for c in costs[n:])
```
```python
from functools import cmp_to_key

def largest_number(nums):
    """
    Arrange integers to form the largest number.
    Greedy comparison: prefer a before b if str(a)+str(b) > str(b)+str(a).
    """
    nums = list(map(str, nums))
    # Comparator returns -1/0/1 so ties are handled consistently
    nums.sort(key=cmp_to_key(lambda a, b: (a + b < b + a) - (a + b > b + a)))
    result = ''.join(nums)
    return '0' if result[0] == '0' else result
```

Pattern 3: Greedy with Priority Queue
When the greedy choice at each step is "the current best among all available options" (not just the next in sorted order), a heap replaces sorting as the data structure.
```python
from collections import Counter, deque
import heapq

def task_scheduler(tasks, n):
    """
    Minimum time to run all tasks with cooldown n between same tasks.
    Greedy: always run the most frequent available task.
    """
    freq = Counter(tasks)
    max_heap = [-f for f in freq.values()]  # max-heap via negation
    heapq.heapify(max_heap)
    time = 0
    cooling = deque()  # (available_at_time, negated remaining count)
    while max_heap or cooling:
        time += 1
        if max_heap:
            remaining = heapq.heappop(max_heap) + 1  # run task (increment negated count)
            if remaining < 0:  # still has runs left
                cooling.append((time + n, remaining))  # available again after cooldown
        if cooling and cooling[0][0] == time:
            heapq.heappush(max_heap, cooling.popleft()[1])
    return time
```
```python
def find_minimum_waiting_time(queries):
    """
    Minimum total waiting time if you process queries in optimal order.
    Greedy: process shortest queries first (Shortest Job First).
    """
    queries.sort()
    total = 0
    for i, q in enumerate(queries):
        total += q * (len(queries) - 1 - i)  # q is waited on by every query after it
    return total
```
```python
from collections import Counter
import heapq

def reorganize_string(s):
    """
    Rearrange so no two adjacent characters are the same.
    Greedy: always place the most frequent available character.
    """
    counts = Counter(s)
    heap = [(-f, c) for c, f in counts.items()]
    heapq.heapify(heap)
    result = []
    prev_freq, prev_char = 0, ''
    while heap:
        freq, char = heapq.heappop(heap)
        result.append(char)
        if prev_freq < 0:
            heapq.heappush(heap, (prev_freq, prev_char))
        prev_freq, prev_char = freq + 1, char  # decrement (one step less negative)
    return ''.join(result) if len(result) == len(s) else ''
```

Pattern 4: Greedy on Arrays (Jump / Reach)
These problems ask whether you can reach a goal, or what is the minimum number of moves. The greedy insight: at each position, track the farthest you can currently reach, and greedily extend it.
```python
def can_jump(nums):
    """
    Jump Game: can you reach the last index?
    Greedy: track the maximum reachable index.
    """
    max_reach = 0
    for i, jump in enumerate(nums):
        if i > max_reach:
            return False  # current position is unreachable
        max_reach = max(max_reach, i + jump)
    return True
```
```python
def jump_game_ii(nums):
    """
    Jump Game II: minimum number of jumps to reach the last index.
    Greedy: at each 'level', greedily pick the farthest-reaching jump.
    """
    jumps = 0
    current_end = 0  # farthest index reachable with the current number of jumps
    farthest = 0     # farthest index reachable with one more jump
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:  # exhausted current jump range
            jumps += 1
            current_end = farthest  # commit to the best next jump
    return jumps
```
```python
def gas_station(gas, cost):
    """
    Can you complete a circuit? If yes, find the starting station.
    Greedy insight: if total gas >= total cost, a solution exists.
    The answer is the station just after the last point where the
    cumulative tank went negative.
    """
    if sum(gas) < sum(cost):
        return -1
    start = tank = 0
    for i in range(len(gas)):
        tank += gas[i] - cost[i]
        if tank < 0:
            start = i + 1  # reset: start after the station that depleted us
            tank = 0
    return start
```

The gas station proof: If total gas ≥ total cost, a valid starting point exists. Suppose an attempt starting at station 0 first fails at station k. Then every station from 1 to k also fails as a starting point: the prefix balance up to any such station was non-negative (otherwise the run would have failed earlier), so starting there discards a non-negative cushion and reaches k with an even smaller tank. The next viable candidate is therefore k+1, which is exactly where the code restarts.
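The proof can be confidence-checked against an O(n²) brute force that simply tries every start. A sketch with an illustrative helper name, using the classic instance gas = [1, 2, 3, 4, 5], cost = [3, 4, 5, 1, 2], whose answer is station 3:

```python
def brute_force_gas_station(gas, cost):
    """Illustrative O(n^2) checker: try every starting station explicitly."""
    n = len(gas)
    for start in range(n):
        tank = 0
        for step in range(n):
            i = (start + step) % n
            tank += gas[i] - cost[i]
            if tank < 0:
                break
        else:  # inner loop finished: completed the full circuit
            return start
    return -1

gas, cost = [1, 2, 3, 4, 5], [3, 4, 5, 1, 2]
assert brute_force_gas_station(gas, cost) == 3  # matches the greedy answer
```

The greedy version does the same work in a single pass.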
Pattern 5: Constructive Greedy (Build Optimal Structure)
Some problems ask you to build a string, sequence, or arrangement that satisfies a constraint. Greedy constructs it character by character, making the locally safe choice.
```python
def remove_k_digits(num, k):
    """
    Remove k digits to form the smallest possible number.
    Greedy: remove a digit when the next digit is smaller (monotonic stack).
    """
    stack = []
    for digit in num:
        while k and stack and stack[-1] > digit:
            stack.pop()
            k -= 1
        stack.append(digit)
    # If removals remain, drop from the end (stack is already non-decreasing)
    stack = stack[:-k] if k else stack
    result = ''.join(stack).lstrip('0')
    return result or '0'
```
```python
def create_maximum_number(nums1, nums2, k):
    """
    Create the maximum number of length k from two arrays.
    Greedy: try all splits (i digits from nums1, k-i from nums2),
    pick the best subsequence from each, then merge.
    """
    def max_subsequence(nums, length):
        drop = len(nums) - length
        stack = []
        for num in nums:
            while drop and stack and stack[-1] < num:
                stack.pop()
                drop -= 1
            stack.append(num)
        return stack[:length]

    def merge(s1, s2):
        result = []
        while s1 or s2:
            if s1 >= s2:  # lexicographic comparison picks the better head
                result.append(s1[0])
                s1 = s1[1:]
            else:
                result.append(s2[0])
                s2 = s2[1:]
        return result

    return max(
        merge(max_subsequence(nums1, i), max_subsequence(nums2, k - i))
        for i in range(k + 1)
        if i <= len(nums1) and k - i <= len(nums2)
    )
```

When Greedy Fails: Switch to DP
Recognizing when greedy will fail is as important as knowing when it works. The signals:
Signal 1 — The greedy choice depends on future decisions. If making the locally best choice now might foreclose a better global path, greedy is unsafe. Coin change with arbitrary denominations is the canonical example.
Signal 2 — The problem has overlapping subproblems. If you can reach the same state via multiple paths with different accumulated costs, you need DP to compare all paths — greedy only follows one.
Signal 3 — The exchange argument fails. If swapping the greedy choice for an alternative in any optimal solution can make it worse, greedy is wrong.
```
Problem: "Minimum/maximum value" or "count of ways"?
│
├─ Count of ways → Almost always DP
│
├─ Minimum/maximum value
│  │
│  ├─ Has overlapping subproblems? → DP
│  │
│  └─ Greedy choice is "safe"?
│     │
│     ├─ Exchange argument holds → Greedy ✅
│     └─ Exchange argument fails → DP
│
└─ Can you prove locally optimal = globally optimal?
   │
   ├─ Yes (via exchange argument) → Greedy ✅
   └─ No / unsure → DP (safe fallback)
```

Quick test: Try the greedy on a small adversarial case. If you can construct a counterexample in 30 seconds, it fails. If you cannot, think about the exchange argument. If the exchange argument is clear, go greedy. Otherwise, fall back to DP.
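The adversarial test can even be automated. Below is a sketch (helper names are illustrative) that probes a plausible-but-wrong greedy for interval scheduling, picking by earliest START time, by comparing it against brute force on random small cases until one exposes the flaw:

```python
import random
from itertools import combinations

def wrong_greedy(intervals):
    """Plausible-but-wrong greedy: pick intervals by earliest START time."""
    count, last_end = 0, float('-inf')
    for start, end in sorted(intervals):
        if start >= last_end:
            count += 1
            last_end = end
    return count

def brute_force(intervals):
    """Exhaustive check of every subset -- fine for tiny inputs."""
    best = 0
    for r in range(len(intervals) + 1):
        for sub in combinations(intervals, r):
            s = sorted(sub, key=lambda x: x[1])
            if all(s[j][1] <= s[j + 1][0] for j in range(len(s) - 1)):
                best = max(best, r)
    return best

rng = random.Random(42)
for _ in range(500):
    case = [tuple(sorted(rng.sample(range(10), 2))) for _ in range(5)]
    if wrong_greedy(case) != brute_force(case):
        print("counterexample:", case)
        break
```

A long interval that starts first (e.g. (0, 9) alongside (1, 2) and (3, 4)) defeats earliest-start almost immediately; the earliest-end greedy survives the same search.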
The 9 Must-Know Problems
| # | Pattern | Difficulty |
|---|---------|------------|
| 1 | Interval scheduling | Medium |
| 2 | Interval merging | Medium |
| 3 | Interval + heap | Medium |
| 4 | Array reach | Medium |
| 5 | Array reach | Medium |
| 6 | Array reach | Medium |
| 7 | Greedy + heap | Medium |
| 8 | Constructive + monotonic stack | Medium |
| 9 | Greedy on sorted input | Medium |
Work through 1–3 first — interval problems establish the core intuition. Then 4–6 (array reach pattern — Jump Game II and Gas Station both require non-obvious greedy insights). Problem 7 introduces the heap-assisted greedy pattern. Problems 8–9 are the constructive type where the greedy choice is a custom comparison.
Bottom Line
Greedy is the fastest algorithm when it applies — no table, no recursion, just one pass. The difficulty is never the implementation. It is always the justification.
Before coding any greedy solution in an interview, say this out loud: "My greedy choice is X. I believe this is safe because swapping it for any alternative in an optimal solution would not improve the result." If you can complete that sentence convincingly, proceed. If you cannot, reconsider whether DP is the right tool.
The candidates who use greedy confidently in interviews are not the ones who have memorized more problems. They are the ones who have internalized the exchange argument and can apply it on the spot.
🧭 What's Next in Chapter 3
Post 14: Divide & Conquer — the strategy behind merge sort, binary search, and fast power; master the split-solve-merge pattern and the Master Theorem for complexity analysis
Related
Divide & Conquer: The Strategy Behind the Fastest Algorithms
Divide and conquer is behind merge sort, binary search, and fast power. Master the split-solve-merge pattern and the Master Theorem for complexity analysis in one focused post.
Dynamic Programming: From Scary to Systematic in One Post
DP is not about memorizing problems. Recognize two signals — optimal substructure and overlapping subproblems — then apply 6 patterns that make DP finally approachable.
Recursion & Backtracking: Think in Trees, Code in Functions
Every recursive solution is secretly a tree traversal. Once you see that, pruning becomes obvious. Master the backtracking template and 4 problem types found in every interview.