Greedy Algorithms: When Greed Leads to the Solution
Discover greedy algorithms, an intuitive approach to solving complex problems.
In the vast and sometimes daunting world of algorithms, there exists a family of solutions that is as appealing for its simplicity as it is impressive in its efficiency: greedy algorithms. Whether you’re optimizing a robot’s path or simply giving change at a grocery store checkout, you’re likely applying this logic without even realizing it.
However, as we’ll see, simplicity comes at a cost. While greedy methods excel in speed, they don’t always guarantee the optimal solution. Let’s dive together into the mechanics of these algorithms to understand not only when to use them, but, more importantly, when to be wary of them.
Fundamentals and Philosophy of the Greedy Approach
Principle of local decision
The very essence of a greedy algorithm lies in its decision-making strategy. When faced with a complex problem that requires a sequence of actions, the algorithm makes the choice that appears best at the current moment, without considering future consequences. This is known as a local optimum.
The idea is to build the solution step by step. At each step, the algorithm selects the most immediately advantageous option and commits to it permanently: there is no backtracking. The algorithm assumes that a series of locally optimal choices will inevitably lead to a globally optimal solution. It’s a philosophy rooted in the present: take the largest available piece right now, which is precisely why it’s called “greedy.”
Distinction Between Algorithm and Heuristic
It is crucial for a developer to clearly distinguish between these two terms, as the final outcome depends on it. When the greedy method mathematically guarantees an optimal solution, that is, the best possible result, every time, it is rightfully called a greedy algorithm. This is the ideal scenario.
However, for many complex problems, this strategy yields a solution that is “good enough” or “acceptable,” but not necessarily the absolute best. In such cases, we no longer refer to it as an algorithm in the strict sense, but rather as a greedy heuristic. It is an approximate problem-solving method, highly valuable when finding the perfect solution would require unreasonable computational time (such as evaluating all possible combinations). A heuristic represents a pragmatic trade-off between accuracy and response time.
Efficiency and Performance
Why use a greedy approach if it isn’t always perfect? The answer comes down to one word: efficiency. Since the algorithm never revisits or undoes its previous decisions, it moves linearly toward a solution without exploring all branches of a possibility tree.
This characteristic makes greedy algorithms extremely fast and straightforward to implement. Their time complexity is often low, as most of the computational effort usually lies in a preprocessing step—such as an initial sorting of data. For engineers, this makes the greedy approach a natural first candidate to explore: it’s quick to code and can serve as a baseline to evaluate the performance of more sophisticated solutions.
The Classic Example: The Coin Change Problem

Modeling the Role of the Cashier
The most famous example used to illustrate this concept is the coin change problem. Imagine you are a cashier who must return an exact amount (for example, 49 cents) to a customer using the fewest number of coins possible. You have an unlimited supply of coins of different denominations (defined by the monetary system).
This is a classic optimization problem. The input consists of the amount to return and the list of available coin denominations. The expected output is the smallest possible set of coins whose total value equals the target amount. The intuitive approach that any human would naturally take is greedy: repeatedly select the largest coin that does not exceed the remaining amount, and continue until the balance reaches zero.
Technical Implementation in Python
Let’s translate this logic into code. The algorithm consists of iterating through the coin denominations from the highest to the lowest value, and taking as many coins as possible of each denomination while the remaining amount allows it.
Here is a standard implementation in Python:
```python
def rendu_glouton(restant, pieces):
    # We assume the 'pieces' list is sorted in ascending order
    # Start with the largest coin (end of the list)
    indice_pieces = len(pieces) - 1
    rendu = []
    while restant > 0 and indice_pieces >= 0:
        valeur_piece = pieces[indice_pieces]
        if valeur_piece <= restant:
            # Take the coin: greedy choice
            restant -= valeur_piece
            rendu.append(valeur_piece)
        else:
            # Coin is too large; move to the next smaller one
            indice_pieces -= 1
    return rendu

# Example using the Euro coin system (values in cents)
systeme_euro = [1, 2, 5, 10, 20, 50, 100, 200, 500]
print(rendu_glouton(49, systeme_euro))
# Expected output: [20, 20, 5, 2, 2]
```

In this script, the while loop embodies the repetition of local decisions. At each iteration, we make the best possible choice, selecting the largest coin that fits, to reduce the remaining amount, without ever revising or undoing previously selected coins.
Termination Analysis
A fundamental question in algorithmics is whether a program will eventually terminate. In the case of the coin change problem, the greedy algorithm always terminates, provided one specific condition is met: the monetary system must include the unit coin (denomination 1).
Indeed, if a coin of value 1 is available, it is always possible to complete the change for any remaining amount, no matter how small. This ensures the algorithm cannot get stuck in an infinite loop or fail to produce an exact solution. This property provides a crucial guarantee of robustness for such systems.
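To make this guarantee concrete, here is a short sketch (redefining the `rendu_glouton` function from above so the snippet is self-contained; the denominations are invented for illustration). Without the unit coin, the loop still terminates, but it can exit with a nonzero remainder, silently returning incomplete change:

```python
def rendu_glouton(restant, pieces):
    # Same greedy change-making as above: 'pieces' sorted ascending
    indice_pieces = len(pieces) - 1
    rendu = []
    while restant > 0 and indice_pieces >= 0:
        valeur_piece = pieces[indice_pieces]
        if valeur_piece <= restant:
            restant -= valeur_piece
            rendu.append(valeur_piece)
        else:
            indice_pieces -= 1
    return rendu

# Hypothetical system (2, 5) without the unit coin, amount 3:
# greedy takes a 2, then is stuck with 1 cent it cannot return.
coins = rendu_glouton(3, [2, 5])
print(coins, sum(coins))  # [2] 2  -- the change is incomplete

# With the unit coin present, the exact amount is always reached.
print(rendu_glouton(3, [1, 2, 5]))  # [2, 1]
```

A production version would check that the remainder is zero and signal failure otherwise, rather than returning a partial result.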
The Pitfalls of the Greedy Approach: When Intuition Fails

Illustration Through a Counterexample
This is where things get interesting. The greedy algorithm appears flawless with standard currencies like the Euro, but it can fail dramatically with other coin systems. Consider a hypothetical system with denominations (1, 3, 4) and the goal of making change for the amount 6.
The greedy algorithm will apply its blind logic as follows:
- It selects the largest coin less than or equal to 6. It chooses 4.
- There remains 2 to return (6 – 4).
- The largest coin less than or equal to 2 is 1.
- There remains 1. It takes another 1.
Greedy result: 4 + 1 + 1 (3 coins).
Yet a human observer immediately sees a more optimal solution: 3 + 3, which uses only 2 coins! In this specific case, the locally optimal choice (taking the 4) prevented the algorithm from reaching the globally optimal solution. Therefore, for this coin system, the greedy algorithm is not correct: it fails to produce the minimal number of coins.
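The counterexample can be verified in a few lines. Below is a minimal sketch comparing the greedy result with an exhaustive search (the `optimal_change` brute force is only viable for tiny instances like this one; both function names are ours, not standard library code):

```python
from itertools import combinations_with_replacement

def greedy_change(amount, coins):
    # Greedy: always take the largest coin that still fits
    result = []
    for c in sorted(coins, reverse=True):
        while c <= amount:
            amount -= c
            result.append(c)
    return result

def optimal_change(amount, coins):
    # Brute force: try all multisets of coins of increasing size
    for n in range(1, amount + 1):
        for combo in combinations_with_replacement(coins, n):
            if sum(combo) == amount:
                return list(combo)
    return []

print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1] -> 3 coins
print(optimal_change(6, [1, 3, 4]))  # [3, 3]    -> 2 coins
```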
The Concept of a Canonical Coin System
Why does it work with the euro but not with the (1, 3, 4) system? Monetary systems for which the greedy algorithm always yields the optimal solution are called canonical coin systems.
Unfortunately, there is no simple, universal rule to instantly determine whether a given system is canonical. However, it can be mathematically proven that the Euro coin system is indeed canonical. This fortunate property simplifies life for cashiers and automated payment systems alike—but it is not a general rule in optimization. In other words, the success of the greedy approach in everyday currency is the exception, not the norm.
The Local Maximum Dilemma
To visualize this pitfall, we often use the mountain climber analogy (or hill climbing). Imagine you’re in thick fog on a mountainous landscape, trying to reach the highest possible point, the global maximum (M).
Your greedy strategy is to always climb the steepest slope available from your current position. By following this rule, you will eventually reach a peak, but there’s no guarantee it’s the highest one in the entire range. You might end up stuck on a secondary summit (a local maximum, denoted m), while the true global peak (M) lies just beyond a valley.
Because the greedy algorithm never moves downhill (no backtracking, no exploration of alternatives), it remains trapped at this suboptimal solution, unable to “see” or reach the better peak hidden by the terrain.
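The climber’s dilemma can be sketched on a one-dimensional terrain (the heights below are an invented toy example): the greedy climber always steps toward the taller neighbor, and where it ends up depends entirely on where it starts.

```python
def hill_climb(heights, start):
    # Greedy ascent: move to the taller neighbor, stop at a peak
    # (which may be only a local maximum).
    pos = start
    while True:
        neighbors = [i for i in (pos - 1, pos + 1) if 0 <= i < len(heights)]
        best = max(neighbors, key=lambda i: heights[i])
        if heights[best] <= heights[pos]:
            return pos  # no neighbor is higher: trapped at this peak
        pos = best

terrain = [1, 3, 5, 4, 2, 6, 9, 7]  # local max at index 2, global at index 6
print(hill_climb(terrain, 0))  # 2 -- stuck on the secondary summit
print(hill_climb(terrain, 4))  # 6 -- this start reaches the global maximum
```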
Advanced Applications: Graphs and Travelers
Network Optimization (Prim and Kruskal)
Beyond making change, greedy algorithms reign supreme in graph theory—particularly for solving the Minimum Spanning Tree (MST) problem. Imagine you need to connect several cities with fiber-optic cables while minimizing the total length of cable used.
Prim’s and Kruskal’s algorithms are two brilliant examples of greedy strategies that work flawlessly for this task. Both build the MST by repeatedly selecting the cheapest available edge that doesn’t create a cycle, but they do so in different ways:
- Kruskal’s algorithm sorts all edges by weight and adds them one by one from smallest to largest, using a union-find structure to avoid cycles.
- Prim’s algorithm starts from an arbitrary node and grows the tree by always adding the shortest edge that connects a new vertex to the existing tree.
In this context, the greedy choice does lead to a globally optimal solution, thanks to the mathematical properties of spanning trees, making these algorithms both efficient and exact.
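As a concrete illustration of the Kruskal strategy described above, here is a compact sketch with a simple union-find (the four-node network and its weights are invented for the example; a real implementation would typically add union by rank):

```python
def kruskal(num_nodes, edges):
    # edges: list of (weight, u, v) tuples
    parent = list(range(num_nodes))

    def find(x):
        # Find the component root, compressing the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Hypothetical 4-city network; weights = cable lengths
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(mst, total)  # 3 edges spanning all 4 cities, total length 6
```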
Strategies for the Traveling Salesman Problem
The famous Traveling Salesman Problem (TSP), visiting a list of cities and returning to the starting point while minimizing total distance, is far more challenging. There is no known efficient algorithm that can guarantee the perfect solution in reasonable time: the optimization problem is NP-hard (its decision version is NP-complete).

Here, the greedy approach becomes a valuable heuristic. The nearest neighbor method consists of always moving to the closest unvisited city. Although this technique does not yield the absolute shortest route, it produces a feasible solution extremely quickly.
Other variants, such as the best insertion heuristic, where a new city is inserted into the existing tour at the position that adds the least extra distance, allow for refined and often higher-quality results. While still not guaranteed to be optimal, these greedy-based heuristics strike a practical balance between computational efficiency and solution quality, making them widely used in real-world routing and logistics applications.
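The nearest neighbor heuristic is short enough to sketch directly (the city coordinates below are invented; `math.dist` computes Euclidean distance):

```python
import math

def nearest_neighbor_tour(cities, start=0):
    # cities: list of (x, y) points; greedily hop to the closest
    # unvisited city, then close the loop back to the start.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting point
    return tour

def tour_length(cities, tour):
    return sum(math.dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))

# Hypothetical city coordinates
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]
tour = nearest_neighbor_tour(cities)
print(tour, round(tour_length(cities, tour), 2))
```

The tour is always feasible (every city visited once, loop closed), but its length depends on the starting city; running the heuristic from each start and keeping the best tour is a common cheap improvement.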
Constraint Management (Graph Coloring)
Finally, let’s consider a classic scheduling problem, graph coloring. The goal is to assign a color to each node in a graph such that no two adjacent nodes share the same color, while using as few colors as possible.
Finding the absolute minimum, the chromatic number, is computationally very hard for complex graphs (it’s an NP-hard problem). However, a simple greedy heuristic offers a practical solution: process the vertices one by one, and assign each the first available color not already used by its colored neighbors.
This approach yields a valid coloring quickly and efficiently, though it may use more colors than the theoretical minimum. Despite this limitation, the greedy coloring heuristic is widely used in real-world applications such as register allocation in compilers, frequency assignment in telecommunications, and exam timetabling, where speed and feasibility often outweigh the need for perfect optimality.
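The greedy coloring heuristic described above fits in a few lines. Here is a sketch on a small invented conflict graph (think of vertices as exams and edges as pairs of exams sharing a student; colors are time slots):

```python
def greedy_coloring(adjacency):
    # adjacency: dict mapping each vertex to its list of neighbors
    colors = {}
    for v in adjacency:                 # process vertices in a fixed order
        used = {colors[n] for n in adjacency[v] if n in colors}
        color = 0
        while color in used:            # smallest color no neighbor uses
            color += 1
        colors[v] = color
    return colors

# Hypothetical exam-scheduling conflict graph
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B'],
    'D': ['B'],
}
print(greedy_coloring(graph))  # {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

Note that the number of colors used can depend heavily on the order in which vertices are processed; ordering vertices by decreasing degree is a common refinement that often reduces it.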
