Alpha-beta pruning is a technique that makes game-playing algorithms faster by skipping over moves that can’t possibly affect the final decision. It works on top of the minimax algorithm, the standard method computers use to play two-player games like chess and checkers. Instead of evaluating every possible move in the game tree, alpha-beta pruning cuts off entire branches that one player would never rationally choose, letting the computer search deeper in the same amount of time.
How Minimax Works (The Starting Point)
To understand alpha-beta pruning, you first need the basics of minimax. In a two-player game, one player tries to maximize their score (the “maximizing” player) while the other tries to minimize it (the “minimizing” player). The minimax algorithm builds a tree of all possible future moves, alternating between the two players at each level, and works backward from the end states to figure out the best move right now.
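The backward-working process can be sketched in a few lines. This is a minimal illustration, not a real game engine: it assumes the game tree is given as nested lists, where an integer is a leaf score and a list holds the child subtrees.

```python
# Minimal minimax sketch. Assumed tree format: an int is a leaf score,
# a list holds the child subtrees (one per available move).
def minimax(node, maximizing):
    if isinstance(node, int):      # end state: return its score directly
        return node
    if maximizing:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

best = minimax([[3, 5], [2, 9]], True)
# The minimizer turns [3, 5] into 3 and [2, 9] into 2; the maximizer
# then picks the better branch, so best == 3.
```

The alternation between `max` and `min` at successive levels is exactly the alternation between the two players’ turns.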
The problem is that this tree grows explosively. If each position has 30 possible moves and you want to look 10 moves ahead, you’re evaluating roughly 30^10 positions, over 590 trillion. Even fast computers can’t handle that. Alpha-beta pruning solves this by recognizing that many of those positions don’t need to be evaluated at all because they’ll never come up in optimal play.
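A quick check of the arithmetic above (the move count and depth are the article’s illustrative figures, not measurements from a real game):

```python
# 30 moves per position, looking 10 moves ahead.
moves_per_position, lookahead = 30, 10
positions = moves_per_position ** lookahead
# positions == 590_490_000_000_000, i.e. just over 590 trillion.
```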
What Alpha and Beta Actually Mean
The algorithm tracks two values as it searches the game tree. Alpha is the minimum score the maximizing player is already guaranteed to get. Beta is the maximum score the minimizing player is already guaranteed to achieve. Together, they form a window: any move whose value falls outside this window is irrelevant, because one player or the other would avoid it.
At the start, alpha is set to negative infinity and beta to positive infinity, meaning anything is possible. As the algorithm evaluates positions, these values tighten. Alpha rises whenever the maximizing player finds a better option. Beta drops whenever the minimizing player finds a better option. The critical moment happens when beta drops to or below alpha. At that point, the two players’ guaranteed outcomes have crossed, meaning the current branch can never produce a result that both players would allow. The algorithm stops exploring that branch entirely.
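The crossing of the two guarantees can be shown with made-up numbers. This fragment only illustrates the window logic described above; the values 3 and 2 are arbitrary:

```python
# The window starts fully open: anything is possible.
alpha, beta = float("-inf"), float("inf")

alpha = max(alpha, 3)   # maximizer finds a move guaranteeing at least 3
beta = min(beta, 2)     # minimizer, in the current branch, can hold the score to 2

window_open = beta > alpha
# window_open is False: the guarantees have crossed, so no outcome in this
# branch is acceptable to both players and the branch can be pruned.
```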
When Branches Get Cut
There are two types of cuts. An alpha cut happens at a minimizing node: the minimizing player finds an option so good (so low in score) that the maximizing player at the parent node would never allow play to reach this branch, because the maximizer already has a better alternative elsewhere. A beta cut is the mirror image at a maximizing node: the maximizing player finds a score so high that the minimizing player above would never permit it.
Here’s a concrete example. Suppose the maximizing player has already found a move elsewhere that guarantees a score of 3. Now the algorithm explores a new branch where the minimizing player’s first option leads to a score of 2. Since the minimizing player can force a score of at most 2 here, and the maximizing player already has a guaranteed 3 from a different branch, there’s no reason to keep exploring. The remaining children of this branch are pruned, no matter how many there are.
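The example above can be reproduced in code. This sketch is trimmed to show just the alpha cut: it reuses the nested-list tree idea, with leaves wrapped in functions so we can record which ones are actually evaluated.

```python
# Each leaf records its visit when evaluated, so pruning is observable.
visited = []

def leaf(score):
    def evaluate():
        visited.append(score)
        return score
    return evaluate

def min_value(children, alpha, beta):
    value = float("inf")
    for child in children:
        value = min(value, child())
        if value <= alpha:      # alpha cut: the maximizer above already
            break               # has a better alternative elsewhere
        beta = min(beta, value)
    return value

def max_value(branches, alpha, beta):
    value = float("-inf")
    for branch in branches:
        value = max(value, min_value(branch, alpha, beta))
        alpha = max(alpha, value)
    return value

# First branch guarantees the maximizer 3; in the second, the minimizer's
# first option scores 2, so the remaining leaf (99) is never evaluated.
tree = [[leaf(3)], [leaf(2), leaf(99)]]
result = max_value(tree, float("-inf"), float("inf"))
# result == 3 and visited == [3, 2]: the 99 leaf was pruned.
```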
The Algorithm Step by Step
The recursive process works like this. At a maximizing node, the algorithm starts with a running value of negative infinity and evaluates each child by calling the minimizing function. After each child returns a value, the algorithm checks two things: whether the running value has reached or exceeded beta (if so, stop and return immediately, because the minimizing player above won’t allow this outcome), and whether alpha should be updated to reflect a better guaranteed minimum for the maximizer.
At a minimizing node, the process mirrors. The running value starts at positive infinity, each child is evaluated by calling the maximizing function, and after each child the algorithm checks whether the running value has dropped to or below alpha (if so, prune) and whether beta should be updated. Leaf nodes, whether they’re actual game endings or positions at the search depth limit, simply return their evaluated score.
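Both halves of the recursion fit in one function. The sketch below follows the steps just described, again assuming the nested-list tree format (an integer is a leaf score or a depth-limit evaluation, a list holds child subtrees):

```python
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):                   # leaf or depth limit: return score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            if value >= beta:                   # beta cut: minimizer above
                break                           # won't allow this outcome
            alpha = max(alpha, value)           # better guaranteed minimum
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        if value <= alpha:                      # alpha cut: maximizer above
            break                               # has a better alternative
        beta = min(beta, value)                 # better guaranteed maximum
    return value

best = alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True)
# best == 3, the same answer plain minimax gives, with fewer leaves visited.
```

Note that pruning changes only how many nodes are examined, never the value returned: alpha-beta always agrees with plain minimax at the root.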
How Much Faster It Gets
In the worst case, alpha-beta pruning examines the same number of nodes as plain minimax, providing no speedup at all. This happens when moves are ordered from worst to best, so the algorithm never gets a chance to prune early. In the best case, when the best move is always examined first, the effective branching factor drops to roughly the square root of the original. If a game has 36 legal moves per position, the best-case effective branching factor is about 6 instead of 36.
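The scale of that square-root reduction is easy to check numerically. The figures below are the article’s hypothetical 36-move game, using the rough best-case approximation of b^(d/2) nodes rather than the exact formula:

```python
branching, depth = 36, 8
plain = branching ** depth             # nodes plain minimax visits
best_case = branching ** (depth // 2)  # roughly b^(d/2) under perfect ordering
# plain == 2_821_109_907_456 (about 2.8 trillion) versus
# best_case == 1_679_616 (about 1.7 million), i.e. 6^8: the effective
# branching factor has dropped from 36 to 6.
```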
This best-case performance effectively doubles the search depth a computer can reach in the same amount of time. A chess engine that could look 8 moves ahead with plain minimax might look 16 moves ahead with perfectly ordered alpha-beta pruning. In practice, the speedup falls somewhere between these extremes, but it’s consistently dramatic enough to make the algorithm essential for competitive game AI.
Why Move Ordering Matters So Much
The gap between best and worst case makes move ordering the single most important factor in alpha-beta’s efficiency. If the algorithm happens to examine the best move first at every node, it prunes the maximum number of branches. If it examines the worst move first, it learns nothing useful and can’t prune anything until much later.
Game engines use several strategies to get closer to optimal ordering. Static approaches use game knowledge to guess which moves are likely best: in chess, captures and checks are typically examined before quiet moves. Dynamic approaches extract ordering hints from the search itself. For instance, if a move causes a cutoff at one depth, it’s likely strong at similar depths too, so the engine tries it first next time. Research on chess engines shows that at nodes where a cutoff occurs, the first move examined is responsible for the cutoff a large majority of the time when ordering is done well. There’s also an interesting asymmetry: move ordering tends to be more effective at odd levels of the tree than even levels, due to the alternating structure of maximizing and minimizing nodes.
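A static ordering scheme can be as simple as sorting candidate moves by a rough priority before the search recurses into them. The move names and priority scores below are illustrative assumptions, not taken from any real engine:

```python
# Static move ordering: examine likely-strong moves first so that
# cutoffs tend to happen on the first child.
def order_moves(moves):
    return sorted(moves, key=lambda m: m["priority"], reverse=True)

moves = [
    {"name": "quiet pawn push", "priority": 0},
    {"name": "check",           "priority": 2},
    {"name": "capture queen",   "priority": 3},
]
ordered = order_moves(moves)
# ordered[0]["name"] == "capture queen": the capture is searched first
# in the hope of producing an early cutoff.
```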
Heuristic Evaluation at Depth Limits
Real games like chess have far too many possible positions to search all the way to checkmate from the opening. Instead, the engine searches to a fixed depth and then evaluates the resulting positions using a heuristic evaluation function. This function assigns a numeric score to a board position based on features like material balance, piece activity, king safety, and pawn structure. It’s a computationally cheap estimate of who’s winning.
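A toy version of such a function, reduced to material balance alone, might look like this. The position format (a list of piece letters, uppercase for the maximizing side, lowercase for the minimizing side) and the standard piece values are illustrative assumptions:

```python
# Conventional material values; kings are omitted (both sides always have one).
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def evaluate(pieces):
    """Positive means the maximizer is ahead on material."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.lower(), 0)
        score += value if piece.isupper() else -value
    return score

# Maximizer has a rook and a pawn (6); minimizer has a knight and
# two pawns (5), so the balance is +1.
balance = evaluate(["R", "P", "n", "p", "p"])
```

Real evaluation functions add many more terms (piece activity, king safety, pawn structure), but the principle is the same: a cheap numeric estimate standing in for the unsearched remainder of the game.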
Alpha-beta pruning works exactly the same way whether the leaf nodes are true game endings or heuristic evaluations. The quality of the evaluation function determines how well the engine plays at a given depth, while alpha-beta pruning determines how deep it can afford to search. Both components matter, and improvements to either one make the engine stronger.
Where Alpha-Beta Pruning Is Used
Alpha-beta pruning has been the backbone of competitive game-playing programs for decades. Stockfish, one of the strongest chess engines in the world, uses alpha-beta pruning to evaluate millions of positions per second. IBM’s Deep Blue, which defeated world champion Garry Kasparov in 1997, relied heavily on it as well. The algorithm applies naturally to any two-player, zero-sum game with perfect information: chess, checkers, Othello, Connect Four, and many others.
Even newer AI systems incorporate pruning concepts. AlphaZero, which taught itself to play chess, shogi, and Go through deep reinforcement learning, uses a form of tree search that prunes unpromising branches, though its approach (Monte Carlo tree search guided by neural networks) differs from classical alpha-beta. The core insight, that you don’t need to evaluate every possibility when some can be ruled out logically, remains central to game AI regardless of the specific implementation.