A risk assessment matrix is a grid that plots each risk you’ve identified along two axes: how likely it is to happen and how severe the consequences would be. You multiply or cross-reference those two scores to get an overall risk level, then color-code the result so the highest-priority threats are impossible to ignore. Building one from scratch takes about an hour once you understand the components, and the process works whether you’re managing a construction project, launching a product, or running a safety audit.
How the Two Axes Work
Every risk matrix rests on two dimensions. The horizontal axis (x-axis) measures likelihood: the probability that a given risk event will actually occur. The vertical axis (y-axis) measures impact: how much damage, cost, or disruption it would cause if it did happen. Each axis is divided into a set number of levels, and the cell where a risk’s likelihood and impact intersect becomes its overall risk rating.
This two-dimensional approach comes from a principle in formal risk management: risk is never just about whether something is likely, and never just about whether something is severe. A highly likely event with trivial consequences may not need your attention. A catastrophic event that’s nearly impossible might. The matrix forces you to weigh both factors together, which prevents you from over-focusing on one dimension.
Choose Your Grid Size
The three most common formats are 3×3, 4×4, and 5×5 grids. Your choice depends on how much granularity you need.
A 3×3 matrix uses three levels on each axis (for example, Low, Medium, High). It’s fast to fill out and easy for small teams or straightforward projects where you just need a rough prioritization. The downside is that many risks cluster into the same cell, making it hard to distinguish between them.
A 5×5 matrix uses five levels on each axis, giving you 25 possible cells. This is the most widely used format in industries like aviation, construction, and enterprise risk management because it separates risks more precisely. The tradeoff is that people sometimes struggle to distinguish between adjacent levels (is this risk “Probable” or “Occasional”?), so you need clear definitions for each level.
A 4×4 matrix splits the difference. It offers decent resolution without overwhelming a team that’s new to the process. If you’re unsure, start with a 5×5. You can always simplify later, but it’s harder to add granularity after the fact.
Define Your Likelihood Scale
Each level on your likelihood axis needs a plain-language description so that different people on your team rate the same risk the same way. Without these anchors, one person’s “Unlikely” is another person’s “Rare,” and the whole matrix becomes unreliable. Here’s a typical five-level scale:
- Rare: Could theoretically happen but almost never does. You’d be surprised if it occurred even once.
- Unlikely: Has happened before in similar situations, but infrequently.
- Possible: Comes up sporadically. Not expected on every project, but wouldn’t be shocking.
- Likely: Encountered several times across similar projects or time periods.
- Almost Certain: Happens continuously or on nearly every project. Treat it as a near-guarantee.
If you can attach rough percentages or frequencies to each level, do it. For example, “Rare” might mean less than a 5% chance within the project timeline, while “Almost Certain” means greater than 90%. Numeric anchors reduce the guesswork significantly.
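To make the idea of numeric anchors concrete, here is a minimal sketch of a five-level likelihood scale in Python. The specific probability ranges are illustrative assumptions, not standard values; calibrate them to your own projects.

```python
# Hypothetical likelihood scale: each level maps to an illustrative
# probability range within the project timeline. Adjust to your context.
LIKELIHOOD_LEVELS = {
    1: ("Rare",           0.00, 0.05),   # less than a 5% chance
    2: ("Unlikely",       0.05, 0.20),
    3: ("Possible",       0.20, 0.50),
    4: ("Likely",         0.50, 0.90),
    5: ("Almost Certain", 0.90, 1.00),   # greater than a 90% chance
}

def likelihood_level(probability: float) -> int:
    """Map an estimated probability to a 1-5 likelihood level."""
    for level, (_, low, high) in LIKELIHOOD_LEVELS.items():
        if low <= probability < high or (level == 5 and probability == 1.0):
            return level
    raise ValueError(f"probability must be in [0, 1], got {probability}")
```

With anchors like these, two raters who both estimate "about a 1-in-3 chance" will land on the same level instead of debating labels.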
Define Your Impact Scale
Impact levels should reflect the specific types of consequences your organization cares about. A hospital's matrix will weight patient safety most heavily. A software company might weight financial loss or downtime. A five-level impact scale often looks like this:
- Negligible: Minor inconvenience. No meaningful cost, delay, or harm.
- Minor: Small financial loss or short delay. Handled within normal operations.
- Moderate: Noticeable cost or schedule disruption. Requires management attention and extra resources.
- Major: Significant financial hit, project failure, serious injury, or major reputational damage.
- Catastrophic: Threatens the survival of the project or organization, or involves loss of life.
Just as with likelihood, attach concrete thresholds wherever you can. “Moderate” financial impact might mean $50,000 to $250,000 in losses for one organization and $5,000 to $25,000 for another. The numbers should reflect your actual scale of operations so ratings stay consistent across the team.
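Financial thresholds like these are easy to encode so ratings stay consistent. The dollar figures below are purely illustrative, borrowing the $50,000-to-$250,000 "Moderate" band mentioned above; your own cutoffs will differ.

```python
# Hypothetical financial-impact thresholds for one organization.
# The dollar figures are illustrative; calibrate them to your own scale.
IMPACT_THRESHOLDS = [
    (5_000,     1),  # below $5k: Negligible
    (50_000,    2),  # $5k to $50k: Minor
    (250_000,   3),  # $50k to $250k: Moderate
    (1_000_000, 4),  # $250k to $1M: Major
]

def impact_level(loss_dollars: float) -> int:
    """Map an estimated financial loss to a 1-5 impact level."""
    for threshold, level in IMPACT_THRESHOLDS:
        if loss_dollars < threshold:
            return level
    return 5  # Catastrophic
```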
Calculate Risk Scores
Assign a numeric value to each level on both axes. For a 5×5 matrix, the simplest approach is to number each level 1 through 5. Then multiply the likelihood score by the impact score for each risk. A risk rated “Likely” (4) with “Major” impact (4) gets a score of 16. A “Rare” (1) risk with “Minor” impact (2) scores 2.
With a 5×5 grid, scores range from 1 to 25. You then group those scores into color-coded bands:
- Green (Low): Scores of 1 to 4. These risks need monitoring but no immediate action.
- Yellow (Moderate): Scores of 5 to 9. Evaluate whether you can reduce these risks with reasonable effort.
- Orange (High): Scores of 10 to 16. These require a mitigation plan and active management.
- Red (Critical): Scores of 17 to 25. Work cannot proceed as planned without changes to lower the risk.
One important rule: any risk with catastrophic potential consequences should be flagged for further review regardless of its overall score. Even if the likelihood is low, the severity alone justifies a closer look. This prevents the math from burying a low-probability disaster under a modest number.
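The scoring rules above, including the catastrophic-severity override, fit in a few lines of Python. This is a sketch of one reasonable implementation, using the band thresholds from this section:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by impact (1-5) for an overall score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Translate a 1-25 score into its color band."""
    if score <= 4:
        return "green"    # Low: monitor
    if score <= 9:
        return "yellow"   # Moderate: evaluate mitigation
    if score <= 16:
        return "orange"   # High: mitigation plan required
    return "red"          # Critical: work cannot proceed as planned

def flag_for_review(likelihood: int, impact: int) -> bool:
    """Catastrophic impact triggers review regardless of the score."""
    return impact == 5 or risk_band(risk_score(likelihood, impact)) in ("orange", "red")
```

Note that `flag_for_review(1, 5)` returns True even though the score of 5 would otherwise land in the yellow band; that is the override doing its job.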
Build the Matrix Step by Step
With your scales and scoring method ready, here’s the sequence for putting it all together:
1. List your risks. Gather your team and brainstorm every risk relevant to the project, process, or operation. Pull from past project data, incident reports, and lessons learned. Don’t filter at this stage; capture everything.
2. Rate each risk. For every risk on the list, assign a likelihood level and an impact level using the scales you defined. Do this as a group when possible, because different perspectives catch blind spots. If two people disagree on a rating, discuss the reasoning rather than splitting the difference.
3. Score and plot. Multiply each risk’s likelihood and impact values. Then place each risk in the corresponding cell on your grid. You can do this in a spreadsheet, a whiteboard, or dedicated risk management software. The visual layout is the whole point: your eye should immediately be drawn to the red and orange zones.
4. Assign response actions. For each color band, define what happens next. Green risks go on a watch list. Yellow risks get evaluated for cost-effective mitigation. Orange and red risks get assigned owners, deadlines, and specific mitigation plans. Any activity rated high or critical should be modified before work proceeds.
5. Review and update. A risk matrix isn’t a one-time document. Revisit it at regular intervals, after major milestones, or whenever conditions change. Risks shift over time: a “Possible” risk can become “Almost Certain” as a deadline approaches, and a “Major” impact can shrink after you put controls in place.
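The sequence above can be sketched as a minimal risk register in code. The risk names and ratings here are illustrative placeholders, and the action text per band is an assumption you would replace with your own procedures.

```python
# Minimal risk-register sketch for a 5x5 matrix.
def score(likelihood: int, impact: int) -> int:
    return likelihood * impact

def band(s: int) -> str:
    if s <= 4:
        return "green"
    if s <= 9:
        return "yellow"
    if s <= 16:
        return "orange"
    return "red"

# Steps 1-2: list risks and rate each one as (name, likelihood, impact).
register = [
    ("Key supplier misses delivery deadline",  4, 3),
    ("Office internet outage on launch day",   1, 2),
    ("Scope creep from late feature requests", 3, 3),
]

# Step 3: score each risk and sort so the worst surface first.
scored = sorted(
    ((name, l, i, score(l, i), band(score(l, i))) for name, l, i in register),
    key=lambda row: row[3],
    reverse=True,
)

# Step 4: assign a response action per band (placeholder actions).
actions = {"green": "watch list", "yellow": "evaluate mitigation",
           "orange": "assign owner and plan", "red": "stop and re-plan"}
for name, l, i, s, b in scored:
    print(f"{s:>2} {b:<7} {name}: {actions[b]}")
```

A spreadsheet does the same job; the point is that scoring, sorting, and band-based actions are mechanical once the scales are defined, leaving the team's judgment for the ratings themselves (step 5, the periodic review, just means re-running this with updated ratings).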
Reducing Bias in Your Ratings
The biggest weakness of any risk matrix is subjectivity. People tend to be optimistic about their own projects, anchor on the first number someone suggests, and underestimate risks they haven’t personally experienced. Research on cognitive bias in risk assessment has identified several practical countermeasures.
First, ground your ratings in historical data whenever possible. Rather than guessing how likely a supply chain delay is, look at how often it happened on your last five projects. This “outside view” approach, originally proposed by psychologists Daniel Kahneman and Amos Tversky, corrects for the natural tendency to treat your current project as special.
Second, get an independent review. Have someone outside the core project team examine your risk ratings and mitigation plans for reasonableness. Insiders often share the same assumptions, and an external set of eyes breaks through groupthink.
Third, run a “premortem.” Before you finalize the matrix, ask the team to imagine the project has already failed and work backward to explain why. This technique surfaces risks that people might hesitate to raise during a normal brainstorming session, especially low-likelihood, high-consequence scenarios that are easy to dismiss despite their devastating potential.
Finally, use more than one method to identify risks. Combine expert judgment, checklists from past projects, what-if scenario exercises, and structured interviews. Cross-checking across methods catches gaps that any single approach would miss.
A Quick Example
Imagine you’re managing a product launch and you’ve identified “key supplier misses delivery deadline” as a risk. Your team rates the likelihood as “Likely” (4 out of 5) because this supplier has been late on two of your last four orders. You rate the impact as “Moderate” (3 out of 5) because you have a backup supplier, but switching would cost extra money and push the launch back by a week. The score is 4 × 3 = 12, which lands in the orange (high) zone. That means you assign an owner, set a deadline for securing a backup agreement, and check in weekly until the delivery is confirmed.
Now compare that to “office internet outage during launch day.” Your team rates likelihood as “Rare” (1) and impact as “Minor” (2), giving a score of 2. That’s green. You note it on the register and move on to more pressing concerns. The matrix just saved you from spending equal time on both risks, which is exactly its purpose.