What Are Sensitivity Analyses? Definition and Types

Sensitivity analyses are a way of testing whether the conclusions of a study or model hold up when you change the assumptions behind them. Every statistical model, every cost estimate, every clinical trial result rests on choices the researchers made: which data to include, how to handle gaps, what values to plug into equations. Sensitivity analysis systematically tweaks those choices to see if the final answer stays the same or falls apart. When results remain consistent despite these changes, they’re considered “robust.”

Why Sensitivity Analyses Matter

No study captures reality perfectly. Researchers make assumptions at every step, from estimating how effective a treatment is to deciding what a hospital visit costs. Some of those assumptions are well-supported, others are educated guesses, and a few might be flat-out wrong. Sensitivity analysis asks a simple but powerful question: if we got one of those assumptions wrong, would it change our conclusion?

The answer has real consequences. If a health policy decision hinges on a cost-effectiveness model, and that model’s conclusion flips when you adjust a single input by 10%, decision-makers should be cautious. If the conclusion holds steady even when you push every assumption to its extreme, there’s much stronger reason to trust it. This is why sensitivity analysis is considered a crucial part of grading the strength of evidence a study provides.

One-Way Sensitivity Analysis

The simplest version changes one input at a time while holding everything else constant. Say you’re modeling the cost-effectiveness of a screening program, and one of your inputs is the rate at which the disease progresses. A one-way sensitivity analysis would vary that progression rate across a plausible range and track how your final cost-effectiveness result responds. You’d then repeat this for each input, one by one.
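To make this concrete, here is a minimal Python sketch of a one-way analysis. The icer() model, its inputs, and every parameter value are hypothetical placeholders invented for illustration, not figures from any real screening program.

```python
import numpy as np

# Hypothetical cost-effectiveness model for a screening program.
# Every value here is an illustrative placeholder, not a real input.
def icer(progression_rate, screening_cost, treatment_cost, qaly_gain):
    """Incremental cost-effectiveness ratio (cost per QALY gained)."""
    cases_detected = 1000 * progression_rate  # cases found per 1,000 screened
    incremental_cost = screening_cost * 1000 + treatment_cost * cases_detected
    incremental_qalys = qaly_gain * cases_detected
    return incremental_cost / incremental_qalys

# Base-case ("best guess") value for every input
base = dict(progression_rate=0.05, screening_cost=50,
            treatment_cost=2000, qaly_gain=0.8)

# One-way analysis: sweep the progression rate across a plausible
# range while holding every other input at its base-case value.
for rate in np.linspace(0.02, 0.10, 5):
    inputs = {**base, "progression_rate": rate}
    print(f"progression rate {rate:.2f} -> ICER {icer(**inputs):,.0f} per QALY")
```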

This approach is straightforward and easy to interpret, but it has a blind spot: it can’t detect interactions between inputs. In reality, two or more assumptions might shift simultaneously, and their combined effect could be very different from what you’d predict by changing each one alone.

Multi-Way and Global Approaches

Multi-way sensitivity analysis addresses that blind spot by varying two or more inputs at the same time. A two-way analysis, for instance, might change both the disease progression rate and the cost of treatment simultaneously to see how the combination affects the result. The challenge is scale. For a model with 20 inputs, testing every possible combination of just two values per input would require over a million simulation runs (2^20 = 1,048,576). In practice, multi-way analysis is usually limited to a handful of the most important parameters.
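A two-way version of the same idea, reusing the hypothetical icer() model from the one-way sketch above: evaluate the model over the full grid of (progression rate, treatment cost) pairs and check which combinations stay below an illustrative willingness-to-pay threshold.

```python
import numpy as np
# assumes icer() from the one-way example above

rates = np.linspace(0.02, 0.10, 5)    # disease progression rate
costs = np.linspace(1000, 4000, 4)    # cost of treatment

# Two-way analysis: vary both inputs simultaneously over a grid,
# holding the remaining inputs at their base-case values.
grid = np.empty((len(rates), len(costs)))
for i, rate in enumerate(rates):
    for j, cost in enumerate(costs):
        grid[i, j] = icer(progression_rate=rate, screening_cost=50,
                          treatment_cost=cost, qaly_gain=0.8)

threshold = 5000  # willingness to pay per QALY (illustrative)
print(grid <= threshold)  # True where the combination stays cost-effective
```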

Global sensitivity analysis takes a broader view. Instead of nudging one or two inputs, it explores the entire range of all inputs together, mapping out which factors drive the most variation in results across the full space of possibilities. Local methods (like one-way analysis) focus on small changes around a single starting point. Global methods survey the whole landscape, which makes them more comprehensive but also more computationally demanding.
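One simple way to sketch the global approach, again with the hypothetical icer() model: draw every input at random across its full plausible range, run the model on each draw, and use the rank correlation between each input and the output as a crude importance measure. Variance-based indices such as Sobol indices are the more rigorous tool, but the sampling idea is the same.

```python
import numpy as np
from scipy.stats import spearmanr
# assumes icer() from the one-way example above

rng = np.random.default_rng(0)
n = 10_000

# Sample every input at once across its full (illustrative) range.
samples = {
    "progression_rate": rng.uniform(0.02, 0.10, n),
    "screening_cost":   rng.uniform(25, 100, n),
    "treatment_cost":   rng.uniform(1000, 4000, n),
    "qaly_gain":        rng.uniform(0.5, 1.0, n),
}
results = icer(**samples)  # icer() works elementwise on numpy arrays

# Crude global importance: rank correlation of each input with the
# output across the whole sampled space.
for name, values in samples.items():
    rho, _ = spearmanr(values, results)
    print(f"{name:18s} Spearman rho = {rho:+.2f}")
```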

Probabilistic Sensitivity Analysis

In health economics and cost-effectiveness research, probabilistic sensitivity analysis (PSA) is the standard approach for dealing with uncertainty. Rather than using a single “best guess” for each input, PSA assigns each parameter a probability distribution, reflecting the range of values it could realistically take. The model then runs thousands of times, each time drawing random values from those distributions.

This process, known as Monte Carlo simulation, produces not one answer but a cloud of possible answers. You can then report the probability that a treatment is cost-effective at a given willingness-to-pay threshold, which is far more informative than a single point estimate. International reporting standards like the CHEERS guidelines specifically require researchers to describe any sensitivity analyses they conducted and to report the results of those analyses alongside their main findings.
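A compact sketch of the PSA loop, still using the hypothetical icer() model. The distribution choices (a beta for the bounded rate, a gamma for the right-skewed cost, a normal for the QALY gain) and the threshold are illustrative assumptions, not recommendations.

```python
import numpy as np
# assumes icer() from the one-way example above

rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo draws

# Each parameter gets a probability distribution rather than a point estimate.
progression_rate = rng.beta(5, 95, n)                     # bounded on (0, 1), mean 0.05
treatment_cost   = rng.gamma(shape=4, scale=500, size=n)  # right-skewed, mean 2000
qaly_gain        = rng.normal(0.8, 0.1, n)

results = icer(progression_rate=progression_rate, screening_cost=50,
               treatment_cost=treatment_cost, qaly_gain=qaly_gain)

# Probability of cost-effectiveness at a willingness-to-pay threshold:
# simply the fraction of draws that fall at or below it.
wtp = 5000
print(f"P(cost-effective at {wtp}/QALY): {np.mean(results <= wtp):.1%}")
```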

Handling Missing Data in Clinical Trials

Clinical trials almost always have missing data. Participants drop out, skip visits, or stop responding to surveys. The default assumption in most analyses is that data is “missing at random,” meaning the reason someone dropped out can be explained by information the researchers already have. But what if that’s not true? What if participants who dropped out did so because the treatment wasn’t working, or because side effects were intolerable?

Sensitivity analysis tackles this by deliberately violating the “missing at random” assumption in controlled ways. One common technique, often called a tipping-point or delta-adjustment analysis, involves imputing (filling in) the missing values, then systematically shifting those imputed values in a negative direction to simulate what would happen if dropouts were actually doing worse than the model assumes. Researchers might shift the imputed values by up to two standard deviations from what the model predicts and then check whether the trial’s conclusion still holds. The FDA recommends this type of sensitivity analysis for trials with meaningful amounts of missing data.
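The sketch below illustrates the tipping-point idea on simulated data. The trial, the 20% dropout rate, and the simple imputation rule (filling gaps with the completers’ mean) are all invented for illustration; a real analysis would use a proper multiple-imputation model.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Simulated trial: treatment improves the outcome by 0.5 SD, and 20%
# of treatment-arm outcomes are missing (all values illustrative).
control   = rng.normal(0.0, 1.0, 200)
treatment = rng.normal(0.5, 1.0, 200)
missing   = rng.random(200) < 0.20

# Naive "missing at random" imputation: fill gaps with the mean of
# the observed treatment-arm outcomes.
observed = treatment[~missing]
sd = observed.std(ddof=1)

# Tipping-point analysis: shift the imputed values downward by 0 to
# 2 standard deviations and watch whether the conclusion survives.
for delta in np.linspace(0, 2, 9):
    imputed = np.where(missing, observed.mean() - delta * sd, treatment)
    p = ttest_ind(imputed, control).pvalue
    print(f"shift = {delta:.2f} SD -> p = {p:.4f}")
```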

Reading a Tornado Diagram

Results of one-way sensitivity analyses are often displayed in a tornado diagram. Each input variable gets a horizontal bar showing how much the final result changes when that variable is pushed across its plausible range. The variable with the widest bar (the biggest impact) sits at the top, and bars get progressively narrower as you move down. The result looks like a funnel or tornado shape.
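Here is a small matplotlib sketch that builds one from the hypothetical icer() model used above; the low and high bounds for each input are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
# assumes icer() from the one-way example above

base = dict(progression_rate=0.05, screening_cost=50,
            treatment_cost=2000, qaly_gain=0.8)
ranges = {  # plausible low/high bound for each input (illustrative)
    "progression_rate": (0.02, 0.10),
    "screening_cost":   (25, 100),
    "treatment_cost":   (1000, 4000),
    "qaly_gain":        (0.5, 1.0),
}

# One-way results: model output at each input's low and high bound.
bars = []
for name, (lo, hi) in ranges.items():
    low, high = icer(**{**base, name: lo}), icer(**{**base, name: hi})
    bars.append((name, min(low, high), max(low, high)))

# Sort so the widest bar (biggest impact) plots at the top.
bars.sort(key=lambda b: b[2] - b[1])
names, lows, highs = zip(*bars)

plt.barh(names, np.array(highs) - np.array(lows), left=lows)
plt.axvline(icer(**base), linestyle="--", label="base case")
plt.xlabel("ICER (cost per QALY)")
plt.legend()
plt.show()
```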

These diagrams are useful for quickly identifying which assumptions matter most. If one bar stretches far enough to cross a decision threshold (the point where you’d switch from “yes, adopt this treatment” to “no, don’t”), that parameter deserves extra scrutiny. If all bars are narrow, the model’s conclusion is relatively insensitive to any single input, which is reassuring.

Sensitivity Analysis in Systematic Reviews

Systematic reviews, which pool results from multiple studies, use sensitivity analysis to test whether their conclusions depend on any single study or methodological choice. A common approach is to re-run the overall analysis after excluding studies that have a high risk of bias, or studies with unusual designs. If the pooled result barely changes, reviewers can be more confident it reflects a genuine effect rather than an artifact of one or two questionable studies. Cochrane guidelines recommend this practice as a standard part of any meta-analysis.
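As a sketch of the mechanics, here is a re-analysis using simple fixed-effect inverse-variance pooling on invented study data; the effect sizes, standard errors, and risk-of-bias flags are all hypothetical.

```python
import numpy as np

# Hypothetical study-level effects (e.g., log odds ratios) with
# standard errors; the high_bias flags are illustrative.
effects   = np.array([0.40, 0.35, 0.55, 0.10, 0.80])
se        = np.array([0.15, 0.20, 0.25, 0.18, 0.30])
high_bias = np.array([False, False, False, False, True])

def pooled(effects, se):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1 / se**2
    return np.sum(w * effects) / np.sum(w)

print(f"All studies:         {pooled(effects, se):.3f}")
print(f"Excluding high-bias: {pooled(effects[~high_bias], se[~high_bias]):.3f}")
```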

What Makes a Good Sensitivity Analysis

The value of a sensitivity analysis depends entirely on the assumptions it tests. Testing trivial inputs that no one disputes adds little. Testing the assumptions that are most uncertain or most likely to influence the result is what builds genuine confidence in a study’s conclusions. A well-designed sensitivity analysis identifies the key vulnerabilities in a model, varies them across realistic ranges, and reports whether the direction and size of the main finding hold steady.

When reading a study, look for how the authors describe their sensitivity analyses. Did they test only one scenario, or did they systematically explore the inputs most likely to be wrong? Did the results stay consistent, or did small changes flip the conclusion? A robust result that survives aggressive sensitivity testing carries far more weight than one that was never stress-tested at all.