How to Calculate Sensitivity: Step-by-Step Formula

Sensitivity measures how well a test catches people who actually have a condition. The formula is: sensitivity = true positives ÷ (true positives + false negatives). The result is a proportion, typically expressed as a percentage. A sensitivity of 95% means the test correctly identifies 95 out of every 100 people who truly have the condition.

The Formula Explained

To calculate sensitivity, you only need two numbers from your test results:

  • True positives (TP): People who have the condition and tested positive.
  • False negatives (FN): People who have the condition but tested negative.

The formula is:

Sensitivity = TP ÷ (TP + FN)

The denominator, TP + FN, represents everyone who actually has the condition. So sensitivity is really asking: of all the people who are sick, what fraction did the test correctly flag? Notice that people without the condition don’t factor into this calculation at all. Sensitivity is purely about the test’s performance among those who actually have the condition.
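The formula translates directly into code. Here is a minimal sketch in Python (the function name is my own choice, not from any particular library):

```python
def sensitivity(tp, fn):
    """Fraction of people who truly have the condition that the test flags positive."""
    return tp / (tp + fn)

# A test that catches 95 of 100 true cases has 95% sensitivity.
print(sensitivity(95, 5))  # 0.95
```

Note that false positives and true negatives never appear in the function, matching the point above: only the column of people who actually have the condition matters.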

Step-by-Step Calculation

Suppose you’re evaluating a rapid strep test. You test 200 people, and a gold-standard throat culture confirms that 50 of them truly have strep throat. Your rapid test correctly identifies 45 of those 50 as positive but misses 5, calling them negative.

Step 1: Identify your values. True positives = 45. False negatives = 5.

Step 2: Add them together. 45 + 5 = 50 (the total number of people who actually have strep).

Step 3: Divide. 45 ÷ 50 = 0.90.

Step 4: Convert to a percentage. 0.90 × 100 = 90%.

Your test has a sensitivity of 90%, meaning it catches 9 out of every 10 true cases. The remaining 10% are false negatives, people who slip through undetected.

Reading a 2×2 Table

Diagnostic test results are commonly organized in a 2×2 table (also called a contingency table or confusion matrix). The columns represent the true condition (disease present or absent), and the rows represent the test result (positive or negative). The four cells are:

  • Cell A (top left): True positives. Has the condition, tested positive.
  • Cell B (top right): False positives. Does not have the condition, tested positive.
  • Cell C (bottom left): False negatives. Has the condition, tested negative.
  • Cell D (bottom right): True negatives. Does not have the condition, tested negative.

Sensitivity uses only the left column: A ÷ (A + C). If cell A is 369 and cell C is 15, sensitivity is 369 ÷ 384, or about 96.1%. Cells B and D are irrelevant for this particular calculation.
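Reading the left column out of a 2×2 table is straightforward in code. In this sketch the A and C cells use the numbers from the example above; the B and D values are arbitrary placeholders, since (as noted) they don't affect the result:

```python
# Rows: test result (positive, negative); columns: true condition (present, absent).
table = [
    [369, 52],    # A = true positives,  B = false positives (placeholder)
    [15, 1464],   # C = false negatives, D = true negatives (placeholder)
]

tp = table[0][0]  # cell A
fn = table[1][0]  # cell C
sens = tp / (tp + fn)
print(round(sens, 3))  # 0.961
```

Changing the B or D cells to any other values leaves the printed sensitivity unchanged.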

Why Sensitivity Matters

High sensitivity is especially important for screening tests, where the goal is to avoid missing anyone who has a condition. A highly sensitive test produces few false negatives. That matters because a false negative gives someone the misleading impression that they’re healthy. They may skip follow-up testing, delay treatment, or unknowingly spread an infectious disease. For conditions where early treatment improves outcomes, like breast cancer or HIV, missed cases can directly increase the risk of serious harm or death.

This is why screening tests tend to prioritize sensitivity even at the cost of some false positives. A false positive leads to extra testing, which is inconvenient but correctable. A false negative can be dangerous.

Sensitivity vs. Specificity

Sensitivity and specificity answer two different questions. Sensitivity asks: of everyone who has the condition, how many did the test catch? Specificity asks the mirror question: of everyone who does not have the condition, how many did the test correctly rule out?

Specificity = true negatives ÷ (true negatives + false positives).

A test can be highly sensitive but not very specific, meaning it catches nearly all true cases but also flags a lot of healthy people. The reverse is also possible. In practice, there’s often a tradeoff. Adjusting the cutoff threshold on a test to catch more true positives (raising sensitivity) tends to increase false positives (lowering specificity). The right balance depends on the clinical context and the consequences of each type of error.
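The contrast between the two measures is easy to see with a hypothetical screening test that is very sensitive but less specific (the counts below are invented for illustration):

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)      # among people WITH the condition

def specificity(tn, fp):
    return tn / (tn + fp)      # among people WITHOUT the condition

# Hypothetical screening test: catches nearly all true cases,
# but also flags 150 of 1000 healthy people.
tp, fn = 98, 2
tn, fp = 850, 150
print(sensitivity(tp, fn))   # 0.98
print(specificity(tn, fp))   # 0.85
```

The two functions draw on disjoint groups of people, which is why a test can score high on one and low on the other.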

The False Negative Rate

Sensitivity and the false negative rate are two sides of the same coin. If you know one, you know the other:

False negative rate = 1 − sensitivity

A test with 90% sensitivity has a 10% false negative rate. This is a quick way to reframe sensitivity in terms of the errors it produces, which can be more intuitive when you’re evaluating whether a test is good enough for a particular purpose.
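Using the strep test counts from earlier, the complementary relationship falls out directly:

```python
tp, fn = 45, 5          # strep example: 45 caught, 5 missed
sens = tp / (tp + fn)   # 0.9
fnr = fn / (tp + fn)    # equivalently, 1 - sens
print(sens, fnr)        # 0.9 0.1
```

Because every true case is either caught or missed, sensitivity and the false negative rate always sum to 1.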

Does Prevalence Affect Sensitivity?

In theory, sensitivity is an intrinsic property of the test itself and shouldn’t change based on how common the disease is in the population being tested. In practice, the picture is more complicated. A large analysis of nearly 7,000 diagnostic studies found that higher disease prevalence was associated with higher estimated sensitivity. Compared to the lowest quartile of prevalence, studies in the highest quartile had 47% greater odds of identifying a true positive case. The reasons likely involve differences in disease severity across study populations, since groups with higher prevalence often include more advanced or obvious cases that are easier for a test to detect.

For most practical purposes, you can treat sensitivity as a fixed characteristic of the test. But if you’re comparing sensitivity values across studies with very different populations, prevalence differences may partly explain the variation.

Adding a Confidence Interval

A single sensitivity value calculated from a study sample is a point estimate. To express how precise that estimate is, you can add a 95% confidence interval using this formula:

Sensitivity ± 1.96 × √(sensitivity × (1 − sensitivity) ÷ n)

Here, n is the total number of people who truly have the condition (your TP + FN). Using the strep test example, with a sensitivity of 0.90 and n = 50:

0.90 ± 1.96 × √(0.90 × 0.10 ÷ 50) = 0.90 ± 0.083

That gives a 95% confidence interval of roughly 81.7% to 98.3%. The wider the interval, the less certain you can be about the true sensitivity. Larger sample sizes produce narrower, more precise intervals.
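The interval above can be reproduced numerically. This sketch follows the formula in the text, with n = TP + FN and 1.96 as the critical value for 95% coverage:

```python
import math

p = 45 / 50   # sensitivity point estimate (0.90)
n = 50        # everyone who truly has the condition (TP + FN)
z = 1.96      # critical value for a 95% interval

half_width = z * math.sqrt(p * (1 - p) / n)
low, high = p - half_width, p + half_width
print(round(half_width, 3))           # 0.083
print(round(low, 3), round(high, 3))  # 0.817 0.983
```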

This simple formula (the normal-approximation, or Wald, interval) works well for large samples where the sensitivity isn’t extremely close to 0% or 100%. When sample sizes are small or sensitivity is near the extremes, more accurate methods like the Clopper-Pearson exact method or Wilson method should be used instead, because the simple formula can produce confidence intervals that fall outside the 0 to 1 range.
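For reference, the Wilson method has a closed form that is simple to implement. A minimal sketch (standard Wilson score formula, z = 1.96 for a 95% interval):

```python
import math

def wilson_interval(tp, fn, z=1.96):
    """Wilson score confidence interval for sensitivity; stays within [0, 1]."""
    n = tp + fn
    p = tp / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_interval(45, 5)
print(round(low, 3), round(high, 3))
```

For the strep example (p = 0.90, n = 50) this gives roughly 0.786 to 0.957: pulled toward the center compared with the simple formula, and guaranteed never to spill outside the 0 to 1 range.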