What Is an Attack Rate in Epidemiology?

Epidemiology is the branch of public health science devoted to studying the distribution and determinants of health-related states or events in specific populations. The attack rate is a fundamental measure employed during acute events, such as outbreaks. It provides a simple, direct metric to quantify the probability that an exposed individual will develop an illness within a specific, usually short-term, period. This measure is used to guide rapid responses during public health investigations.

What the Attack Rate Measures

The attack rate measures the likelihood of developing a disease within a defined group during an epidemic. It is a proportion, often used interchangeably with cumulative incidence or risk, that measures the probability of illness among a population initially free of the disease. Although it contains the word “rate,” it is technically an incidence proportion, because its denominator counts people at risk rather than person-time; the observation window is simply the specified outbreak period.

To calculate this proportion, epidemiologists first establish a clear case definition, which may rely on clinical signs or laboratory confirmation. The numerator is the total number of new cases meeting this definition during the outbreak. The denominator is the specific population considered at risk of acquiring the disease at the start of that period. The resulting proportion indicates the overall extent of the outbreak, showing what percentage of the at-risk population became ill.

The Basic Calculation

The formula for calculating the overall attack rate is straightforward: (Number of new cases / Population at risk) x 100%. This calculation expresses the risk as a percentage, allowing for easy communication. For instance, if a community of 1,000 people experiences 18 cases during a short outbreak, the overall attack rate is 1.8%.
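For readers who want to check the arithmetic, a minimal Python sketch of this calculation is shown below; the function name and the example figures are purely illustrative.

```python
def attack_rate(new_cases: int, population_at_risk: int) -> float:
    """Return the attack rate as a percentage of the population at risk."""
    if population_at_risk <= 0:
        raise ValueError("population at risk must be positive")
    return new_cases / population_at_risk * 100

# Community of 1,000 people with 18 new cases during the outbreak period
print(f"{attack_rate(18, 1000):.1f}%")  # prints 1.8%
```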

In a foodborne illness investigation, if 99 attendees ate potato salad and 30 developed gastroenteritis, the calculation (30 / 99) x 100% yields a food-specific attack rate of 30.3%. This percentage represents the risk of illness specifically associated with consuming that item. The attack rate provides a rapid assessment of risk because the time period is limited to the duration of the outbreak, making the formula simple and readily applicable in the field.
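The food-specific figure can be checked with the same kind of arithmetic; the snippet below, with variable names chosen only for illustration, reproduces the potato salad example.

```python
# Food-specific attack rate for attendees who ate the potato salad
ate_item = 99            # attendees exposed to the suspect food
ill_among_exposed = 30   # exposed attendees who developed gastroenteritis

food_specific_rate = ill_among_exposed / ate_item * 100
print(f"{food_specific_rate:.1f}%")  # prints 30.3%
```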

When Attack Rates Are Most Useful

Attack rates are the preferred metric for investigating acute, time-limited events where exposure occurs over a short duration, such as foodborne outbreaks or single-source environmental exposures. The metric quickly quantifies the impact of the infectious agent or exposure, making it valuable in field epidemiology for informing immediate public health responses.

A primary use is identifying the specific source of exposure by calculating “exposure-specific” attack rates. Epidemiologists compare the rate among individuals exposed to a specific item against the rate among those who were not exposed. A markedly higher attack rate in the exposed group suggests that the item is the likely source of the illness. For example, investigators may compare the illness risk for those who ate a chicken dish versus those who did not. This technique works well for localized outbreaks because the attack rate assumes a closed population and a fixed period, conditions that make it well suited to guiding targeted interventions.
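A minimal sketch of that comparison appears below. The counts for the chicken dish are hypothetical, and the risk ratio is included only to show how the two attack rates are typically contrasted.

```python
def attack_rate(cases: int, at_risk: int) -> float:
    """Attack rate as a percentage of the group at risk."""
    return cases / at_risk * 100

# Hypothetical counts for one menu item
exposed_cases, exposed_total = 40, 60      # ate the chicken dish
unexposed_cases, unexposed_total = 5, 50   # did not eat it

rate_exposed = attack_rate(exposed_cases, exposed_total)        # 66.7%
rate_unexposed = attack_rate(unexposed_cases, unexposed_total)  # 10.0%

# A ratio well above 1 points toward the item as the likely source
risk_ratio = rate_exposed / rate_unexposed
print(f"Exposed: {rate_exposed:.1f}%  Unexposed: {rate_unexposed:.1f}%  Ratio: {risk_ratio:.1f}")
```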

Understanding Primary and Secondary Rates

Attack rates are often divided into primary and secondary rates to distinguish between different stages of disease spread. The primary attack rate refers to the initial cases that acquired the infection directly from a common source, such as contaminated food or water. This rate reflects the risk associated with the initial, single exposure event and is often used interchangeably with the overall attack rate.

The secondary attack rate (SAR) is a specialized measure used to assess person-to-person spread of a disease after it has been introduced into a group. The SAR measures the proportion of close contacts of the primary cases, such as household members, who subsequently become ill. It is typically calculated in closed settings where close contact facilitates transmission.

To calculate the SAR, the numerator consists of new cases developing among contacts within one incubation period. The denominator is the total number of susceptible contacts, often approximated by subtracting the primary cases from the total population of the closed setting. A high SAR suggests a highly contagious agent, making it a tool for evaluating contagiousness and the effectiveness of control measures like isolation.
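A brief sketch of that calculation, using a single hypothetical household, is shown below; the counts are invented to illustrate how the denominator excludes the primary cases.

```python
def secondary_attack_rate(secondary_cases: int, group_size: int, primary_cases: int) -> float:
    """SAR as a percentage of susceptible contacts (group members minus primary cases)."""
    susceptible_contacts = group_size - primary_cases
    if susceptible_contacts <= 0:
        raise ValueError("no susceptible contacts remain")
    return secondary_cases / susceptible_contacts * 100

# Hypothetical household of 5 with 1 primary case; 2 contacts fall ill
# within one incubation period
print(f"{secondary_attack_rate(2, 5, 1):.0f}%")  # prints 50%
```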