How to Calculate Screen Failure Rate in Clinical Trials

The screen failure rate is calculated by dividing the number of patients who fail screening by the total number of patients screened, then multiplying by 100 to get a percentage. If you screen 200 patients and 50 fail to meet eligibility criteria, your screen failure rate is 25%. The formula itself is simple; the real value lies in understanding what drives that number and what it costs you.

The Basic Formula

Screen failure rate (SFR) is expressed as:

SFR = (Number of Screen Failures ÷ Total Number of Patients Screened) × 100

A screen failure is any patient who enters the screening process but does not proceed to randomization or enrollment. The denominator includes every patient who signed an informed consent form and began any screening procedures, regardless of how far they got. The numerator counts everyone who dropped out or was excluded during that window.

You can also flip the formula to calculate how many patients you need to screen to hit your enrollment target. If your expected SFR is 30% and you need 100 enrolled patients, divide 100 by (1 minus 0.30), which gives you approximately 143 patients to screen. This adjusted screening target is critical for realistic recruitment planning.
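As a quick sketch, both calculations can be expressed in a few lines of Python (the function names here are illustrative, not from any trial-management library):

```python
import math

def screen_failure_rate(failures: int, screened: int) -> float:
    """Screen failure rate as a percentage."""
    return failures / screened * 100

def screening_target(enrollment_target: int, expected_sfr: float) -> int:
    """Patients to screen to hit an enrollment target, rounding up."""
    return math.ceil(enrollment_target / (1 - expected_sfr))

print(screen_failure_rate(50, 200))  # 25.0
print(screening_target(100, 0.30))   # 143
```

Rounding up with `math.ceil` reflects that you cannot screen a fractional patient; a planning model should err toward screening one more rather than one fewer.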

How Screen Failures Affect Your Budget

Every screen failure costs money without contributing data. The screening process typically involves labs, imaging, physical exams, and site staff time. A commonly cited benchmark puts the cost of screening at roughly $400 per patient, though this varies widely by therapeutic area and protocol complexity.

The financial math is straightforward. At a 10% screen failure rate, every nine enrolled patients effectively subsidize one failed screen. That adds about $44 per enrolled patient to cover the loss. But as the rate climbs, so does the per-patient cost burden. At 30%, you’re absorbing one failure for roughly every 2.3 enrolled patients, and the added cost per enrollee more than triples. Clinical trial agreements sometimes cap how many screen failures the sponsor will reimburse (for example, four screen failures per enrolled subject), so rates above those thresholds can leave sites absorbing costs directly.
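To make that arithmetic concrete, here is a small Python sketch using the roughly $400-per-screen benchmark cited above (the function name and default value are illustrative):

```python
SCREEN_COST = 400  # rough benchmark; varies by therapeutic area and protocol

def added_cost_per_enrollee(sfr: float, screen_cost: float = SCREEN_COST) -> float:
    """Screening cost of failures, spread across the patients who enroll."""
    failures_per_enrollee = sfr / (1 - sfr)
    return failures_per_enrollee * screen_cost

print(round(added_cost_per_enrollee(0.10), 2))  # 44.44
print(round(added_cost_per_enrollee(0.30), 2))  # 171.43
```

The jump from about $44 to about $171 per enrollee is why the burden grows faster than the rate itself: the ratio sfr/(1 − sfr) is nonlinear in the screen failure rate.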

Typical Rates by Therapeutic Area

Knowing what’s “normal” helps you benchmark your own trial. In oncology, screen failure rates of 20% to 30% are common. Phase III prostate cancer trials average about 26%, with individual studies ranging from 12% to 45%. Kidney cancer trials cluster around 25%, while bladder cancer trials tend to be slightly lower at 19%. These figures come from a review of contemporary randomized phase II and III genitourinary trials.

Rare disease trials face even steeper challenges. Screening and randomization difficulties are a major driver of the roughly four additional years that rare disease programs spend in development compared to more common indications. Smaller patient pools mean more candidates need to be evaluated to find those who qualify, pushing screen failure rates higher and extending timelines substantially.

Why Patients Fail Screening

Screen failures generally fall into a few categories. A large analysis of early-phase oncology trials broke them down this way:

  • Radiological reasons (29%): Imaging reveals disqualifying findings. Newly discovered brain metastases were the single most common cause, followed by disease that wasn’t measurable by study criteria or absence of a suitable target for a required biopsy.
  • Biological reasons (24%): Lab results show organ dysfunction or blood values outside protocol limits. About two-thirds of these were due to vital organ problems (liver, kidney, or bone marrow function).
  • Clinical deterioration (12%): The patient’s overall health worsened between pre-screening and the formal assessment, making them too unwell to participate.
  • Administrative reasons (11%): Paperwork issues, insurance problems, or logistical barriers unrelated to the patient’s medical status.
  • Patient refusal: Some patients decide not to continue after learning more about the trial requirements.

The balance of these causes shifts depending on the disease area and phase. In later-phase cardiovascular or metabolic trials, lab-based exclusions and patient refusal tend to dominate. In oncology, imaging findings play an outsized role because protocols often require measurable disease confirmed by recent scans.

Calculating the Timeline Impact

Screen failures don’t just cost money. They extend your enrollment period. If your original timeline assumes screening 10 patients per month with a 15% failure rate, you’d expect to enroll about 8.5 patients monthly. But if the actual failure rate turns out to be 35%, that drops to 6.5 enrolled patients per month, a 24% slowdown.

To estimate the adjusted enrollment duration, use this approach:

Adjusted monthly enrollment = Monthly screening capacity × (1 − SFR)

Adjusted enrollment duration = Target enrollment ÷ Adjusted monthly enrollment
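Assuming a hypothetical 100-patient enrollment target, those two formulas can be run against the earlier example (10 screens per month at 15% versus 35% failure rates); the function names are illustrative:

```python
import math

def adjusted_monthly_enrollment(screening_capacity: float, sfr: float) -> float:
    """Expected enrollments per month after screen failures."""
    return screening_capacity * (1 - sfr)

def enrollment_duration_months(target: int, screening_capacity: float, sfr: float) -> int:
    """Months to reach the enrollment target, rounding up to whole months."""
    monthly = adjusted_monthly_enrollment(screening_capacity, sfr)
    return math.ceil(target / monthly)

print(round(adjusted_monthly_enrollment(10, 0.15), 2))  # 8.5
print(enrollment_duration_months(100, 10, 0.15))        # 12
print(enrollment_duration_months(100, 10, 0.35))        # 16
```

Under these assumptions, the jump from a 15% to a 35% failure rate stretches a 12-month enrollment period to 16 months, which is the kind of gap that forces mid-study site additions.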

Running these numbers with a realistic (not optimistic) screen failure rate at the planning stage prevents the mid-study scramble of adding sites or extending timelines. When building your recruitment model, use rates from comparable published trials rather than assumptions.

How to Reduce Your Rate

Lowering screen failure rates starts with understanding exactly why patients are failing. The most effective approach involves structured data collection built into your electronic data capture system, requiring sites to record a specific reason every time a patient fails screening. Categorize those reasons into consistent buckets: inclusion/exclusion criteria failures, patient refusal, site operational issues, and other. Without clean categorization, your data is too messy to act on.
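As a sketch of that categorization step, a simple tally in Python might look like this. The reason strings and their bucket mapping are invented for illustration; in practice the mapping would come from your EDC export and data dictionary:

```python
from collections import Counter

# Hypothetical mapping of site-reported reasons to the four buckets
# named above; anything unmapped falls into "other".
BUCKETS = {
    "hemoglobin below protocol limit": "inclusion/exclusion",
    "brain metastases on screening MRI": "inclusion/exclusion",
    "withdrew consent before randomization": "patient refusal",
    "scan scheduling error": "site operational",
}

def tally_reasons(reasons: list[str]) -> Counter:
    """Count screen failures per bucket, defaulting unmapped reasons to 'other'."""
    return Counter(BUCKETS.get(r, "other") for r in reasons)

failures = [
    "hemoglobin below protocol limit",
    "brain metastases on screening MRI",
    "withdrew consent before randomization",
    "hemoglobin below protocol limit",
]
print(tally_reasons(failures)["inclusion/exclusion"])  # 3
```

The point of the fixed mapping is consistency: if each site free-texts its own categories, the counts cannot be compared across sites, which is exactly the "too messy to act on" problem.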

Once you have enough data to see patterns, the interventions become clearer. If most failures stem from a single lab value or imaging criterion, you might tighten your pre-screening checklist so patients are informally evaluated before entering the formal (and expensive) screening process. If a disproportionate number of failures come from specific sites, the problem may be operational: staff training gaps, misunderstanding of eligibility criteria, or facility limitations that require targeted support from the sponsor.

Protocol amendments are the heavier tool. If a particular inclusion criterion is eliminating a large share of otherwise-eligible patients without a strong scientific rationale, relaxing that criterion can meaningfully improve your rate. This is a judgment call that involves balancing data quality against feasibility, but it’s a conversation worth having early rather than after months of poor enrollment.