What Percent of Animal Testing Fails?

Animal testing, a form of in vivo (whole-organism) testing, has long served as a mandated preclinical step in the development of new medicines and chemical substances. This stage aims to establish the initial safety and potential effectiveness of a compound before it is administered to human volunteers. Regulatory bodies require these studies to ensure a basic level of assessment for toxicity and biological activity in a whole-organism system. The reliance on animal models is rooted in a historical framework intended to safeguard public health from unforeseen harm. However, the accuracy of this predictive process in reliably translating results from the laboratory to the human patient has become a subject of intense scientific scrutiny.

How Failure is Measured in Preclinical Research

Failure in the drug development pipeline is measured by attrition, the percentage of compounds that cease development at each stage. In this context, a compound is considered a translational failure when it successfully completes preclinical animal testing but proves unsuitable during human clinical trials. These trials proceed through three distinct phases: Phase I (safety and dosing in a small group of volunteers), Phase II (preliminary efficacy and side effects), and Phase III (large-scale confirmation). Failure can occur in any of them for two primary reasons: safety failure or efficacy failure.

A safety failure occurs when a drug demonstrates unexpected or unmanageable toxicity in humans, even though it was deemed safe in animal studies. An efficacy failure means the drug is unable to produce the desired therapeutic effect in human patients, despite showing positive outcomes in animal models. Regulatory agencies, such as the U.S. Food and Drug Administration (FDA), track this attrition rate closely, as it represents billions of dollars in wasted investment and years of lost time.

The High Rate of Failure in Translational Medicine

The overall drug attrition rate demonstrates the magnitude of failure in translational medicine, the process of moving research from the lab bench to the bedside. A widely cited statistic indicates that approximately 90% of all experimental drugs that enter human clinical trials ultimately fail to achieve regulatory approval. This means that for every ten compounds that appear promising based on animal testing, nine will not make it to market. This rate has been reported to be even higher in certain therapeutic areas, such as oncology, where failure rates can approach 96%.

The majority of these failures occur in the later stages of human testing, specifically Phase II and Phase III trials, which are the most expensive and time-consuming parts of the process. Lack of efficacy, most often exposed in Phase II, accounts for roughly half of all clinical failures, indicating that the animal models failed to predict human therapeutic response. Unexpected toxicity accounts for much of the remaining attrition, highlighting the inability of animal safety screens to fully capture human risk, with commercial and strategic decisions making up the rest. The high attrition rate translates directly into increased costs and delays for patients awaiting new treatments.
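The roughly 90% overall failure rate is a compounding effect: even moderately high per-phase success rates multiply down to a small overall probability of approval. The short sketch below illustrates the arithmetic; the per-phase success rates are assumed ballpark values chosen for illustration, not figures reported in this article.

```python
# Illustrative only: per-phase success rates below are assumed ballpark
# figures, not data from this article.
phase_success = {
    "Phase I": 0.60,    # assumed fraction of drugs advancing past Phase I
    "Phase II": 0.30,   # assumed; efficacy failures concentrate here
    "Phase III": 0.58,  # assumed
    "Approval": 0.85,   # assumed approval rate after Phase III
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p
    print(f"Probability of surviving through {phase}: {overall:.1%}")

print(f"Implied overall attrition: {1 - overall:.0%} of drugs entering trials")
```

With these assumed rates, only about 9% of compounds entering Phase I reach approval, which is how individually unremarkable per-phase losses add up to the widely cited 90% figure.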

Species Differences and Predictive Limitations

The primary scientific explanation for the high failure rate lies in the biological differences between the test species and humans. Despite sharing many biological processes, species such as mice, rats, and even primates possess distinct physiological, genetic, and metabolic profiles that affect how they respond to drugs. Differences in the activity of metabolic enzymes, particularly in the liver, can cause an animal to convert a compound into harmless or toxic byproducts that a human would not produce, or vice versa. This variation in how a drug is absorbed, distributed, metabolized, and excreted, known as its pharmacokinetics, often leads to misleading safety or efficacy data.
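One concrete consequence of these differences is that animal doses cannot be carried over to humans one-for-one. A common rule of thumb from FDA first-in-human dosing guidance converts an animal dose to a human equivalent dose (HED) using body-surface-area correction factors (Km values). The sketch below applies that conversion; the 10 mg/kg example dose is hypothetical, and the correction is only a crude starting point that does not capture deeper metabolic differences, which is part of the translational problem.

```python
# Human equivalent dose (HED) via body-surface-area scaling:
#   HED (mg/kg) = animal dose (mg/kg) * (Km_animal / Km_human)
# Km factors follow FDA guidance on first-in-human dose selection.
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """Convert an animal dose to an approximate human equivalent dose."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# Hypothetical example: a 10 mg/kg dose tolerated in mice corresponds to
# roughly 0.8 mg/kg in humans, before any additional safety margin.
print(f"{human_equivalent_dose(10, 'mouse'):.2f} mg/kg")
```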

Furthermore, many human diseases are poorly replicated in animal models, making them unreliable predictors of therapeutic outcomes. Complex human conditions such as Alzheimer’s disease, Parkinson’s disease, and stroke are frequently modeled in animals, but the induced pathology rarely mirrors the full complexity of the human condition. Laboratory animals are also typically genetically homogeneous and tested under narrowly controlled conditions. This lack of variability makes it difficult to predict drug behavior in a genetically diverse human population with varied lifestyles and co-morbidities.

Modernizing Testing with Non-Animal Models

To address the limitations of traditional methods, the scientific community is rapidly developing and adopting New Approach Methodologies (NAMs) designed to provide more human-relevant data. These NAMs include advanced in vitro testing, which uses human-derived cells and tissues in a laboratory dish to assess safety and efficacy. Such systems allow for the testing of compounds directly on human biology.

One promising technology is microphysiological systems, often referred to as “organ-on-a-chip” devices, which mimic the structure and function of human organs like the lung, liver, or kidney. These devices use microfluidic channels and living human cells to simulate blood flow and organ-level responses, offering a more accurate prediction of toxicity and drug function. Computational approaches, including artificial intelligence (AI) and machine learning algorithms, also fall under NAMs. These in silico models use vast datasets to predict chemical toxicity and drug behavior with greater speed and efficiency than animal studies. By focusing on human-specific biology, these modernized methods hold the potential to significantly improve predictive accuracy and lower the failure rate of drugs entering clinical trials.
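As a rough sketch of what an in silico toxicity screen looks like in practice, the example below trains a classifier on synthetic molecular-descriptor data. The dataset, feature meanings, and model choice are all hypothetical placeholders; real NAM pipelines rely on curated chemical datasets and far more sophisticated featurization.

```python
# Minimal sketch of an in silico toxicity classifier.
# All data here is randomly generated for illustration; a real pipeline
# would use measured molecular descriptors and curated toxicity labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical descriptors: molecular weight, logP, polar surface area, etc.
n_compounds, n_descriptors = 1000, 8
X = rng.normal(size=(n_compounds, n_descriptors))
# Synthetic "toxic / non-toxic" labels loosely tied to the first descriptor.
y = (X[:, 0] + 0.5 * rng.normal(size=n_compounds) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score held-out compounds by predicted probability of toxicity.
scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out ROC AUC: {roc_auc_score(y_test, scores):.2f}")
```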