The term “bad blood” is most often used today as an idiom for lingering animosity or a feud between people. Before this usage became common, however, the phrase occupied a specific and troubling place in U.S. medical history, where it referred not to ill will but to a serious chronic infectious disease, most commonly syphilis. Examining the medical use of “bad blood” reveals a history of public health efforts and a profound failure of medical ethics that forever changed how human research is conducted.
“Bad Blood” as a Historical Diagnosis
In the early 20th century, the term “bad blood” served as a widespread euphemism for chronic, infectious conditions, especially when medical professionals spoke with patients. While it could refer to ailments like anemia or fatigue, it was most frequently associated with syphilis, a sexually transmitted infection caused by the bacterium Treponema pallidum. Syphilis progresses through stages, starting with primary sores and secondary rashes, before entering a long-term latent phase. During this latent period, the bacterium persists in the body and can silently inflict devastating damage on internal organs.
Untreated syphilis can eventually lead to severe complications, including blindness, paralysis, heart disease, brain damage, and premature death. Because of these severe and lasting effects, syphilis was a deeply feared disease. The colloquial phrase “bad blood” gave doctors a less stigmatizing way to communicate the diagnosis, and patients understood it to mean a serious, potentially fatal illness of the blood.
The Tuskegee Study of Untreated Syphilis
The historical use of this term became central to one of the most infamous medical studies in U.S. history: the U.S. Public Health Service (PHS) Study of Untreated Syphilis in the Negro Male. Beginning in 1932 in Macon County, Alabama, the study enrolled 600 impoverished African American sharecroppers. Of these, 399 had latent syphilis, while 201 men without the disease served as a control group. The official purpose of the study was to observe the full, natural progression of untreated syphilis, culminating in autopsy.
Researchers from the PHS, working with the Tuskegee Institute, deceived the participants by telling them they were being treated for “bad blood,” a familiar local term. The men were offered incentives such as free medical exams, meals, and burial insurance to ensure their cooperation. In reality, they received no effective treatment for syphilis, only placebo remedies such as aspirin and nutritional supplements. The study was originally intended to last six months but continued for 40 years.
The deception became even more egregious starting around 1947, when penicillin became widely available as the highly effective treatment for syphilis. Researchers deliberately withheld the life-saving antibiotic from the infected men for the next 25 years. The PHS actively worked to prevent the men from receiving treatment from other sources, including local physicians and military draft boards. By the time the study was finally exposed by a whistleblower and ended in 1972, many men had died from syphilis-related complications, and the disease had been unknowingly transmitted to their wives and children.
How This Study Changed Medical Ethics
The 1972 public revelation of the Tuskegee Study sparked national outrage and led to immediate, systemic changes in federal policy regarding human research. In 1974, Congress passed the National Research Act, a direct legislative response to the ethical breaches exposed by the study. This act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, tasked with developing foundational ethical principles.
Together, the Act and the Commission’s work established two regulatory pillars that govern human research today. The first mandated the creation of Institutional Review Boards (IRBs) at any institution receiving federal funding for human research. These independent committees review and approve all research protocols to ensure they meet ethical standards and protect the rights of participants.
The second pillar established “informed consent” as a mandatory requirement for all human research. Participants must be fully informed of a study’s purpose, procedures, risks, and benefits, and must voluntarily agree before taking part.