Iatrogenics: When the Treatment Is Worse Than the Disease

The word comes from ancient Greek: iatros (healer) + genos (origin). Iatrogenics is the name for a disease or condition caused by medical treatment itself.

George Washington didn't die from epiglottitis. He died from his doctors.

In December 1799, the first U.S. President contracted what was almost certainly a severe throat infection. His physicians, following the best medical science of the day, treated him by bloodletting — removing approximately five pints of blood through repeated cuts. They were not incompetent. They were applying the consensus protocol. But the treatment killed him faster than the disease would have.

This is iatrogenics rendered historically visible. And it's not a relic.

Taleb spends two sections of Antifragile on this concept because he sees it everywhere medicine looks — and beyond. The logic is structural: the benefits of treatment are immediate and visible; the harm is often delayed, diffuse, and invisible. This systematic bias toward action creates a crisis of intervention that nobody talks about directly, because it is hard to measure, hard to prove, and professionally costly to acknowledge.

I want to walk you through what iatrogenics actually is, why it matters, and how to think about it. Then we'll apply it across medicine, economics, and life.


The Definition and First Principle

Iatrogenics is harm caused by the healer. The doctor's interventions do more damage than the disease would have.

The principle attributed to Hippocrates some 2,400 years ago — primum non nocere, "first, do no harm" — has never been systematically applied. Taleb identifies the first principle of iatrogenics:

The burden of proof falls on the intervention, not the skeptic.

You do not need evidence of harm to be skeptical of a drug or treatment. Nature has 3.8 billion years of track record. A new drug has five years of trials. The rational default is to trust nature until the intervention's superiority is overwhelmingly demonstrated, not to assume the new intervention is safe until proven dangerous.

But medicine has it backward. A new drug is assumed helpful until proven harmful — often years after patients have been exposed. A doctor recommending the drug is following protocol, managing liability, and doing something visible. A doctor not recommending the drug is doing something invisible: preventing harm that never had a chance to happen. There's no surgery bill. No prestige. No career advancement.

The skeptic bears the burden of proof even though they're holding the default position.

This is why bloodletting lasted for thousands of years despite producing no actual benefit. The doctor was doing something. That something was visible, and confidence was high. The harm was diffuse and took time to become undeniable. By then, millions had been exposed.


The Second Principle: Nonlinearity

Taleb adds a second principle that makes iatrogenics more complex: the response to treatment is nonlinear.

This means benefits and risks don't scale proportionally. For a person who is very ill — near death — aggressive treatment is justified: the potential benefit (survival) vastly exceeds the risk. For a pneumonia patient at high risk of death, the dangers of a powerful antibiotic are worth accepting.

But for a person who is mildly ill with the same infection, the risk-benefit calculation is completely different. The benefit (maybe shorten a 7-day illness to 5 days) is small. The risk (side effects, allergic reaction, antibiotic resistance) becomes proportionally larger.

Modern medicine often does the reverse. It treats the mildly ill aggressively and the severely ill conservatively — operating from protocol and liability rather than from an honest accounting of nonlinear risk.

The implication is sharp: treat the severely ill aggressively, leave the mildly ill alone.

This sounds obvious when stated. It's almost never practiced.
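The nonlinearity argument can be sketched as a toy expected-value calculation. Every number below is invented purely for illustration — the point is only that the same treatment flips from clearly worthwhile to net-negative as the patient's baseline risk shrinks:

```python
# Toy expected-value model of the nonlinearity argument.
# All numbers are invented for illustration; nothing here is medical advice.

def expected_net_benefit(p_death_untreated, benefit_if_needed,
                         p_side_effect, side_effect_cost):
    """Net expected benefit of treating: the upside scales with how sick
    the patient is, while the expected cost of side effects stays roughly fixed."""
    return p_death_untreated * benefit_if_needed - p_side_effect * side_effect_cost

# Severely ill: 30% chance of death without treatment.
severe = expected_net_benefit(0.30, 100, 0.05, 20)   # clearly positive

# Mildly ill: 0.1% chance of death, same side-effect risk.
mild = expected_net_benefit(0.001, 100, 0.05, 20)    # negative: risk dominates

print(severe > 0, mild < 0)  # True True
```

The asymmetry comes from the fixed side-effect term: it is negligible next to a large potential benefit and overwhelming next to a small one.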

The personal doctor example, developed in full below, illustrates this exactly. A personal physician has one patient and a salary to justify. Every deviation from "normal" becomes something to address. A slightly elevated cholesterol becomes a statin prescription. A slightly elevated blood pressure becomes medication. Each intervention carries a small risk. The accumulated risk of constant intervention in a basically healthy person exceeds the benefit.


Why Medicine Never Learned This

The structural problem runs deep.

First, blame is visible; absence of blame is invisible. When a patient worsens and the doctor did something, the doctor can say "I tried everything." When a patient worsens and the doctor did nothing, the family blames neglect. Yet doing something that was not needed is iatrogenic harm that no one sees or remembers, while failing to do something that would have helped is the error everyone notices.

Second, the incentive structure is misaligned. A doctor who recommends a test or treatment is protected by protocol. A doctor who declines to treat is vulnerable if the patient later suffers. Even if the suffering was inevitable, even if treatment would have made it worse, the doctor who intervened has more legal and professional cover.

Third, the measurement problem. You can measure the patient who recovered after treatment. You cannot measure the patient who was never harmed by not being treated. The harm that doesn't happen has no constituency. The harm that occurs after treatment can always be attributed to the disease, not the doctor.

Washington died after bloodletting. The conclusion wasn't "the bloodletting killed him." It was "he was very sick." The intervention vanished from the explanation.


Beyond Medicine: Generalized Iatrogenics

Taleb extends iatrogenics far beyond medicine. The structure applies everywhere interventionism occurs:

Economic policy: Stimulus designed to prevent recession builds fragility. Each intervention prevents a small downturn, allowing risk to accumulate. When the reckoning comes, it's worse because the correction was delayed. The 2008 crisis was partly the result of decades of Federal Reserve intervention designed to smooth business cycles — which only pushed risk underground.

Educational intervention: Programs designed to improve learning outcomes often have the opposite effect. Teaching to the test narrows the curriculum, punishes exploration, and produces students who perform well on metrics while learning nothing useful. The intervention looks good until you ask what the students can actually think or do.

Helicopter parenting: Eliminating every source of difficulty, disappointment, and failure from a child's life is not protection — it's iatrogenics applied to development. The capacity to handle adversity, to tolerate frustration, to persist after failure — these develop through exposure to manageable difficulty. Remove the exposure and you remove the development mechanism. The result: young adults who are competent in structured environments and fragile everywhere else.

Political intervention: Attempting to suppress political volatility — backing authoritarian regimes to prevent chaos — stores risk. The underlying pressures accumulate without outlet. When suppression fails, the volatility that was delayed arrives all at once, producing outcomes far worse than any managed release would have.

The pattern: well-intentioned intervention that prevents small, distributed, informative errors produces larger, concentrated, catastrophic errors.


The Personal Doctor Problem

Here's a worked example that shows the mechanism vividly.

Imagine you're very healthy. No chronic conditions. You exercise, you sleep reasonably, you eat decently. Now a personal physician is assigned to you — someone on call, someone with your complete medical history, someone whose job is your health.

The result?

You'll have more medical appointments, more tests, more minor diagnoses, more prescriptions. The doctor will monitor your cholesterol, your blood pressure, your thyroid. Deviations that are completely normal variations will be identified as requiring treatment. Each treatment carries small risk. The accumulated risk of constant intervention in someone basically healthy exceeds the risk of the conditions being treated.
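That accumulation claim is just compounding arithmetic. Here is a minimal sketch, assuming (purely for illustration) an invented 1% chance of harm per intervention and independence between interventions:

```python
# Compounding of small per-intervention risks.
# The 1% figure and the independence assumption are invented for illustration.

def prob_at_least_one_harm(p_harm, n_interventions):
    """Chance that at least one of n independent interventions causes harm."""
    return 1 - (1 - p_harm) ** n_interventions

# A basically healthy person, 1% harm risk per intervention,
# 40 interventions over a decade of close monitoring:
print(round(prob_at_least_one_harm(0.01, 40), 2))  # 0.33
```

Each individual decision looks safe; it is the repetition, applied to someone with almost nothing to gain, that turns "small risk" into roughly a coin-flip's worth of exposure.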

Studies of routine checkups in healthy populations consistently find more overdiagnosis and overtreatment than benefit. People with personal physicians on call end up "sicker" than they otherwise would have been — not with worse diseases, but with more medicalization of ordinary variation.

The doctor is not malicious. The system is designed to produce this outcome. More patient contact = more revenue = more justification for the role. And nothing bad is immediately visible. The patient feels monitored, secure, engaged with their health — none of which is actually correlated with becoming healthier.

For a healthy person, less medical contact is better than more.


When Intervention Is Actually Justified

Taleb is not arguing against all intervention. He's arguing against naive interventionism — the compulsion to act without accounting for costs.

Some interventions are justified:

Emergency medicine: Someone is septic, they're going into shock, they're going to die without treatment. Aggressive intervention is appropriate because the alternative is clear.

Severe conditions: Someone has stage 3 cancer, they're not responding to conservative management, the risk of treatment is worth the potential benefit. Treat aggressively.

Specific reductions of fragility: Some interventions reduce fragility rather than mask it. Removing the most dangerous drugs and practices — smoking, motorcycle riding, heavy drinking — reduces fragility outright, because those practices provide no compensatory benefit. The cost-benefit is clear.

Situation-specific interventions: Rules like "speed limits reduce deaths" or "building codes prevent collapse" are narrow interventions in genuinely dangerous domains. The burden of proof still falls on the intervention, but here it can be met, because the intervention is narrow and its effects are measurable.

The distinction is between intervening where the cost of inaction is severe, clear, and immediate, and intervening because acting feels better than waiting. The first category has justification. The second is naive.


How to Detect Your Own Iatrogenics

Here's a heuristic to apply to your own decisions:

For any intervention you're considering, ask three questions:

  1. What's the invisible harm? Something that is not immediately visible but accumulates over time. More action builds more hidden risks. Do nothing and you're exposed to one set of risks. Do something and you're exposed to a different set — often more diffuse and harder to track.

  2. What benefits from this intervention beyond the stated goal? If a doctor benefits from more procedures, if an agency benefits from more intervention, if a consultant benefits from ongoing involvement — those incentives matter. They're not evidence the intervention is bad, but they're worth acknowledging.

  3. Who bears the downside if I'm wrong? If you're the only one who bears the downside, you might tolerate more risk. If others bear the downside while you bear the upside, the decision should be much more cautious. This is the skin-in-the-game principle applied to intervention.


The Burden of Proof Problem

The deepest issue Taleb identifies is epistemological: whose burden is it to prove?

The current medical paradigm: a new treatment is assumed safe until proven harmful. This produces a decades-long delay between the introduction of a harmful intervention and its restriction. Millions are exposed before the evidence accumulates.

Taleb's reversed position: new treatment should be assumed harmful until proven safe. The burden of proof belongs to the person promoting the change.

This seems extreme until you remember: we already apply this logic everywhere else. We assume a food additive is harmful until proven safe. We assume a new pesticide is harmful until proven safe. We assume a bridge design is unsafe until tested. We assume a financial product is fraudulent until verified. We assume a person is innocent until proven guilty.

But medicine? Medicine assumes the opposite. The default is "this might help" even if the evidence is thin and the mechanism is unclear.

Flipping this burden of proof would be destabilizing — it would slow the introduction of genuinely helpful treatments. But it would eliminate the long tail of harmful interventions that persist because proving them harmful requires decades of post-market surveillance.

The reason the burden of proof matters is deeper than medicine. It's about how we think about action under uncertainty. The default should be: don't, unless there's good reason to. Not: do, unless there's good reason not to.