Iatrogenics in Medicine: When Doctors Do More Harm Than Good

George Washington got sick on a cold December day in 1799. A severe sore throat. What we'd now recognize as epiglottitis — a dangerous swelling of the tissue in the throat that makes breathing difficult.

His physicians assembled. These weren't incompetent quacks. They were the best medical minds of the era, applying the best science available.

Their treatment: bloodletting.

They made incisions and bled him. Not a small amount. Over the course of his illness, they removed approximately five pints of blood from a 200-pound man through repeated cuts. By modern estimates, they drained roughly 35-40% of his blood volume.
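A rough sanity check on that figure, assuming a typical adult blood volume of about 75 mL per kilogram of body weight (an assumption, not a measurement of Washington):

```python
# Back-of-envelope check: what fraction of blood volume is five pints
# for a 200-pound man? Assumes ~75 mL of blood per kg of body weight.
LB_TO_KG = 0.4536
ML_PER_PINT = 473.2
BLOOD_ML_PER_KG = 75  # assumed typical adult male value

weight_kg = 200 * LB_TO_KG                    # ~90.7 kg
total_blood_ml = weight_kg * BLOOD_ML_PER_KG  # ~6.8 liters
drained_ml = 5 * ML_PER_PINT                  # ~2.4 liters
fraction = drained_ml / total_blood_ml

print(f"total blood: {total_blood_ml / 1000:.1f} L")
print(f"drained:     {drained_ml / 1000:.1f} L ({fraction:.0%})")
```

Even under generous assumptions about his size, the fraction lands in the mid-thirties percent, a loss no body can easily absorb while fighting an airway infection.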

Washington died within 48 hours.

The question isn't whether the bloodletting killed him — it did. The question is why anyone thought it was helping him.

The answer is structural: benefits are visible, harms are invisible.

When the physician sees blood flowing from the incision, something is being done. Action is visible. Confidence feels justified. The patient might recover, and the doctor will have done everything. If the patient dies, well, the disease was severe.

But if the doctor declines to treat? If the doctor waits for the body to fight off the infection on its own? That restraint is invisible. If the patient recovers, no one credits the waiting; people assume the patient would have recovered regardless. If the patient dies, the doctor failed to act.

The visibility bias runs only one direction.


The Tonsillectomy Audit

Here's a clearer example because it's quantifiable.

In the 1930s, researchers conducted a study in New York on a simple question: how many children need tonsillectomies?

They took 389 children and had them examined by physicians. 174 were recommended for surgery.

The remaining 215 were examined by different doctors. 99 were recommended for surgery.

The remaining 116 were examined by yet more doctors. 52 were recommended for surgery.

The children were the same. The only variable was the doctor.
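The audit's numbers make the pattern easy to check: each fresh set of examiners recommended surgery at almost exactly the same rate, no matter how many prior screenings the children had already passed.

```python
# Recommendation rate at each round of the 1930s tonsillectomy audit.
# (recommended, examined) pairs come from the figures quoted above.
rounds = [
    (174, 389),  # first examiners
    (99, 215),   # second examiners: children already cleared once
    (52, 116),   # third examiners: children already cleared twice
]

for recommended, examined in rounds:
    print(f"{recommended}/{examined} = {recommended / examined:.1%}")
```

All three rates cluster around 45 percent, which is exactly what you would expect if the recommendation tracked the examiner rather than the child.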

Each set of doctors was filling its operating schedule: surgery was recommended at roughly the same rate, about 45 percent, no matter how many screenings the children had already passed. The surgery is profitable. The procedure is routine. The risks are "manageable." And the children kept getting examined until someone recommended cutting out their tonsils.

We now know that routine tonsillectomies have real morbidity and mortality risk. Every unnecessary procedure was iatrogenic — harm done by helpers following protocol.


The Personal Doctor Trap

I want to give you one more example because it reveals the mechanism.

Imagine you're healthy. No chronic conditions. Good habits. Now imagine a personal physician is assigned to you — someone on call, with full access to your health history, whose job is your health.

What happens?

More appointments. More tests. Minor deviations from average get identified as requiring treatment. Your cholesterol is 215 instead of 200 — time for a statin. Your blood pressure is 135/85 instead of 120/80 — time for medication. Each treatment carries small risk.
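Those small risks compound. As a purely illustrative sketch (the 2% per-intervention harm probability is a made-up number, not a clinical estimate), if each intervention independently carries a small chance of causing harm, the probability of at least one iatrogenic harm grows quickly with the number of interventions:

```python
# Illustrative only: probability of at least one harm from n independent
# interventions, each with a small per-intervention harm probability p.
# p = 0.02 is a hypothetical number chosen to show the compounding.
def p_any_harm(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} interventions: {p_any_harm(0.02, n):.1%} chance of harm")
```

At twenty interventions the cumulative chance of harm is roughly a third, while the expected benefit to an already-healthy person stays near zero.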

You're basically healthy. In a basically healthy person, the risks of constant medical intervention exceed the benefits. But nobody sees the harm that didn't happen: the complication you never suffered because you weren't over-treated.

Studies support this. Systematic reviews of routine health checks in healthy adults have found no reduction in mortality, only more diagnoses and more treatment: overdiagnosis and overtreatment. The personal physician, intended to optimize your health, actually makes you more likely to be medicalized.

Less contact is better than more for the basically healthy.


Why This Happens

The structural problem is almost impossible to fix from within medicine.

First, harm from inaction is visible; harm from action is not. If a doctor does nothing and a patient worsens, the family sees neglect. If a doctor intervenes and the patient worsens anyway, the doctor can say "I did everything." The harm disappears into the disease's explanation.

Second, the incentive structure is misaligned. A doctor following protocol is protected. A doctor declining to treat is vulnerable. Even if the patient would have done better without intervention, the doctor who declined to act bears the reputational and legal risk.

Third, measurement failure. You can measure the patient who recovered after treatment. You cannot measure the patient who was never harmed by not being treated. The harm that doesn't happen has no constituency.

When bloodletting was standard practice, every recovery after bloodletting "proved" it worked. The deaths were attributed to the disease. It took two hundred years for the evidence against it to accumulate until it was overwhelming.


The Solution

Taleb proposes a simple inversion: flip the burden of proof.

Instead of assuming a new treatment is safe until proven harmful, assume it's harmful until proven safe. The reversal has a cost: it would slow the introduction of genuinely helpful treatments. But it would also cut off the long tail of harmful interventions that persist because proving harm takes decades.

And it's not extreme. We already do this everywhere else. We assume a food additive is harmful until proven safe. We assume a financial product is fraudulent until verified. We assume a new drug is dangerous until tested.

But everyday medical practice? Outside formal trials, the default flips to "this might help," even with thin evidence and an unclear mechanism.

Flipping the burden of proof is the only way to align incentives with actual health. It would slow medicine down. It would prevent doctors from "doing something." And it would eliminate a vast amount of invisible harm.