The Robert Rubin Problem: When Bankers Have No Skin in the Game

Robert Rubin was a senior executive and board member at Citigroup, Citibank's parent company, during the housing bubble of the 2000s.

Over roughly a decade, Rubin earned approximately $120 million in compensation and bonuses. These bonuses rewarded him for overseeing a strategy that appeared wildly successful: the bank was profitable, growing, and generating returns. On paper, the bonuses were earned.

In 2008, the housing market crashed. The strategies Rubin had overseen — strategies that made him rich — had created enormous hidden risks on the bank's balance sheet. Citigroup required a $45 billion government bailout to avoid collapse.

Rubin kept his $120 million.

The taxpayers bore the losses.

This is the absence of skin in the game made starkly visible.


The Asymmetry

The structure of Rubin's compensation created a catastrophic misalignment:

Upside: Rubin's bonuses were tied to short-term profitability. When the strategy produced profits, he was paid.

Downside: Rubin's downside was capped. Even if the strategy blew up, he was personally protected. The bank failed; he kept the money.

This is antifragility at others' expense — exactly what Taleb identifies as the central ethical problem of modern finance.

Rubin's decisions were perfectly rational given his incentive structure. If you're compensated for profits but not penalized for risks that materialize later, you take maximum risk. The expected return to you is positive because all the upside is captured and the downside is transferred.
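The asymmetry can be made concrete with a minimal sketch. The numbers below are hypothetical, not Rubin's actual figures: an agent who captures a share of gains but none of the losses has a positive expected payoff even when the bet itself has a negative expected value.

```python
def agent_pay(outcome, bonus_share=0.10):
    """Agent captures a share of gains but none of the losses (floor at zero)."""
    return max(0.0, outcome * bonus_share)

# A hypothetical risky strategy: 90% chance of a $1B gain,
# 10% chance of a $20B blowup.
outcomes = [(0.9, 1_000_000_000), (0.1, -20_000_000_000)]

bank_ev  = sum(p * x for p, x in outcomes)             # negative for the bank
agent_ev = sum(p * agent_pay(x) for p, x in outcomes)  # positive for the agent

print(f"Bank expected value:  ${bank_ev:,.0f}")   # -$1.1B
print(f"Agent expected value: ${agent_ev:,.0f}")  # +$90M
```

Under these assumed numbers, the bet destroys value in expectation, yet the rational move for the agent is still to take it: all of the $90 million expected upside is his, and all of the $1.1 billion expected downside is someone else's.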

The irony is bitter: Rubin wasn't being dishonest or malicious. He was being rational. The system was designed to produce this outcome.


The Moral Hazard

Economics has a term for this: moral hazard.

Moral hazard occurs when someone can benefit from taking a risk while someone else bears the consequence if that risk materializes. The person with moral hazard is incentivized to take excessive risk because they don't bear the downside.

Banking is saturated with moral hazard: traders collect annual bonuses on positions whose losses may not surface for years; executives can lose their jobs but keep their past compensation; and banks deemed systemically important can expect a government rescue when their bets fail.

In each case, the incentive structure is inverted: risk-taking is rewarded, but the risk-taker is protected from the downside.


Why This Destroys Information

Here's the deeper problem: when decision-makers don't bear consequences, they lose the signal that tells them whether their strategy is actually working.

A trader who profits from speculative bets doesn't know if they're lucky or skilled. A profitable strategy could be sound or it could be a hidden catastrophe waiting to happen. The trader has no way to tell because they don't experience the consequences.

Meanwhile, the institution bearing the consequences — the bank — is being steered by someone insensitive to its actual fragility.

This is the epistemic problem that Taleb keeps returning to: you cannot know whether your decision-making is sound unless you have something to lose by being wrong.

Rubin thought his strategy was sound. By the measure that mattered to him — personal profit — it was. But by the measure that mattered to the bank and the economy, it was creating concentrated risk.

The absence of skin in the game meant Rubin never learned the truth.


The 2008 Pattern

The same pattern repeated across the financial system: mortgage brokers were paid on origination volume, not loan performance; rating agencies were paid by the issuers whose securities they rated; traders booked bonuses on profits whose risks surfaced years later; and executives negotiated severance packages insulated from their firms' fates.

At every level, decision-makers were separated from consequences. The entire system was optimized for short-term profit extraction by people with no exposure to long-term fragility.


The Consequence

The result was predictable: catastrophic hidden leverage, interconnection, and risk accumulation. When the system finally broke, the consequences were borne by people with no decision-making power and no ability to prevent it.

Homeowners lost their houses. Retirees lost their pensions. The government ran trillion-dollar deficits trying to bail out the system. The poor bore the costs while the decision-makers kept their bonuses.

This is what happens when you violate the principle Taleb distills from Hammurabi: "Thou shalt not have antifragility at the expense of the fragility of others."


What Would Hammurabi Do

Under Hammurabi's Code — which put to death a builder whose house collapsed and killed its owner — Rubin would have borne the consequence of his decisions. He wouldn't have had to die, but he would have had to repay the damages.

The modern equivalent: Rubin is liable for the losses his strategy created. He keeps any legitimate profits earned during profitable years, but the $45 billion bailout cost is clawed back from the bonuses earned during the period when hidden risks were accumulating.

Would Rubin have pursued the same strategy if this were the rule? Absolutely not. The strategy was profitable for Rubin precisely because the downside was transferred. Remove the downside transfer and the strategy becomes unprofitable.
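The claim that removing the downside transfer makes the strategy unprofitable can be checked with a small sketch. The figures are hypothetical: a 10% bonus share on a bet with a 90% chance of a $1B gain and a 10% chance of a $20B blowup.

```python
def expected_agent_pay(outcomes, bonus_share=0.10, clawback=False):
    """Probability-weighted personal pay, with or without loss clawback."""
    total = 0.0
    for prob, outcome in outcomes:
        pay = outcome * bonus_share
        if not clawback:
            pay = max(0.0, pay)  # downside transferred to the bank
        total += prob * pay
    return total

# Hypothetical risky bet: 90% chance of +$1B, 10% chance of -$20B.
bet = [(0.9, 1_000_000_000), (0.1, -20_000_000_000)]

print(expected_agent_pay(bet))                 # positive: take the risk
print(expected_agent_pay(bet, clawback=True))  # negative: decline the risk
```

The same bet flips from personally profitable to personally ruinous once the agent shares the losses — no regulator needed, just exposure.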

This is the power of skin in the game: it doesn't require external enforcement. It just aligns incentive with consequence.


The Unresolved Problem

Since 2008, the financial system has become more concentrated, not less. The banks deemed "too big to fail" are now even larger. The moral hazard problem persists.

The reason is simple: the people who create policy are often the same people who benefited from the prior system. Rubin himself went from Treasury Secretary directly to a senior post at Citigroup. The incentive to fix the system that enriched you is minimal.

Fixing this requires something more than regulation: it requires restructuring incentives so that decision-makers bear consequences for their decisions.

Until that happens, the pattern will repeat. Decision-makers will take risks they don't fully understand, protected from downside by institutional structure and government guarantee. When the risk materializes, others bear the cost.

Hammurabi solved this problem 3,800 years ago. We've forgotten the solution.