The Soviet-Harvard Delusion: The Myth of Expert Planning
The Soviet Union tried to plan the entire economy from the top down.
Experts would identify what needed to be produced. They'd set targets. They'd allocate resources. The system would execute.
It didn't work. The Soviet economy was plagued by shortages, misallocations, and inefficiency. All designed by intelligent experts. All failing because the system removed the feedback mechanisms that bottom-up economies depend on.
Taleb calls this the "Soviet-Harvard Delusion": the belief that top-down rational planning by experts can improve on bottom-up organic evolution.
It's a delusion because it misunderstands how complex systems actually work.
The Feedback Problem
Bottom-up systems (markets, evolution, entrepreneurial ecosystems) work through feedback.
A restaurant fails because it's not what customers want. That failure is the feedback. The market corrects. Better restaurants replace worse ones. The system improves.
A top-down system tries to prevent failure. It allocates resources to prevent the feedback. The result: failures accumulate silently until they become catastrophic.
The Soviet system tried to ensure that all needed goods were produced. It eliminated market prices (which are feedback signals). It eliminated competition (which is error correction). The result: it couldn't adapt to changing conditions because it had removed its own information channel.
The information channel isn't propaganda. It's prices. It's competition. It's failure and correction.
Applied to Innovation
The belief is that if we hire the best people and have them plan innovation, we'll get better innovation.
The reality is that innovation requires error signals and correction. Good ideas get tested. Bad ideas fail. From this testing, innovation emerges.
Top-down planning removes the error signal. A large research program operates on the assumption that the plan is correct. If the plan is wrong, the entire program is wrong and wastes enormous resources before realizing it.
A distributed system of many small tinkering efforts generates error signals quickly. Most fail. The ones that work scale up. The system self-corrects.
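The contrast can be sketched with a toy simulation (all numbers here are illustrative assumptions, not data): both strategies spend the same budget on ideas with the same rare-winner payoff, but the distributed strategy almost always captures a few winners, while the single big bet usually returns nothing.

```python
import random

random.seed(42)

def idea_payoff():
    # Illustrative assumption: most ideas return 0x; a rare few return 50x.
    return 50.0 if random.random() < 0.02 else 0.0

def top_down(budget=100):
    # The whole budget rides on one expert-chosen idea: a single draw,
    # with no error signal until everything is spent.
    return budget * idea_payoff()

def bottom_up(budget=100):
    # One unit per small tinkering effort: many independent draws,
    # where each failure is cheap feedback.
    return sum(idea_payoff() for _ in range(budget))

trials = 10_000
td = [top_down() for _ in range(trials)]
bu = [bottom_up() for _ in range(trials)]

print("top-down : mean return %.0f, zero-return runs %.0f%%"
      % (sum(td) / trials, 100 * sum(x == 0 for x in td) / trials))
print("bottom-up: mean return %.0f, zero-return runs %.0f%%"
      % (sum(bu) / trials, 100 * sum(x == 0 for x in bu) / trials))
```

The expected return is identical by construction; what differs is the error structure. The top-down bet is all-or-nothing, while the distributed portfolio self-corrects by harvesting whichever small efforts happen to work.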
Harvard approach to innovation: Hire brilliant researchers. Have them identify important problems. Allocate grants. Measure outputs.
Bottom-up approach to innovation: Many people tinker. Some stumble on something interesting. Others notice and build on it. What works scales. What doesn't fades.
Empirically, bottom-up produces more innovation. Yet institutions keep trying top-down because it's measurable and feels like it should work.
The Allure of Planning
Top-down planning is appealing because:
- It feels professional and scientific
- You can measure and report on it
- It creates a sense of control
- It allocates credit to planners (who look smart when it works)
Bottom-up emergence is less appealing because:
- It's messy and hard to measure
- You can't predict the outcome
- You feel like you're not in control
- Credit goes to luck and circumstance, not to anyone specific
So organizations keep choosing the appealing path, even though the evidence favors the messy one.
The Cost of Removal
When you remove the error-correction mechanism (competition, prices, failure), you don't get safety.
You get fragility.
The system looks stable because the obvious failures are prevented. But hidden fragility is accumulating. When the system can no longer prevent failure, the blowup is catastrophic, because no small corrections have happened along the way.
The Soviet Union appeared stable for decades. It was accumulating massive fragility: shortages, technological stagnation, military overextension. When the fragility finally manifested, the system collapsed.
Modern economies are doing the same thing with different tools: preventing small corrections (bailouts, stimulus), removing price discovery (quantitative easing, price controls), treating symptoms rather than correcting underlying problems.
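The dynamic can be illustrated with a toy model (every parameter here is a made-up assumption): hidden stress accumulates each period, occasional sparks trigger corrections, and the only difference between the two regimes is whether small corrections are allowed to happen.

```python
import random

def simulate(suppress_small, periods=500, seed=1):
    """Toy model: 'fuel' (hidden stress) builds until a spark burns it off."""
    random.seed(seed)  # same random stream for both regimes, for comparability
    fuel, worst, events = 0.0, 0.0, 0
    for _ in range(periods):
        fuel += random.random()            # stress accumulates every period
        if random.random() < 0.2:          # a spark: a potential correction
            if suppress_small and fuel < 20:
                continue                   # intervention: small failure prevented
            worst = max(worst, fuel)       # the correction burns all stored stress
            events += 1
            fuel = 0.0
    return worst, events

worst_free, events_free = simulate(suppress_small=False)
worst_supp, events_supp = simulate(suppress_small=True)
print("corrections allowed  : %d events, worst size %.1f" % (events_free, worst_free))
print("corrections suppressed: %d events, worst size %.1f" % (events_supp, worst_supp))
```

In the free regime the system burns off stress often and in small amounts; in the suppressed regime nothing visible happens for long stretches, and then the accumulated stress releases all at once.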
The Application to Tech
The "Soviet-Harvard Delusion" is visible in tech: the belief that you can plan a platform or ecosystem.
Facebook planned to become a platform for connection. Twitter planned to become an information network. YouTube planned to be a video platform. But once built, people used them for unexpected things.
The platforms that tried to control how they were used (constraining developers, enforcing single vision) were less innovative than platforms that let users and developers experiment.
Blockchain and crypto attempted to create a system with no central planner. It has massive inefficiency and waste. But it also iterates rapidly and corrects errors fast. Centralized systems are more "efficient" in normal conditions, but fragile.
The Practical Insight
The insight is: accept that you can't plan complex systems.
Instead of planning, create conditions for:
- Multiple experiments running in parallel
- Fast feedback on what works and what doesn't
- Bad ideas failing quickly, not slowly
- Good ideas scaling automatically
- Adaptation as conditions change
This is antifragile by design. The system doesn't depend on any planner being right. It depends on having error correction built in.
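Those conditions can be sketched as a simple allocation loop (everything here — the pool size, rates, multipliers, and kill threshold — is an illustrative assumption): many experiments run in parallel, each round of feedback scales winners up and shrinks losers, and experiments that shrink below a floor are killed.

```python
import random

random.seed(7)

# Hypothetical pool of experiments with hidden success rates; the
# allocator never sees these, only observed trial outcomes.
true_rates = [random.random() * 0.1 for _ in range(20)]
true_rates[3] = 0.6   # one genuinely good idea buried in the pool

budget = {i: 1.0 for i in range(len(true_rates))}  # equal starting stakes

for _ in range(30):                                # 30 rounds of fast feedback
    for i in list(budget):
        success = random.random() < true_rates[i]  # run one cheap trial
        budget[i] *= 1.5 if success else 0.7       # scale winners, shrink losers
        if budget[i] < 0.05:
            del budget[i]                          # bad ideas fail quickly and exit

print("survivors:", sorted(budget))
print("largest allocation:", max(budget, key=budget.get) if budget else None)
```

No one in this loop knows in advance which idea is good; the allocation discovers it through many cheap failures rather than through a correct upfront plan. That is the sense in which the error correction, not the planner, carries the system.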