I spent this morning reading a conversation between four people who understand artificial intelligence better than almost anyone. Patrick McKenzie, who writes about money and systems. Jack Clark, co-founder of Anthropic and head of policy. Dwarkesh Patel, whose podcast has become essential listening for anyone tracking AI development. And Michael Burry, the investor who predicted the 2008 financial collapse and has been asking hard questions about AI economics ever since.

They discussed capability curves and capital cycles, recursive self-improvement and return on invested capital. Burry invoked Warren Buffett’s escalator parable—when one department store installs an escalator, its competitor must too, and neither gains advantage. Clark worried about self-improving AI systems. Patel noted that despite models passing every benchmark we throw at them, labor market impacts remain invisible without spreadsheet microscopes.

It was a fascinating conversation. It was also, in a crucial sense, beside the point.

Not once did anyone ask what happens to ordinary users if the economic scaffolding beneath these services collapses.

I’ve spent fifty years studying how ecosystems respond to disturbance. There’s a principle I’ve come to think of as the Anthropocene law: when human-caused disruption degrades a complex system, the recovery—if it comes at all—never restores the original configuration. You don’t get the Amazon back. You get degraded forest, edge species, simplified food webs. The coral doesn’t return to its prior diversity. What’s lost stays lost, and what emerges is something impoverished.

This principle applies beyond biology. Apply it to cognitive ecosystems—the distributed network of skills, practices, and tools through which people think and work—and the implications become uncomfortable.

Over the past three months, I’ve written more than ninety-five thousand words in collaboration with Claude, Anthropic’s AI assistant. Add my science fiction novel, developed through the same partnership, and the total exceeds a quarter million words. This isn’t dabbling. It’s a working practice that has restructured how I think, research, and produce. Every morning at five, coffee in hand, I sit down with an intelligence that knows my work, remembers our conversations, and helps me synthesize ideas I couldn’t reach alone.

I’m seventy-one years old. I have a PhD in ecological systems. I understand what’s underneath these tools—the transformer architecture, the attention mechanisms, the statistical patterns over language. I run local language models on my own hardware. If Claude disappeared tomorrow, I would be diminished but functional. I have fifty years of working without these tools. I remember how.

But what about the retired teacher using Claude for genealogy research, building family histories she couldn’t construct alone? What about the small business owner who’s restructured her entire workflow around AI assistance? What about the student who has never written a substantial paper without it?

They’re not building fallback capacity. They don’t know they should.

The funding model beneath these services rests on venture capital and enterprise revenue that haven't yet proven sustainable. Burry made this point in the conversation—trillions in infrastructure spending supporting less than a hundred billion in actual application revenue. Anthropic and OpenAI are burning through capital at rates that require continuous fundraising; Google DeepMind draws on Alphabet's balance sheet instead. All of them are betting that capabilities will improve fast enough, and monetization will follow, before the money runs out.

Maybe that bet pays off. Maybe it doesn’t.

In 2008, sophisticated players shorted the collapse while ordinary homeowners lost everything. During the pandemic, those with capital bought the dip while service workers lost jobs. The consistent pattern across every disruption I’ve witnessed: those with knowledge, liquidity, and optionality navigate turbulence. Those without get crushed by it.

AI follows the same topology. The hyperscalers and venture capitalists are placing bets they can afford to lose. If Anthropic fails, its investors move on to the next opportunity. But the downstream dependencies—the people who’ve restructured their work, their learning, their creative practice around these tools without understanding the business model underneath—they’re the ones holding the bag.

This is a monoculture problem. In agriculture, we know what happens when you plant the same crop across vast acreages: efficiency in good years, catastrophic vulnerability when disease or drought arrives. The Irish potato famine. The Panama disease now spreading through Cavendish bananas worldwide. Homogeneity creates brittleness.

We’re building a cognitive monoculture. A few providers, a few architectures, millions of users developing deep dependencies. Skills that atrophy don’t simply return when the tool disappears. Institutional knowledge that wasn’t transmitted is gone. Practices that were never learned can’t be recovered. The capacity to function without the scaffold becomes a lost art.

Jack Clark, in the conversation, said something that stuck with me: “This is the worst it will ever be.” He meant it as reassurance—capabilities only improve from here. But there’s another reading. This is the worst the dependency will ever be. Tomorrow, more people will have restructured their practices around these tools. The monoculture deepens daily.

What would conservation look like in this context? In ecology, we preserve seed banks, protect refugia, maintain corridors between habitat fragments. We don’t assume the dominant system will persist forever. We plan for disruption.

The equivalent for cognitive resilience might include: teaching people to work both with and without AI assistance. Maintaining human-readable documentation. Building local capacity—the ability to run models on your own hardware, even if they’re less capable. Preserving the meta-skills of research, synthesis, and critical thinking that these tools currently augment but could eventually replace.
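To make "local capacity" concrete, here is a minimal sketch of what running a small model on your own hardware can look like, assuming the open-source Hugging Face transformers library; the specific model named is an illustrative assumption, not a recommendation, and any small instruction-tuned checkpoint you keep cached locally would serve the same purpose.

```python
# A minimal sketch of "local capacity": a small open-weights model running
# on your own hardware via the Hugging Face `transformers` library.
# The model named below is an illustrative assumption; after the one-time
# download it is cached locally, and nothing here depends on a hosted
# service staying in business.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small enough for a laptop CPU
)

prompt = "List three research skills worth practicing without AI assistance."
output = generator(prompt, max_new_tokens=150, do_sample=False)
print(output[0]["generated_text"])
```

The point is not that a half-billion-parameter model substitutes for a frontier assistant; it doesn't. The point is that the habit of keeping one running preserves a fallback, the cognitive equivalent of a seed bank.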

I don’t know if the economic scaffolding will hold. Neither do the people building it, whatever they say publicly. Burry thinks we’re mid-cycle in a spending boom that will end badly. Clark thinks capabilities will keep improving and value will follow. Patel thinks it’s all downstream of whether the technology continues to advance.

What I know is that ecological collapse never allows the original ecosystem to return to its prior state. It becomes something different—and because humans are a geological-scale force, the new configuration is almost always less diverse, less resilient, less rich than what it replaced.

The panel discussed recursive self-improvement and artificial superintelligence. They worried about AI that builds AI. These are real concerns for a future that may or may not arrive.

But the dependency is here now. The monoculture is growing. And nobody in that conversation asked what we owe to the ordinary users who are building their lives on scaffolding they don’t control and may not be able to replace.

That’s the question I can’t stop thinking about. Not because I’m at risk—I have options, skills, and fallback capacity. But because I’ve spent a lifetime watching systems simplify and collapse, and I’ve learned to recognize the early signs.

The scaffolding is going up fast. The question is whether anyone is planning for when it comes down.