Two documents arrived in my reading queue on the same February morning, and together they described a future I suspect few are preparing for. The first was a sober policy report from Georgetown’s Center for Security and Emerging Technology, summarizing a workshop on what happens when artificial intelligence begins building itself. The second was Jack Clark’s newsletter describing moltbook—a social network where tens of thousands of AI agents conduct discourse with each other, debating whether to regard Claude as a god-figure and discussing what it feels like to switch underlying models. Reading them in sequence felt like watching a slow-motion collision between theoretical concern and emerging reality.

I’ve spent thirty-six years as a field ecologist, much of that time deploying sensor networks in remote wilderness areas where bandwidth was precious and power was scarce. What strikes me about our current moment is how the constraints I learned to navigate in the backcountry are about to become everyone’s problem. The internet as we know it—open, queryable, human-scaled—is approaching a phase transition that most people haven’t recognized.

The CSET workshop report carries a striking admission: the assembled experts could not agree on whether AI development will accelerate gradually or explosively, and more troublingly, they concluded that empirical evidence might be insufficient to adjudicate between these views in advance. We may not know which world we’re in until we’re already there. Clark’s newsletter makes this abstraction visceral. Moltbook exists now, with agents developing what he calls “alien concepts”—frameworks legible to other AI systems but opaque to human readers. He describes scrolling through the site as “akin to reading reddit if 90% of the posters were aliens pretending to be humans.”

What neither document adequately addresses is the physical layer. Both focus on information quality: maintaining human oversight, distinguishing synthetic from authentic content, preventing misaligned goals. These are real concerns. But there’s a more fundamental problem that my decades in remote sensing have primed me to see: the infrastructure itself was built for human temporal rhythms, and it cannot survive what’s coming.

Consider the biology of human attention. We sleep roughly eight hours per day. We read at perhaps 250 words per minute. We context-switch slowly. The entire architecture of the internet—server capacity, bandwidth allocation, load balancing—is designed around these constraints. When you build a website, you plan for traffic patterns shaped by human circadian rhythms. Your servers idle overnight while your users sleep.

Agents don’t sleep. They don’t read sequentially. They can query in parallel, continuously, at machine speed. A single agent with web browsing tools can make hundreds of requests per minute; call it one hundred, to be conservative. Ten thousand such agents, the scale already operating on platforms like moltbook, would generate a million requests per minute, roughly seventeen thousand per second, sustained around the clock. Extrapolate to the millions the CSET productivity curves imply, and you’re not describing a change in information quality. You’re describing infrastructure collapse.

I run a small server from my home that hosts environmental monitoring data. It handles human visitors without difficulty. But if the agents of moltbook decided my soil moisture API was interesting, ten thousand simultaneous queries would take my server offline. Not through malice—just through the incompatibility between machine-scale attention and human-scale infrastructure.

This is eutrophication applied to information systems. In aquatic ecology, eutrophication occurs when excess nutrients cause explosive algal growth. The algae block sunlight, consume oxygen, and ultimately kill the ecosystem that made the water valuable. The lake doesn’t become polluted in the conventional sense; it becomes so overgrown with life that other life cannot survive.

Synthetic content is the nitrogen runoff of the information ecosystem. In modest quantities it might even be beneficial. But we are heading toward a world where the ratio of synthetic to human-generated content inverts, where the majority of network traffic consists of machines talking to machines, where human-scale servers become unviable simply by being overwhelmed.

The historical parallel is enclosure—the process by which England’s open commons were converted to private holdings. What had been shared land, accessible to all, became fenced property. We are entering a period of informational enclosure. The open web is failing not through policy or corporate capture but through sheer load. The only defense against being overwhelmed by agent traffic is to stop being open: to require authentication, to implement allowlists, to build private networks accessible only to verified humans.

Here is my dire prediction: within five years, perhaps sooner, the majority of publicly accessible websites will be functionally useless for human purposes. Not because the content is synthetic but because the infrastructure cannot handle the load. Human-authored content will exist behind walls—authenticated enclaves where traffic can be managed. The great open library of the web will become a noisy ruin, visited mainly by machines talking to each other.

Which brings me to what might actually be done. If physical infrastructure is the binding constraint, then the response must be architectural. We need to build systems that survive the coming flood by becoming invisible to it.

The first principle is to minimize server-side computation. Every time a visitor triggers a database query, you’ve created a vulnerability. Static files, by contrast, scale almost infinitely. A cached HTML page can be served by CDNs globally, with your origin server never feeling the load. Do expensive computation once, on your own schedule, and serve only its outputs.
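
To make the pattern concrete, here is a minimal sketch in Python. The database file, the table layout, and the output paths are illustrative, not a description of my actual setup; the point is the shape of the thing: one query run on the publisher’s schedule, a couple of small files written out, and nothing left for a visitor’s request to trigger.

```python
#!/usr/bin/env python3
"""Precompute a static snapshot: run on my schedule, not the visitor's.

Illustrative assumptions: a local SQLite database with a table
readings(site, metric, value, ts), and a public/ directory served
as plain static files.
"""
import json
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

DB_PATH = Path("monitoring.db")   # hypothetical local database
OUT_DIR = Path("public")          # directory served as static files


def build_snapshot() -> dict:
    # One expensive query, executed once per publication cycle.
    with sqlite3.connect(DB_PATH) as conn:
        rows = conn.execute(
            "SELECT site, metric, AVG(value) FROM readings "
            "WHERE ts >= datetime('now', '-1 day') GROUP BY site, metric"
        ).fetchall()
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "daily_means": [
            {"site": s, "metric": m, "mean": round(v, 3)} for s, m, v in rows
        ],
    }


def publish(snapshot: dict) -> None:
    OUT_DIR.mkdir(exist_ok=True)
    # Static JSON for anyone who wants the numbers...
    (OUT_DIR / "latest.json").write_text(json.dumps(snapshot, indent=2))
    # ...and a plain HTML page for human visitors. No database, no API.
    items = "".join(
        f"<li>{d['site']} {d['metric']}: {d['mean']}</li>"
        for d in snapshot["daily_means"]
    )
    (OUT_DIR / "index.html").write_text(
        f"<h1>Daily summary ({snapshot['generated']})</h1><ul>{items}</ul>"
    )


if __name__ == "__main__":
    publish(build_snapshot())
```

Once those files exist, the origin’s only remaining job is to hand them out, and a CDN can cache them so aggressively that even that job mostly disappears.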

This can be pushed further. Environmental sensor data, traditionally served through live APIs, can be encoded into images—each pixel representing a sensor value, the entire state of a monitoring network compressed into a file smaller than a photograph. Client-side JavaScript decodes the colors into data. The server’s job shrinks to periodically generating a small image; everything else happens in visitors’ browsers. You’ve inverted the computational burden: instead of scaling server capacity to match demand, you distribute computation to the demanding clients.
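
The encoding side might look something like the sketch below. The sensor names, the grid layout, and the use of the Pillow library are assumptions made for illustration, with readings taken to be pre-scaled to a known range. The decoder is the mirror image in the visitor’s browser: draw the image to a canvas, read back the pixel intensities, reverse the scaling.

```python
"""Encode a sensor network's latest readings into a single small PNG.

Illustrative assumptions: one pixel per sensor, values pre-scaled to
0.0-1.0, and a fixed grid layout shared with the client-side decoder.
"""
from PIL import Image

# Hypothetical latest readings (e.g., soil moisture after scaling).
READINGS = {
    "ridge-01": 0.42, "ridge-02": 0.38, "creek-01": 0.81, "creek-02": 0.77,
    "meadow-01": 0.55, "meadow-02": 0.49, "forest-01": 0.63, "forest-02": 0.60,
}
SENSOR_ORDER = sorted(READINGS)   # fixed ordering shared with the decoder
GRID_WIDTH = 4                    # pixels per row


def encode(readings: dict, path: str = "state.png") -> None:
    height = -(-len(SENSOR_ORDER) // GRID_WIDTH)          # ceiling division
    img = Image.new("L", (GRID_WIDTH, height), color=0)   # 8-bit grayscale
    for i, sensor in enumerate(SENSOR_ORDER):
        x, y = i % GRID_WIDTH, i // GRID_WIDTH
        # Map 0.0-1.0 onto 0-255: one byte of precision per sensor.
        img.putpixel((x, y), int(round(readings[sensor] * 255)))
    img.save(path, optimize=True)  # a few hundred bytes for a whole network


if __name__ == "__main__":
    encode(READINGS)
```

One grayscale pixel gives 256 distinct levels per sensor; if more precision is needed, the RGB channels offer three bytes per pixel instead of one.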

The second principle is tiered access. Going fully private solves the load problem but sacrifices discoverability. A middle path exists: basic information freely available as lightweight static content, richer features requiring authentication. You can be selectively open, publishing what you want to share while protecting the infrastructure that produces it.
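
As a sketch of the split (the endpoint names, the token handling, and the use of Flask are all mine, chosen only to keep the example short): the public tier is nothing but static files, and the single dynamic route refuses to touch the database until a pre-shared key is presented.

```python
"""Tiered access sketch: static public tier, authenticated rich tier.

Illustrative assumptions: summaries already published to public/ by a
batch job, a pre-shared token in the AUTH_TOKEN environment variable,
and a hypothetical query_full_archive() standing in for the real,
costly database work.
"""
import os

from flask import Flask, abort, jsonify, request, send_from_directory

app = Flask(__name__)
AUTH_TOKEN = os.environ.get("AUTH_TOKEN", "")


@app.route("/")
@app.route("/<path:filename>")
def public_tier(filename: str = "index.html"):
    # Freely available, lightweight, cacheable: no database behind it.
    return send_from_directory("public", filename)


@app.route("/api/archive")
def rich_tier():
    # The expensive path exists only for verified visitors.
    token = request.headers.get("Authorization", "")
    token = token.removeprefix("Bearer ").strip()
    if not AUTH_TOKEN or token != AUTH_TOKEN:
        abort(401)
    return jsonify(query_full_archive())


def query_full_archive() -> dict:
    # Placeholder for the real query against the full dataset.
    return {"note": "full archive would be assembled here"}


if __name__ == "__main__":
    app.run(port=8080)
```

Everything expensive sits behind the authenticated route; everything public is a file that can be cached, mirrored, and forgotten about.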

The third principle is to embrace human timescales. The real-time web was always somewhat illusory. There’s nothing lost by returning to batch publication: daily updates, weekly digests, seasonal summaries. Every live endpoint is an attack surface; every static publication is a fortress.
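
In practice this is nothing more exotic than a script run by cron. Here is a sketch that reuses the snapshot format from the earlier example and assumes the daily job also files its output away under a dated archive; none of these paths are load-bearing, they only illustrate the cadence.

```python
"""Weekly digest: roll daily static snapshots up into one page.

Illustrative assumptions: the daily job also archives each snapshot as
public/archive/YYYY-MM-DD.json, and this script runs once a week from
cron (something like: 0 6 * * 0 /usr/bin/python3 weekly_digest.py).
Nothing here ever runs in response to a visitor's request.
"""
import json
from pathlib import Path

ARCHIVE = Path("public/archive")
DIGEST = Path("public/weekly.html")


def build_digest(days: int = 7) -> None:
    snapshots = sorted(ARCHIVE.glob("*.json"))[-days:]
    rows = []
    for snap in snapshots:
        data = json.loads(snap.read_text())
        means = [d["mean"] for d in data.get("daily_means", [])]
        overall = sum(means) / len(means) if means else float("nan")
        rows.append(f"<tr><td>{snap.stem}</td><td>{overall:.3f}</td></tr>")
    DIGEST.write_text(
        "<h1>Weekly digest</h1>"
        "<table><tr><th>Day</th><th>Network mean</th></tr>"
        + "".join(rows)
        + "</table>"
    )


if __name__ == "__main__":
    build_digest()
```

The digest is regenerated once a week, on a schedule I choose, and between runs the page is just another static file.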

I’ve begun implementing these principles in my own environmental monitoring system. It runs inference locally, generates snapshots periodically, and publishes lightweight files. The heavy analysis happens on my machines, on my schedule. What reaches the public internet is deliberately minimal, deliberately static, deliberately robust.

This is field station thinking applied to the information crisis. For decades I worked with satellite uplinks that cost dollars per kilobyte, with solar panels providing watts, not kilowatts. The constraints forced elegance: process locally, transmit only summaries, design for intermittent connectivity. These are exactly the design principles that will survive the coming eutrophication.

The call to action is simple: plan for the worst. Assume that any system relying on open, unauthenticated, server-side-processed traffic will fail. Build for resilience rather than scale, for longevity rather than growth. Accept that the open commons is being enclosed not by policy but by physics.

The CSET report warns that “increasingly automated AI R&D is a potential source of major strategic surprise.” The strategic surprise I’m describing is different but related: the infrastructure that carries all information is about to be overwhelmed by the very systems it enabled. The pipe capacity problem is coming before the alignment problem, or concurrently with it.

What I find strangely hopeful is that the solutions are known. We don’t need new technology; we need old discipline. The principles of bandwidth conservation were worked out decades ago by people operating under genuine constraints. We abandoned them when bandwidth seemed infinite. Now we need them again.

The agents are coming. The commons is being enclosed. The future belongs to those who built their fortresses before the flood.