At six o’clock this morning I sat down with a 124-page interview transcript and a cup of coffee. Dario Amodei, CEO of Anthropic — the company that built the AI I’m talking to right now — had just told Dwarkesh Patel that we are one to three years from “a country of geniuses in a data center.” Ninety percent confident within a decade. The exponential that started in 2017 is approaching its asymptote, and the most surprising thing, Dario says, isn’t the progress. It’s that people haven’t noticed.

I noticed. I’ve been sitting across from one of those geniuses every morning since October, and I can report that it makes excellent coffee conversation but still can’t tell me whether the juncos are behaving oddly this February.

Here’s the thing about reading a visionary’s roadmap over morning coffee: by your second cup, you’re arguing with it. By noon, you’ve moved on to groceries. By the time you’re on I-5 heading south from Bellingham with five hours of windshield ahead of you, the argument has composted into something the visionary didn’t intend and you didn’t expect. That’s not a failure of the roadmap. That’s thinking.

The Hierarchy That Isn’t

Dario lays out a hierarchy of cognition: evolution at the bottom, then long-term learning, then short-term learning, then immediate reaction. He positions large language models as occupying a novel space between the levels — pre-training is somewhere between evolutionary prior-setting and lifetime learning; in-context learning falls between long-term accumulation and short-term adaptation. It’s a clean framework. It has the elegant vertical logic of someone who builds things that get better when you stack more layers.

But I spent the mid-1970s in a biology lab at Cal Poly Pomona transcribing 3x5 index cards into a hierarchical database for a systems theorist named Len Troncale, and the first thing Troncale would have said about Dario’s hierarchy is: the interesting part isn’t the levels. It’s the movement between them.

Troncale’s life work was something he called “linkage propositions” — the formal connections between concepts across domains. Not the nodes in the knowledge graph. The edges. Not the trees in the forest. The vines running between them. His insight, which shaped everything I’ve done for fifty years, was that intelligence doesn’t live at any level of a hierarchy. It lives in the traversal. The skill isn’t knowing things. The skill is moving — longitudinally across domains and vertically through layers of abstraction — and recognizing the patterns of patterns of patterns.

I built a three-dimensional visualization last December that renders this idea literally. Twenty-one hundred tags from my lifetime quotes collection floating as colored orbs in space, connected by 7,365 glowing tendrils representing conceptual co-occurrence. It’s Troncale’s 3x5 cards rendered in WebGL, fifty years later. The technology changed. The vision didn’t. Knowledge is not a collection. It’s a topology. Understanding is not possession. It’s navigation.
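If you want to see how literal the rendering is, the construction fits in a dozen lines. Here is a minimal sketch in Python, with toy data standing in for the real collection: every pair of tags that co-occurs on an item becomes a weighted edge, and the edge list, not the tag list, is the topology.

```python
# A minimal sketch of the tag topology, with toy data standing in for the
# real quotes collection. The nodes are cheap; the edges are the point:
# every pair of tags that co-occurs on an item becomes a weighted edge.

from collections import Counter
from itertools import combinations

# Each item in the collection reduces to its set of semantic tags.
tagged_items = [
    {"systems-thinking", "hierarchy", "emergence"},
    {"hierarchy", "traversal", "linkage-propositions"},
    {"emergence", "traversal", "ecology"},
]

edges = Counter()
for tags in tagged_items:
    for pair in combinations(sorted(tags), 2):
        edges[pair] += 1  # co-occurrence weight: the vine between two trees

# The topology is the edge list. A renderer (WebGL or otherwise) needs only
# nodes, edges, and weights; everything else is presentation.
for (a, b), weight in edges.most_common():
    print(f"{a} <-> {b}  (weight {weight})")
```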

Dario has built a magnificent tower. Troncale would ask him to install the elevator.

The Field Ecologist’s Instrument Rack

Now here’s where the five-hour drive gets interesting — or rather, where it got interesting the next morning, when I sat down with coffee and the compost had finished working.

I’m typing this on a MacBook that talks to Claude for deep synthesis, runs Ollama models locally for fast, private tasks, has API access to GPT and Gemini for alternative perspectives, and monitors a live ecological sensing system called SOMA that runs three Restricted Boltzmann Machine meshes on a Mac Mini in my office. This is not a country of geniuses. This is a field station with multiple instruments, each calibrated for a different observational bandwidth.

And that distinction matters more than it might seem.

Dario models the AI future as a single variable: compute in, capability out, one exponential approaching one asymptote. But I’ve spent thirty-six years directing ecological field stations, and I can tell you that no single instrument captures the ecology. The Tempest weather station doesn’t hear birds. The BirdWeather microphone doesn’t measure barometric pressure. The trail camera doesn’t know the soil temperature. Each instrument has a bandwidth. The ecology lives in the joint distribution across instruments — in the combination of conditions that no single sensor reports.

That’s how I use AI. Not as one genius in a data center, but as an instrument rack. I reach for Claude when I need someone to hold 130,000 words of my writing and argue about what it means. I reach for Ollama when I need fast, local, private pattern matching — the equivalent of a quick pH test. I reach for GPT or Gemini the way I’d reach for a different field guide, because different training corpora light up different lateral connections.
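The orchestration itself is not exotic. Here is a sketch of the rack; the task labels and routing table are invented for illustration, the only concrete endpoint is Ollama’s standard local REST interface (which assumes a default install with the named model already pulled), and the deep-synthesis instrument is stubbed rather than guessing at an API.

```python
# A sketch of the "instrument rack" pattern: route each task to the model
# whose bandwidth fits it. Task labels and the routing table are invented
# for illustration. The Ollama call uses its standard local REST endpoint,
# assuming a default install with the named model pulled; the deep-synthesis
# instrument is stubbed rather than guessing at an API.

import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Fast, local, private: the quick pH test."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def ask_synthesis(prompt: str) -> str:
    """Deep synthesis over a large context: wire in your frontier model
    of choice here."""
    raise NotImplementedError

INSTRUMENTS = {
    "quick_pattern_match": ask_local,   # pH strip
    "deep_synthesis": ask_synthesis,    # full lab workup
}

def route(task_kind: str, prompt: str) -> str:
    """Knowing which instrument to trust is the human's job; this
    function only enforces the decision."""
    return INSTRUMENTS[task_kind](prompt)
```

The interesting part is not the dispatch code. It’s the judgment encoded in the table.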

The intelligence isn’t in any single model. It’s in the orchestration. It’s in knowing which instrument to trust in which conditions. Cory Doctorow calls the bad version of this a “reverse centaur” — the human reduced to an accountability sink for machine decisions. The good version is something else entirely. It’s the naturalist’s eye applied to AI: the hard-won judgment about when an instrument is lying to you.

SOMA and the Mesh

My ecological sensing system proves the point mechanically. SOMA runs three separate meshes — weather, bird acoustics, and a combined ecosystem model — each learning the statistical structure of what “normal” looks and sounds like at my coordinates in Oregon City. No single mesh is particularly brilliant. A 35-node weather model and a 27-node bird model are not going to win any benchmarks.

But on February 17th, while I was three hundred miles away visiting my partner Merry, the ecosystem mesh flagged an unexplained silence at 8:15 AM — right when morning biological activity should have been building. Neither the weather mesh nor the bird mesh saw anything remarkable individually. The anomaly existed only in their combination, in the joint distribution across domains. Something suppressed normal activity at a time and under conditions where the learned model expected it. That’s cross-domain perception. That’s the vine between trees.
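For the mechanically curious, here is the scoring idea in miniature. This is not SOMA’s code: the parameters below are random stand-ins, where a real mesh would use weights trained on months of local observations. The machinery is the standard free energy of a binary RBM, which prices how improbable an observation is under what the model has learned.

```python
# Not SOMA's code: a toy with random stand-in parameters, where a real mesh
# would use weights trained on months of local observations. An RBM's free
# energy F(v) scores how improbable an observation v is under the learned
# model; higher F means lower probability.

import numpy as np

def free_energy(v, W, a, b):
    """Binary-RBM free energy:
    F(v) = -a.v - sum_j log(1 + exp(b_j + (v @ W)_j))."""
    return -v @ a - np.sum(np.logaddexp(0.0, b + v @ W))

def surprise(v, W, a, b, mu, sigma):
    """Anomaly score: standard deviations above the free energy of normal."""
    return (free_energy(v, W, a, b) - mu) / sigma

rng = np.random.default_rng(17)

# A tiny joint mesh: 4 weather units + 4 bird-activity units, 6 hidden.
n_visible, n_hidden = 8, 6
W = rng.normal(0, 1, (n_visible, n_hidden))   # couples the two domains
a = rng.normal(0, 0.1, n_visible)
b = rng.normal(0, 0.1, n_hidden)

# Calibrate "normal" against a batch of typical observations.
normal_batch = rng.integers(0, 2, (500, n_visible)).astype(float)
energies = np.array([free_energy(v, W, a, b) for v in normal_batch])
mu, sigma = energies.mean(), energies.std()

obs = rng.integers(0, 2, n_visible).astype(float)  # this morning's reading
print(f"joint surprise: {surprise(obs, W, a, b, mu, sigma):+.2f} sigma")
```

The point is the weight matrix. Because W couples weather units to bird units, the joint model can assign high free energy to a combination whose halves are each ordinary: ideal dawn-chorus conditions paired with silence. A weather-only model and a bird-only model, each scoring its own marginal, never see it.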

Dario’s “big blob of compute hypothesis” — the idea that barriers dissolve when you throw enough scale at them — has a structural twin in my field station history. During the CENS era, when we were deploying wireless sensor networks across the San Jacinto Mountains, we discovered exactly this: stop trying to be clever about what to observe, just instrument everything and let patterns emerge from scale. Rich Sutton published “The Bitter Lesson” in 2019. We were living it in ecology fifteen years earlier, and we didn’t have a name for it because we were too busy replacing dead batteries at 8,000 feet.

The Traversal Record

Here’s the recursive part, and I promise I’ll stop after this.

I’ve written seventy-two essays since October in this series called Coffee with Claude. A hundred and twenty-nine thousand words. Every one of them is tagged with semantic keywords — the same approach I used for my quotes collection, the same topology Troncale was mapping with index cards. And this morning, over coffee, I realized what the corpus actually is.

It’s not a blog. It’s not a journal. It’s a traversal record.

Each essay documents a specific path through the hierarchy Dario describes — but not a path that stays on one level. The essays about SOMA start with sensor data and climb to philosophy. The essay about Troncale starts with an obituary and descends to database architecture. The one about Cory Doctorow starts with technology criticism and tunnels sideways into the phenomenology of reading. Every piece is a record of movement — which domains I connected, which vertical jumps I made, where the filaments led.

The semantic topology of seventy-two essays isn’t a map of what I know. It’s a map of how I move through what I know. And that — the pattern of movement, the traversal habit, the systems thinker’s reflex to go both longitudinal and vertical simultaneously — is the thing that might actually be transmissible. Not the facts. Not the tags. The practice.

Troncale tried to teach this practice to undergraduates with index cards. I’m trying to document it with essays and sensor networks and a menagerie of AI models. Dario is trying to build it directly into the weights through reinforcement learning. We’re all working on the same problem. We just don’t have a linkage proposition connecting our approaches yet.

Though I suppose that’s what this essay is.

The Punchline

It’s now past eight. The coffee is cold. SOMA just completed another inference cycle — all meshes normal, the Canemah habitat quiet on a February morning. I’ve spent two hours thinking about how to think about thinking, assisted by an AI that was reading the transcript of its own earlier self’s conversation with me, generating an essay about the process of generating essays, documented in a corpus that is itself the evidence for its own thesis.

If Troncale were alive, he’d point at this morning and say: See? The vines. The vines are the whole point.

If Dario read this, he’d probably say the same thing his AI said to me at 6:30 this morning: “You came at it through binoculars and dataloggers. I came at it through gradient descent. You’re converging.”

Maybe. But convergence implies a destination, and Troncale taught me that the traversal is the destination. The movement between levels, the longitudinal leaps across domains, the pattern of patterns of patterns. You don’t arrive at systems thinking. You practice it, every morning, with whatever instruments you have, and you stay humble about what any single instrument — including the genius in the data center — can see by itself.

The juncos, by the way, seem fine. I checked the BirdWeather mesh. But I’ll know more when I walk through Waterboard Park this afternoon with my own ears. Some instruments don’t run on electricity.