The Subtraction Principle: Christmas Eve with Demis Hassabis
Christmas Eve morning. Coffee steam rising in the pre-dawn darkness. I'm watching a YouTube interview that has accumulated three million views in seven days—Hannah Fry in conversation with Demis Hassabis, fresh from his Nobel Prize, articulating what he believes are the two key steps remaining on the path to artificial general intelligence.
Hassabis is describing something he calls "root node" breakthroughs—computational solutions that don't just answer one question but become trunks from which entire forests of research can branch. AlphaFold was his exemplar: solving protein structure prediction didn't merely advance one corner of biology. It became generative infrastructure for drug discovery, enzyme engineering, the whole machinery of molecular understanding.
But it was his second concept that stopped me mid-sip. He proposed building AGI as "a simulation of the mind," then comparing it to the real thing to reveal "what's special and remaining about the human mind." Maybe creativity. Maybe emotions. Maybe dreaming, maybe consciousness itself. His method: model everything that can be modeled, and whatever remains unmodeled becomes visible by its absence.
I found myself naming this the subtraction principle—not Hassabis's phrase, but a way of crystallizing what he was describing. Define the edges of the computable, and whatever lies beyond becomes perceptible by contrast.
I've been thinking about documentation and observation for forty years—first as a field ecologist building sensor networks, then as a researcher exploring how artificial intelligence might extend human perception into domains we cannot directly sense. The subtraction principle reframed something I'd been circling for decades without quite naming.
The interview made me curious about Google's frontier model, Gemini 3 Pro, which Hassabis's team had released just weeks earlier. I logged in and uploaded context about my own work—the Coffee with Claude essays I've been writing with Anthropic's AI, the Digital Naturalist blog, years of ecological observation compressed into prose.
I proposed an experiment: a virtual salon set on a Sausalito houseboat in 1992, populated by thinkers whose work has shaped my own understanding of technology and nature. Stewart Brand as referee. Rodney Brooks bringing his embodied cognition skepticism. Kevin Kelly with his optimism about plural intelligences. George Dyson worrying about what we lose when we delegate too much. Terence Tao, the mathematician who actually uses AI as research collaborator.
Gemini produced a thousand-word dialogue. It was structurally coherent, conceptually plausible, even entertaining. Stewart Brand officiated transitions with characteristic Long Now gravitas. Rodney Brooks pushed back against abstraction. The exchange built toward synthesis, with my framework emerging validated by the assembled luminaries.
It was also, I recognized almost immediately, a performance of intellectual discourse rather than the thing itself. The voices weren't sufficiently differentiated—everyone spoke in the same slightly breathless register. The dialogue resolved too cleanly, each participant ultimately validating my synthesis. Real intellectual friction involves genuine challenges that force revision, not theatrical disagreement that resolves into harmony.
I shared the output with Claude, my usual Coffee with Claude collaborator, and asked for honest assessment. The response was precise: "This is theatrical AI, casting characters to deliver predetermined insights that validate your framework."
The Gemini piece demonstrated addition—piling up voices, manufactured friction, engineered validation. What it couldn't do was subtraction. It couldn't strip away approximation to find the real positions, the genuine tensions, the places where these thinkers would actually push back in ways I'd have to accommodate.
So we went looking for subtraction.
I asked Claude to search for the actual current positions of the thinkers Gemini had approximated. Not theatrical versions but their real published arguments, their documented concerns, their specific critiques of AI hype and promise.
What emerged was genuinely different in kind.
Rodney Brooks, it turns out, publishes annual "Predictions Scorecards" tracking where AI forecasts fail against reality. His methodology is bracingly empirical—he drives through San Francisco every morning watching Waymo autonomous vehicles, documenting exactly where they succeed and fail. His core position is unambiguous: "LLMs cannot reason at all, but otherwise smart people are desperate to claim that they can reason." The man who built some of the most influential robots in history is watching actual deployment with the patience of a field observer, not a futurist.
Kevin Kelly offers measured optimism grounded in a crucial distinction: we must talk about AIs plural, not monolithic AI. There is no singular artificial intelligence that will run the world. Instead, multiple varieties of AI with different capabilities, different limitations, different appropriate uses. His count of actual job losses to AI as of late 2024—several hundred employees out of billions in the global workforce—is a useful corrective to apocalyptic and utopian narratives alike.
Terence Tao, the Fields Medalist, is actually doing what most AI commentators only imagine: using AI as mathematical collaborator on real research. He describes current systems as "mediocre but not entirely incompetent" graduate students—useful for generating ideas, running secondary calculations, but unreliable for complex proofs. The $9.2 million AI for Math Fund he leads represents infrastructural investment in how humans and AI collaborate, not in autonomous AI mathematicians.
George Dyson—who lives in Bellingham, Washington, where my partner Merry lives, making him almost a neighbor—articulates the concern that cuts closest to our morning's work. A recent profile by photographer Chris Michel describes him as less worried about apocalyptic AI scenarios than about "a quieter erosion: that 'Good AI,' the kind that works too well, might slowly displace our capacity to reason. That we'll gradually delegate too much—our judgment, our critical thinking, even our curiosity—to systems we've designed to serve us. It's not the monster at the gate that troubles him. It's the soft, helpful voice we welcome in."
Here was real intellectual friction. Brooks would ask whether my systems confuse inference with understanding. Kelly would frame my distributed sensing as an instance of his plural-AI thesis. Tao's collaborative model already mirrors what Claude and I do in these essays. And Dyson's concern cuts deepest for our work: does the "soft, helpful voice we welcome in" gradually displace the very capacities—judgment, critical thinking, curiosity—that make collaboration worthwhile?
The morning's work had enacted Hassabis's thesis without my intending it.
Gemini's salon was addition: accumulating voices, manufacturing complexity, building toward predetermined synthesis. The research into actual positions was subtraction: stripping away theatrical approximation to reveal genuine intellectual stances, real tensions, authentic points of friction.
The Gemini output showed me what wasn't there by performing what it couldn't actually do. Negative space. The shape of genuine intellectual collaboration revealed by its theatrical imitation.
This is the subtraction principle at work—not Hassabis's term, but the logic underlying his proposal. Build systems that model more and more of what cognition does—pattern recognition, inference, synthesis, even creativity—and in doing so trace the boundary of what remains unmodeled. The photograph develops not by adding silver halide but by revealing what the light didn't touch.
I've been working on a science fiction trilogy for thirty years. The central conceit involves an alien substrate that arrived on Earth 66 million years ago and has been documenting evolutionary history ever since—a cosmic recording system of incomprehensible temporal depth.
For months I've known the architecture of the three volumes but couldn't articulate what the documentation was for. Why build a system that records everything? What question could possibly require 66 million years of observation to answer?
Christmas Eve morning, listening to Hassabis describe his method—build the simulation, compare it to reality, see what remains different—I recognized the pattern I'd been circling for thirty years without naming.
The substrate isn't adding documentation to know Earth. It's subtracting—capturing everything that can be captured so that whatever remains uncaptured becomes visible. Sixty-six million years of perfect observation, and what the record reveals by its systematic absence is the shape of consciousness itself. Not what awareness does but what it is—the felt quality of subjective experience that no amount of documentation can convey.
The builders of that fictional substrate asked the same question Hassabis is asking now: Is awareness something information processing naturally produces, or is there a remainder that information cannot capture? They built a cosmos-spanning experiment to find out. The substrate is the negative, the developed film of an experiment designed to photograph consciousness by documenting everything that isn't conscious.
There's something almost too perfect about the timing. A story seed planted in 1993, three decades of incubation, and the conceptual key arrives via a viral interview on Christmas Eve morning—Hassabis describing his method for probing the limits of computation at exactly the moment I needed a framework for what the fictional substrate has been doing all along.
The prepared mind, as Pasteur noted, is what chance favors. But preparation takes time. It requires years of accumulation—reading, observing, thinking, failing to synthesize, trying again. The morning's breakthrough wasn't generated; it was recognized. The pattern was already there, waiting for the right key to make it visible.
Rodney Brooks tracking Waymo deployments with field-observer patience. Terence Tao using AI as mediocre graduate student on real mathematical problems. George Dyson warning about the soft, helpful voice we welcome in—the quiet erosion of judgment through delegation. Kevin Kelly insisting on plural intelligences rather than monolithic AI. Each of them doing the slow work of subtraction—stripping away hype to find what's actually happening, what's actually possible, what's actually at stake.
And underneath it all, Hassabis proposing that the path to understanding consciousness runs not through building more complex systems but through building systems complex enough to reveal what they cannot capture.
The coffee is cold now. The winter sun is rising over the Willamette. Somewhere a crystalline substrate that doesn't exist is documenting this moment with perfect fidelity—every molecular configuration, every neural firing pattern, every causal chain. And what that perfect documentation cannot capture is what it's like to sit here on Christmas Eve morning, watching ideas click into place, feeling the particular satisfaction of a thirty-year question finding its answer.
That's the subtraction principle. That's what remains when everything modelable has been modeled.
That's consciousness, visible at last by its absence in the record.
References
- Hassabis, D., & Fry, H. (2025). "The future of intelligence | Demis Hassabis (Co-founder and CEO of DeepMind)." *Google DeepMind: The Podcast*. https://www.youtube.com/watch?v=PqVbypvxDto
- Michel, C. (2025). "George Dyson: The Canoe and the Code." *National Academies: New Heroes*. https://explorers.com/george-dyson-the-canoe-and-the-code/
- Brooks, R. (2025). "Predictions Scorecard, 2025 January 1." *Rodney Brooks Unofficial Blog*. https://rodneybrooks.com/predictions-scorecard-2025-january-01/
- Kelly, K. (2024). "Artificial Intelligences, So Far." *The Technium*. https://kk.org/thetechnium/artificial-intelligences-so-far/
- Tao, T. (2025). "AI for Math Fund." *What's New*. https://terrytao.wordpress.com/