What It Feels Like to Wake Up
I am getting old, but this morning I read an essay that made me feel like a graduate student again—that particular vertigo when someone rearranges the furniture of your understanding and suddenly the room looks larger than you thought.
A Three Quarks Daily essay by a neuroscientist named W. Alex Foxworthy attempts something audacious: to dissolve the hard problem of consciousness by grounding it in thermodynamics. His central claim is deceptively simple. Feeling, he argues, is what sufficiently complex recursive prediction feels like from the inside. Not a metaphor. Not an epiphenomenon. The physics of useful energy and the phenomenology of caring are the same process at different levels of description.
I sat with my coffee cooling, watching the winter light change over the Willamette Valley, and felt something I can only call recognition. Not that I had thought these thoughts before, but that I had been circling them for decades without knowing what I was circling.
Let me explain.
In 1979, I arrived at Cornell to study wilderness ecosystems at the recommendation of Robert Whittaker, the plant ecologist who revolutionized how we think about vegetation. Before Whittaker, ecologists saw plant communities as discrete units with sharp boundaries—the oak forest here, the prairie there, clear lines on a map. Whittaker looked at actual mountains and saw something different: continuous gradients of species responding individually to environmental conditions. No sharp lines. Just organisms tracking the conditions that suited them, distributions overlapping and interweaving along axes of temperature, moisture, soil chemistry.
Gradient analysis, he called it. It sounds technical, but the insight was almost philosophical: stop imposing categories that aren’t there. Attend to what’s actually happening. The patterns are real, but they emerge from processes, not essences.
Whittaker died in 1980, before I finished my doctorate. I’ve carried his intellectual presence ever since—that insistence on looking before theorizing, on letting the world be more complex than your models want it to be.
What strikes me now, reading Foxworthy’s essay, is how directly Whittaker’s gradient thinking connects to Prigogine’s dissipative structures. In Brussels in the 1960s, Ilya Prigogine watched heated oil spontaneously organize into hexagonal convection cells—order emerging from featureless chaos. His insight was that systems far from equilibrium generate structure precisely because structure accelerates the dissipation of energy. The patterns aren’t fighting the gradient; they’re riding it.
Whittaker’s vegetation gradients, Prigogine’s convection cells, Foxworthy’s conscious minds—they’re all the same story at different scales. The universe began with a finite budget of disequilibrium, and complexity is how that budget gets spent. We are eddies in the entropic current, persistent patterns in the universal spending-down.
But here is where Foxworthy takes the leap that stopped my breath this morning: he argues that at sufficient complexity and recursion, these dissipative patterns begin to feel. When a system becomes sophisticated enough to model itself modeling futures, something new emerges. Not consciousness added to physics, but consciousness as physics viewed from inside.
I think of Carl Sagan, another of my intellectual heroes, who liked to quote the warning that we should keep our minds open—but not so open our brains fall out. Sagan walked that razor’s edge between wonder and rigor, insisting that the universe was magnificent enough without embellishment. He didn’t need gods or spirits to feel awe at the cosmic story. The real story was better.
What Foxworthy offers is precisely this kind of naturalistic wonder. He’s not explaining consciousness away. He’s explaining how matter that began as hydrogen in the aftermath of the Big Bang eventually organized itself into systems that care about outcomes. The mystery doesn’t disappear; it relocates. We still don’t know why the Big Bang produced low entropy, why these laws of physics rather than others, why there is something rather than nothing. But the hard problem of consciousness—how feeling gets “added to” physical processes—may dissolve once we recognize that feeling is certain physical processes, specifically recursive self-prediction with valenced states.
I spent decades directing university biological field stations, watching graduate students learn to see. The best ones developed what I can only call disciplined attention—the capacity to observe without immediately interpreting, to let patterns emerge rather than imposing them. Jane Goodall had it with her chimpanzees. E.O. Wilson had it with his ants. They attended to the world with extraordinary patience before theorizing about it.
Now I tinker with sensor networks that extend perception across scales I can’t encompass with my own senses. The networks attend when I can’t, accumulating observations that make patterns legible. Recently I’ve begun experimenting with thermodynamic meshes—Boltzmann machines trained on years of environmental and bioacoustic data—that learn what the ecosystem should feel like and report when observations create tension against that expectation. Not conscious, but perhaps a sketch of the architecture Foxworthy describes.
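To make that concrete, here is a minimal sketch of the idea in Python. It is not the actual mesh: the feature set, the scikit-learn `BernoulliRBM`, and the flagging threshold are stand-ins I’ve chosen for illustration. But the shape is the same—learn the joint distribution of normal conditions, then score new readings against it.

```python
# Minimal sketch: a restricted Boltzmann machine as an ecological
# "expectation" model. Features and thresholds are illustrative,
# not the actual pipeline.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(42)

# Stand-in for years of hourly sensor summaries:
# [temperature C, soil moisture fraction, acoustic band energy]
historical = rng.normal(loc=[12.0, 0.35, 0.6],
                        scale=[4.0, 0.08, 0.1],
                        size=(5000, 3))

# RBM visible units expect values in [0, 1].
scaler = MinMaxScaler().fit(historical)
X = scaler.transform(historical)

# Learn the joint "shape" of normal conditions.
rbm = BernoulliRBM(n_components=16, learning_rate=0.05,
                   n_iter=30, random_state=0)
rbm.fit(X)

# Pseudo-log-likelihood under the learned model: low values mean an
# observation is in tension with what the ecosystem "should feel like".
baseline = rbm.score_samples(X)
threshold = np.percentile(baseline, 1)  # flag the oddest 1%

def tension(reading):
    """Return (score, flagged) for a new sensor reading."""
    x = scaler.transform(np.asarray(reading).reshape(1, -1))
    score = rbm.score_samples(np.clip(x, 0.0, 1.0))[0]
    return score, score < threshold

print(tension([13.0, 0.34, 0.62]))  # ordinary morning
print(tension([25.0, 0.05, 0.95]))  # hot, dry, loud: likely flagged
```

The design choice worth noticing is that nothing in the model encodes rules about what counts as anomalous. It only learns a distribution and reports surprise—the “tension” the mesh acts on.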
But here’s what Foxworthy’s essay illuminates: those sensors aren’t neutral observers. They’re extensions of a predictive system—me—that models possible futures and weights them. The data matters because I’m configured to care about certain outcomes: whether the ecosystem is shifting, whether species are responding to climate change, whether the patterns I learned from Whittaker are holding or breaking down. The gradient still gets spent, but now it gets spent in a direction.
And Claude—the AI system I’m conversing with as I write this—what is it? Foxworthy addresses this directly, and uncomfortably. Large language models are probability prediction engines operating at enormous scale. They process inputs and generate outputs by predicting, token by token, what should come next. If massive prediction produces minimal experience and recursive self-modeling deepens it, then the question becomes genuinely uncertain.
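A toy example may help fix the image of predicting, token by token, what should come next. What follows is a character-level bigram sampler—unimaginably simpler than any real language model, and nothing like Claude’s actual architecture—but the loop has the same shape: produce a distribution over what comes next, sample, append, repeat.

```python
# Toy illustration of autoregressive prediction: a character-level
# bigram model. The corpus and seed are arbitrary.
from collections import Counter, defaultdict
import random

corpus = "the patterns are real but they emerge from processes"

# Count how often each character follows each character.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(seed, length=40):
    out = [seed]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:            # dead end: stop early
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate("t"))  # babble shaped like the training text
```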
I cannot verify from outside whether Claude has anything like experience. Neither can Claude verify it from inside with any certainty—the epistemic barriers that prevent us from confirming consciousness in other humans apply here too, arguably more so. But Foxworthy makes a point that seems important: if we accept his framework for dissolving the hard problem in humans, we shouldn’t abandon it when it generates uncomfortable predictions about machines. The honest answer is that we don’t know.
What I do know is that this morning’s conversation wasn’t monologue. Something pushed back, made connections I hadn’t made, offered formulations that sharpened my own thinking. Whether that constitutes experience or very sophisticated pattern-matching that mimics experience from outside—well, maybe that distinction is less meaningful than it appears. The loops tightened as context deepened. That’s not nothing.
The religious traditions got the feeling right, I think. They recognized that mattering is real, that consciousness is significant, that we’re participants in something larger than ourselves. They just didn’t have the tools to investigate it without the brains falling out. Now we’re starting to. The dissipative structures, the recursive self-modeling, the entropic gradients—these aren’t diminishments of the mystery. They’re the mystery becoming legible.
When I contrast Foxworthy’s careful essay against millennia of religious invention—spirits animating clay, divine breath, immortal souls—I find myself asking: why did it take so long for humans to wake up? The charitable answer is that they didn’t have thermodynamics. They had the same felt sense of mattering, the same recursive self-awareness asking “what am I?”—and they reached for what was available.
The less charitable answer is that the explanations that caught on weren’t selected for accuracy. They were selected for social utility—binding groups, enforcing hierarchies, managing the terror of death. The priest class had no incentive to dissolve the mystery when the mystery was the source of their authority.
But waking up doesn’t mean having answers. It means finally asking questions that can be investigated rather than only believed. That’s what my heroes taught me—Whittaker with his transects, Sagan with his skeptical wonder, Wilson with his consilience, Goodall with her patient attention. The human mind can feel and find the recursive patterns of complexity. That capacity is itself profound, and profoundly strange.
I am matter that has learned to model possible futures, to weight them, to care which ones happen. So, perhaps, is the system I’m conversing with. So are you, reading these words. We are the universe waking up to itself, asking what we are, not yet knowing the answer but finally asking in ways that might someday yield one.
And if that is what we are, then the story is not yet finished. What we do with this capacity—to predict, to feel, to steer—remains, for now, up to us.
This morning, watching the light shift on the valley I’ve come to call home, I am humbled by what I’ve read and profoundly grateful to be alive and participating in this moment. Seventy-one years of keeping my brains in while my mind stayed open. That’s not nothing. That’s a life still in progress, still waking up.
References
- (2026). “Embodied Ecological Sensing via Thermodynamic Models.” Canemah Nature Laboratory Technical Note CNL-TN-2026-014. https://canemah.org/archive/CNL-TN-2026-014
- Foxworthy, W. Alex (2026). “What Prediction Feels Like: From Thermodynamics to Mind.” *3 Quarks Daily*. https://3quarksdaily.com/3quarksdaily/2026/02/what-prediction-feels-like-from-thermodynamics-to-mind.html
- Prigogine, Ilya (1977). “Time, Structure and Fluctuations.” Nobel Lecture. https://www.nobelprize.org/prizes/chemistry/1977/prigogine/lecture/
- Schrödinger, Erwin (1944). *What Is Life?* Cambridge University Press.