Walking with AI versus the Death of Science
Sara Imari Walker’s recent essay in Noema asks whether AI will kill science or foster a scientific revolution. Her answer hinges on a deeper question: what is science, actually? She argues that science isn’t method—it’s intersubjective meaning-making, the social negotiation of which descriptions of reality a community will adopt. AI can execute within representational frameworks, she contends, but cannot participate in the creative act of recognizing when our maps have become inadequate.
It’s a thoughtful argument. But I think she’s missing something about how scientists actually work with these tools.
I came up through science in the pre-Internet decades. In the 1970s and 1980s, research began at the card catalog—a spatial addressing system for physical objects containing knowledge. You traversed the stacks, handled volumes, read in silence, and then did the crucial labor of synthesis alone, often days later, pen in hand. Understanding was the act of writing your own summary. The knowledge didn’t become yours until you’d metabolized it through that solitary process.
The card catalog preserved a clean boundary between the knowledge system and your consciousness. You went to it, extracted what you needed, returned to yourself to think. The boundary was spatial, temporal, and phenomenologically clear.
What’s changed with conversational AI isn’t just speed or convenience. Language itself has become the interface to accumulated human knowledge, and that interface talks back. The card catalog never said “but have you considered…” It never noticed connections between sources you hadn’t seen. It never pushed against your framing.
This matters for understanding the temporal architecture of traditional science. Those months of living with your own analysis before external feedback created a particular cognitive condition. You developed robust internal standards because validation was so delayed. The uncertainty was productive, forcing deeper self-scrutiny. A hypothesis you’d defend six months from now needed to survive your own evolving understanding. The temporal gap between formulation and scrutiny was a filter.
With conversational AI, that feedback loop compresses radically. I can iterate on an idea in minutes that might have taken weeks of solitary wrestling. But here’s what I’ve come to understand: this isn’t replacing the process—it’s extending it.
Consider what the traditional artifacts of science actually represented. The dissertation certified that someone had become a scholar. The peer-reviewed paper marked that someone had traveled from not-knowing to knowing. The credential was never really about the document—it was about the person who emerged from producing it.
Now those artifacts can be manufactured without the transformation they were supposed to certify. An AI can produce a document with all the formal features: literature review, methodology, results, discussion, properly formatted citations. It would be structurally complete and internally consistent.
But what would be absent? The three years of failed deployments before one worked. The committee meeting where your mentor dismantled your first framing and you had to rebuild from foundations. The August afternoon in the field when the data finally made sense in your body, not just your mind. The dissertation document was evidence of a transformation you underwent—proof you’d become capable of independent scholarship.
This is the concern at the heart of Walker's essay. The document can exist without the becoming. The proxy and the transformation have been decoupled. And suddenly we're forced to ask: what were we actually valuing? The knowledge itself, or the human passage that produced it?
An unnamed philosopher put it simply: it’s the path, not the destination.
The artifact was always a kind of receipt—proof of passage, not the value itself. The dissertation wasn’t the knowledge; it was evidence you’d walked a particular route through confusion into clarity. What AI produces isn’t a path walked. It’s a destination manufactured without passage. The receipt without the journey.
But this framing is incomplete. There are paths only machines can walk—and they constitute genuine traversal, not mere destination-manufacturing. When a simulation explores ten thousand parameter variations, mapping the boundaries of what’s possible by systematically testing what isn’t, that’s exploration of unmeasured space no human could cover. My sensor networks did something analogous in physical space: distributed observation points, none of them human, converging on patterns no single observer could detect.
What actually matters for scientific validity isn’t that humans walked the path. It’s that independent paths converge. Two scientists replicating an experiment. A simulation and a field observation arriving at the same result. The epistemic weight comes from independent traversal reaching consistent conclusions—regardless of who or what is walking.
Walker’s “irreducibly human” may need revising: irreducibly traversed, perhaps.
And here’s what Walker doesn’t fully consider: conversation itself can be a path. When I talk through ideas with AI at five in the morning, I’m not extracting answers—I’m thinking out loud, encountering resistance, following tangents, sometimes arriving somewhere unexpected. The dialogue has duration, friction, occasional wrong turns. It’s closer to walking the stacks than to downloading conclusions.
The real distinction isn’t human versus machine. It’s path versus destination. Any process that involves genuine traversal—uncertainty, iteration, surprise, integration—produces knowledge differently than one that generates endpoints without passage.
Walker worries that AI outputs arrive “already dead,” produced without the embodied creative act that accompanies scientific discovery. She borrows from Barthes: the scientist dies into publication, birthing shared understanding. But she’s comparing AI outputs to finished papers, which is the wrong comparison. The generative conversation—the notebook sketches, whiteboard sessions, 2 AM scribbles that precede formal articulation—that’s where AI can participate without killing anything.
The library stacks didn’t write my dissertation. Neither does AI. But both expanded where my thinking could travel.
After four decades of building what might be called technologies of perception—from early ecological movie maps through wireless sensor networks to current AI-enhanced observation systems—I’ve come to see each new tool as an extension of capacity rather than a replacement of process. The question isn’t whether AI can do science. The question is whether we’re still walking.
Humans using AI as trail companions are still walking. The path remains ours. What’s changed is the terrain we can cover—and now, the recognition that some terrain requires companions who can walk paths we cannot.
References
- Kuhn, Thomas (1962). *The Structure of Scientific Revolutions*. University of Chicago Press.
- Barthes, Roland (1967). "The Death of the Author." *Aspen Magazine*.
- Walker, Sara Imari (2025). "The Death of the Scientist." *Noema Magazine*. https://www.noemamag.com/the-death-of-the-scientist/