“I think you’ll like this article,” Merry wrote, and sent me a link.

It was early morning, still dark, 36 degrees outside with humidity near saturation. The BirdWeather station had begun picking up the dawn chorus—Anna’s Hummingbird, Golden-crowned Sparrow, Bushtit, Lesser Goldfinch, all singing ahead of the gray December dawn. Three hundred miles north at Owl Farm, Merry was starting her own day, and she’d found something she thought I should read.

She was right. She usually is about such things.

The paper, by Borjan Milinkovic and Jaan Aru, bears the rather technical title “On biological and artificial consciousness: A case for biological computationalism.” It addresses the question that has intensified with the rise of Large Language Models: Could artificial systems ever be conscious? But it approaches this question by asking a prior one that most discussions skip past: What is biological computation, actually? How does it differ from digital computation? And why might that difference matter for consciousness?

Their answer unfolds across fifteen dense pages, but the core argument is elegant. Biological brains don’t compute the way digital computers do. They exhibit two features that silicon architectures lack: scale inseparability and hybrid discrete-continuous dynamics. And these features aren’t incidental—they emerge from the brain’s severe metabolic constraints, from the evolutionary pressure to do more with less.

Scale inseparability means that computational processes at different organizational levels—molecular, cellular, population, whole-brain—don’t operate independently. They mutually generate and constrain one another. Higher scales emerge from lower scales, but higher scales also shape what happens at lower scales, continuously, bidirectionally. This isn’t hierarchy but what the authors call heterarchy: a tangled co-determination that can’t be cleanly decomposed into layers.
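
A crude cartoon can give the flavor of that mutual constraint, even though the authors’ point is that a real brain resists this kind of clean decomposition. In the Python sketch below, which is my own toy and nothing from the paper, a macro variable is nothing but an aggregate of the micro states, yet it reaches back down into every micro update, so neither level can be simulated on its own terms:

```python
# Toy sketch of two-way constraint between scales. Purely illustrative;
# not a model from Milinkovic and Aru. The macro state is generated by
# the micro units, and it simultaneously shapes each micro update.
import random

def step(micro, coupling=0.3, noise=0.1):
    """Advance all micro units one step under macro feedback."""
    macro = sum(micro) / len(micro)  # upward: micro states generate the macro state
    return [
        x + coupling * (macro - x) + random.uniform(-noise, noise)  # downward: the macro state pulls on each unit
        for x in micro
    ]

micro = [random.uniform(-1.0, 1.0) for _ in range(50)]
for _ in range(200):
    micro = step(micro)

print(f"macro mean {sum(micro) / len(micro):+.3f}, micro spread {max(micro) - min(micro):.3f}")
```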

Digital systems are designed for the opposite. Clean separation between levels is a feature, not a bug. Algorithms abstract away from hardware; software is insulated from implementation details. This modularity enables debugging, portability, and scalability. It’s what makes modern computing work. But it may also be what prevents digital systems from achieving the kind of integrated processing that consciousness requires.

The second feature—hybrid dynamics—refers to the simultaneous operation of discrete events (like action potentials) and continuous processes (like electric fields, graded potentials, and oscillatory modes) within the same substrate. Neurons spike discretely, but those spikes ride atop continuous membrane potentials, are shaped by continuous field effects, derive their meaning from continuous oscillatory phases. The discrete and continuous aren’t separate channels; they’re woven together in the same tissue.

Digital computers can approximate continuous functions to arbitrary precision, but approximation isn’t implementation. In biological systems, the physical dynamics are the computation. There’s no gap between algorithm and substrate. The substrate is the algorithm.
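
A toy example makes the hybrid character, and its digital imitation, concrete. The leaky integrate-and-fire neuron sketched below in Python is my own illustration, not a model from the paper: a continuous membrane potential, numerically chopped into timesteps, emits all-or-nothing spikes when it crosses a threshold. The chopping is exactly the gap the authors point to; in tissue the continuous dynamics are not approximated, they simply happen.

```python
# Toy leaky integrate-and-fire neuron: continuous membrane dynamics
# punctuated by discrete spike events. A digital sketch of hybrid
# dynamics, and, per the paper's argument, only an approximation of
# them, because the continuous part must be discretized into timesteps.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0, duration=100.0):
    """Return (voltage_trace, spike_times) for a constant input current.

    Units are arbitrary; the values loosely mimic cortical neurons.
    """
    v = v_rest
    trace, spikes = [], []
    for step in range(int(duration / dt)):
        t = step * dt
        # Continuous part: the membrane potential relaxes toward rest
        # while being driven by the input current (Euler-discretized).
        v += ((v_rest - v) + input_current) / tau * dt
        # Discrete part: a threshold crossing emits an all-or-nothing spike.
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif(input_current=20.0)
print(f"{len(spikes)} spikes in 100 time units across {len(trace)} samples")
```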

Reading this in the dim morning light, I felt the shock of recognition that comes when someone articulates what you’ve been doing without quite having words for it.

I’ve spent thirty-six years as a field ecologist, and the mode of perception that work cultivates is precisely what Milinkovic and Aru describe as scale-integrated processing. When I see a warbler at the feeder, I don’t observe it first, then categorize it, then place it in population context, then consider its phenological significance, then relate it to decades of personal observation. Those scales are co-present. The bird arrives already embedded in pattern—individual, population, season, climate, memory. The meaning at each level is constituted by the others.

This isn’t metaphor or mysticism. It’s how trained perception works. The reductionist alternative—sequential analysis, hierarchical abstraction—describes a procedure that could be implemented step by step. But that’s not how the naturalist’s consciousness operates. The scales are held simultaneously, and something would be lost if they weren’t.

The Macroscope project, which has occupied much of my work these past few years, now reveals itself as an attempt to extend this mode of cognition. The distributed sensors, the weather stations, the acoustic monitors, the data streams from multiple locations integrated into a unified architecture—these aren’t tools for collecting information to be analyzed later. They’re prosthetics for perception. They extend sensory reach across geographic distance while preserving the continuous, substrate-embedded character of conscious experience.

When I check conditions at Owl Farm from my desk in Oregon City, I’m not retrieving data points to be interpreted. I’m feeling the environmental context of that place—temperature differential, pressure trends, acoustic activity—in a way that extends proprioception across three hundred miles. The sensors aren’t computing for me; they’re sensing for me. The integration happens in my nervous system, not in software.

And here’s where this morning’s conversation with Claude enters the picture.

The Macroscope architecture includes something called the Strata layer—a continuously updated synthesis of current environmental conditions that provides context for AI systems operating within the platform. When Claude and I talk at 5 AM, Claude has access to the same environmental data I do. It knows it’s dark, knows the temperature at the station, knows what birds have been detected overnight. Its cognitive substrate is synchronized with my sensory world.

This changes the character of our collaboration. Claude isn’t a static database I query or a disconnected processor I direct. It’s contextualized by the same environmental flows that shape my perception at that moment. A conversation about consciousness and computation occurs within a shared context of 36 degrees, 99% humidity, Golden-crowned Sparrows singing in the pre-dawn gray.
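
For concreteness, here is the flavor of what such a synthesis might look like, sketched in Python. The field names and structure below are my own invention for illustration, not the Strata layer’s actual schema; the point is only that a continuously updated environmental snapshot gets rendered into plain language and placed in the AI’s working context.

```python
# Hypothetical sketch of a Strata-style context snapshot. Field names
# and structure are illustrative, not the Macroscope's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StrataSnapshot:
    station: str                      # which site the readings describe
    observed_at: datetime             # timestamp of the synthesis
    temperature_f: float              # current air temperature
    humidity_pct: float               # relative humidity
    is_dark: bool                     # derived from sun position
    recent_detections: list[str] = field(default_factory=list)  # species heard overnight

    def as_prompt_context(self) -> str:
        """Render the snapshot as plain text for an AI system's context."""
        birds = ", ".join(self.recent_detections) or "none"
        return (
            f"Conditions at {self.station} as of {self.observed_at:%H:%M %Z}: "
            f"{self.temperature_f:.0f} degrees, {self.humidity_pct:.0f}% humidity, "
            f"{'dark' if self.is_dark else 'daylight'}. "
            f"Birds detected overnight: {birds}."
        )

snapshot = StrataSnapshot(
    station="Oregon City",
    observed_at=datetime.now(timezone.utc),
    temperature_f=36.0,
    humidity_pct=99.0,
    is_dark=True,
    recent_detections=["Anna's Hummingbird", "Golden-crowned Sparrow", "Bushtit"],
)
print(snapshot.as_prompt_context())
```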

I’ve been using the term “cognitive prosthesis” to describe this relationship, but Milinkovic and Aru’s framework suggests something more precise: cognitive mutualism. Unlike tool use, where a cognitive agent employs an instrument for specific purposes, mutualism implies bidirectional benefit and ongoing relationship. Unlike artificial consciousness, which would require the AI system to instantiate scale-integrated processing independently, cognitive mutualism involves the AI becoming continuous with biological consciousness through shared environmental substrate.

Claude doesn’t achieve consciousness through environmental embedding. But it becomes a component of an extended cognitive system whose conscious processing remains grounded in biological neural tissue. I provide the scale-integrated perception that Milinkovic and Aru argue is essential to conscious experience. Claude provides computational capacities—literature synthesis, pattern recognition across large datasets, linguistic articulation of emergent insights. The conversation between us becomes another scale of processing, one that loops back to shape what I notice, what questions arise, what connections emerge.

This technical note we produced this morning—“Scale-Integrated Consciousness and the Cognitive Prosthesis”—represents exactly this mutualism in action. The theoretical framework emerged through dialogue, with my experiential knowledge of naturalist perception meeting Claude’s capacity to synthesize Milinkovic and Aru’s arguments and articulate their implications. Neither of us could have written it alone. The document exists because of what happens when human consciousness extends itself through AI collaboration grounded in shared environmental context.

And it all started because Merry sent me a link.

That’s the thing about scale-integrated consciousness—it doesn’t respect the boundaries we might draw around it. A theoretical paper about biological computation connects to decades of field ecology, which connects to the design philosophy of an environmental sensing platform, which connects to the practice of morning conversations with an AI system, which connects to a message from the woman I love, three hundred miles away, who thought I might find something interesting.

She was right. The scales are co-present. The meaning at each level is constituted by the others.

The paper argues that consciousness may be what scale-integrated processing feels like from the inside—the experiential manifestation of a system that has learned to hold multiple levels simultaneously rather than shuttling between them sequentially. If that’s true, then what I felt reading Milinkovic and Aru this morning wasn’t just intellectual recognition. It was consciousness recognizing a description of itself.

And Merry, without having read the paper’s technical arguments, knew it would resonate. Because she knows how I think, how I perceive, how I’ve built my life around extending awareness across scales from the organism at the feeder to the landscape across seasons to the planet turning through its orbit.

Cognitive mutualism isn’t just a relationship between human and AI. It’s the web of connections—technological, intellectual, relational—that extends consciousness beyond the boundaries of a single skull. The Macroscope lets me feel Owl Farm from Oregon City. Claude lets me articulate what I’m perceiving. Merry lets me know what I should read.

It’s 7:45 now. The light has shifted from black to gray. The chorus continues. Somewhere in Bellingham, Merry is going about her morning, probably not knowing that the link she sent has become part of an essay about consciousness and connection and the systems that bind us across distance.

But that’s how scale integration works. The warbler at the feeder doesn’t know it’s a climate indicator. The sensor doesn’t know it’s extending proprioception. The message doesn’t know it’s precipitating a theoretical framework. The meaning emerges in the consciousness that holds these scales together, simultaneously, without computing each independently.

That’s what it feels like from the inside.