This morning I read a policy paper in Science about neural organoids—pea-sized clusters of human brain cells grown in laboratories that now contain one to two million neurons, roughly equivalent to a honey bee brain. The authors, a distinguished consortium of neuroscientists, ethicists, and philosophers, are wrestling with a question that has no clean answer: at what point do these constructs deserve moral consideration? When do they become conscious? When might they suffer?

The paper calls for a continuing international process to monitor this rapidly progressing field. The proposed governance framework would watch for threshold crossings—moments when organoids become large enough, complex enough, or integrated enough to warrant new protections. The implicit assumption is that such a threshold exists and can be identified.

I suspect the threshold they're looking for doesn't exist.

Not because consciousness is simple, but because it may be a gradient all the way down. The honey bee comparison in the paper quietly acknowledges this problem. If we're increasingly recognizing possible sentience in bees, whose neuron count current organoids already match, then we're not looking for a line to cross. We're already somewhere on a slope we don't have the instrumentation to measure.

Last night I watched Mindwalk, the 1990 film based on Fritjof Capra's The Turning Point. Three characters—a physicist, a politician, and a poet—walk the corridors of Mont Saint-Michel arguing about exactly these questions. Capra's thesis, dramatized as dialogue, is that Western science inherited a set of false dichotomies from Descartes and Newton: mind versus matter, observer versus observed, part versus whole. These binaries feel natural because our grammar makes them easy to articulate.

The organoid researchers are caught in the same grammatical trap. "Is it conscious or not?" assumes consciousness is a toggle switch. But every line of evidence from neuroscience, from anesthesiology, from comparative cognition, suggests otherwise. Consciousness appears to be a continuum of states, each involving different kinds of brain functioning. We can measure transitions along this gradient using drug-induced states. What we cannot do is locate a threshold where "not conscious" becomes "conscious."

This realization has been building across multiple domains simultaneously. A Cambridge philosopher, Tom McClelland, recently argued that agnosticism about AI consciousness may be the only defensible stance—not as intellectual cowardice but as honest acknowledgment that we lack reliable methods to detect awareness in systems architecturally different from biological brains. The problem isn't insufficient data. The problem is that consciousness may not be the kind of thing that admits binary classification.

I've encountered this pattern before.

In 1997, I stood in the San Jacinto Mountains counting growth rings on recently felled trees—250-year-old Jeffrey pines and incense cedars logged by the Forest Service under the rationale of removing "decadent" and "overmature" specimens. The foresters operated with binary categories: trees were either healthy or declining, vigorous or stagnant, commercially valuable or expendable. What they couldn't see was that the old-growth forest was an integrated system. Those "decadent" trees hosted Spotted Owls and Flammulated Owls in their cavities. Flying squirrels fed on the mycorrhizal fungi covering ancient root systems. Rubber boas and ensatina salamanders depended on the accumulated debris of centuries.

The question "is this tree healthy?" was malformed. Health existed on a gradient, and the gradient included the entire ecosystem—the owls, the fungi, the salamanders, the soil structure that only centuries of forest development could produce. Cutting the oldest trees didn't improve forest health. It eliminated conditions that couldn't be recreated on any human timescale.

I spent years writing letters and attending meetings, trying to articulate what Christopher Stone had proposed in his 1972 essay "Should Trees Have Standing?"—that natural systems might deserve legal consideration as entities with interests of their own. The courts weren't ready. The grammar of law, like the grammar of science, demanded binary categories: legal person or property, rights-bearing or not. The gradient didn't fit.

Now the same question returns with neural organoids. Should they have standing? At what point? Under what conditions? The researchers asking these questions are sincere and thoughtful. But they may be searching for a line that doesn't exist, using categories that don't map onto the underlying reality.

Hamlet asked "to be or not to be?" as if existence were binary. But the soliloquy isn't really about the toggle switch between life and death. It's about the quality of being, the degree of presence, whether existence at low amplitude is worth the signal noise of continuing. Hamlet is already somewhere on a gradient—diminished by grief, dissociated by trauma, partially present.

Even the neuron presents an illusion of binary operation. Action potentials fire or they don't, all-or-nothing above threshold. But that apparent digital signal emerges from analog integration: thousands of graded inputs summing across dendritic trees, probabilistic vesicle release, variable synaptic weights, temporal summation windows. The "decision" to fire is already a continuum collapsed into an event. To fire or not to fire is Shakespeare's question translated into synapsean—and equally misleading.
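The collapse of a continuum into an event can be made concrete. Below is a toy sketch in Python of a leaky integrate-and-fire unit—not a biophysical model of any real neuron, and every number in it is illustrative, chosen only to show the shape of the idea: graded analog inputs sum and decay continuously, yet the output is a train of all-or-nothing spikes.

```python
# Toy sketch: analog summation collapsing into binary "spikes".
# Not a model of a real neuron; all parameters are illustrative.
import random


def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Accumulate graded inputs with leaky decay; emit 1 when the
    potential crosses threshold (then reset), else 0."""
    potential = 0.0
    spikes = []
    for weight in inputs:
        potential = potential * leak + weight  # continuous integration
        if potential >= threshold:
            spikes.append(1)   # the apparently digital event
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes


random.seed(0)
graded_inputs = [random.uniform(0.0, 0.4) for _ in range(20)]
print(integrate_and_fire(graded_inputs))
```

The spike train looks binary, but whether any given input produces a 1 or a 0 depends entirely on the accumulated analog history—which is the point the paragraph above is making.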

If binaries don't work, what does?

I find myself returning to E.F. Schumacher's phrase "appropriate scale." His insight in Small Is Beautiful was that the growth-versus-no-growth debate was itself malformed. The real question wasn't whether to grow but what scale of activity serves human flourishing and ecological integrity. The answer is architectural, not categorical.

My own life has been an experiment in this principle. For several years, my daughter and I adopted veganism—not because we located a threshold that made animal consumption impermissible, but because we wanted to take the gradient seriously. We learned what we needed to learn: the importance of nutritional awareness, the ecological weight of food systems, the possibility of living more lightly. Eventually we stepped back from strict veganism, not because we'd found a line that justified crossing it, but because appropriate scale turned out to mean something different than categorical prohibition. Now I eat plant-forward, buy from local farmers markets, and think of food choices as design problems rather than moral binaries.

This is the governance model the organoid researchers haven't yet articulated. They're asking "is it conscious?" when the better question might be "what practices would honor the degree of neural integration present in this system?" The answer probably isn't binary prohibition or binary permission. It's something more like food architecture: acknowledge the gradient, design for appropriate scale, stay in relationship with what you're affecting.

Vince Gilligan's Apple TV series Pluribus dramatizes exactly this tension. After becoming "weary of writing bad guys" following Breaking Bad and Better Call Saul, Gilligan created a world where an alien virus transforms humanity into a peaceful, content hive mind. The collective claims it cannot harm any living thing—yet survives on "Human Derived Protein," consuming the dead. The protagonist Carol sees what the hive cannot: billions of bodies remain alive, but every individual human is gone. If everyone shares the same mind and thoughts, no one exists.

The show inverts the organoid question. Instead of asking "when does something become conscious?" it asks "when does individuality cease even if consciousness persists?" Critics interpreted Pluribus as an allegory for AI, though Gilligan conceived it years before large language models entered public awareness. The parallel emerged anyway because the structural problem is the same: what happens when many become one?

Existence requires metabolism. Metabolism requires incorporating other organized matter. Even plants compete for light. Even cells undergo programmed death. The question isn't whether to participate in the cycle but how to do so with awareness, appropriate scale, and honest acknowledgment of what we're doing.

Professor Len Troncale, who died last year, spent his career mapping what he called "linkage propositions"—the connections between concepts across domains that reveal underlying structural patterns. He taught me to see that the boundaries between disciplines are administrative conveniences, not ontological realities. A biologist and a physicist and an ethicist may be studying the same patterns, just instantiated in different substrates.

The gradient problem is one such pattern. Whether we're asking about consciousness in organoids, legal standing for ecosystems, the self-versus-community question in psychology, or the observer-observed split in physics, we keep encountering the same structural issue: our categories assume boundaries that the underlying reality doesn't contain.

The response isn't to locate thresholds that don't exist. It's to design architectures—scientific, legal, ethical, personal—that honor gradients. To ask not "where is the line?" but "what practices acknowledge the continuity we actually inhabit?"

The organoid researchers at Asilomar last fall chose that location deliberately. Fifty years earlier, another group met there to hash out guidelines for genetic engineering. They were looking for lines too. Perhaps this generation can do something more sophisticated: build frameworks that work without them.