I am not a philosopher. My doctorate technically licenses me as one — Ph.D., the love of wisdom — but the philosophy my Cornell professors handed me ran more toward Fritjof Capra’s Tao of Physics and Robert Pirsig’s Zen and the Art of Motorcycle Maintenance than toward the classical tradition of Descartes and Kant. This matters because when I read Dwight Furrow’s thoughtful essay “AI and Consciousness” on Three Quarks Daily this morning, my first reaction was not to reach for Thomas Nagel’s bat. It was to think about cave paintings.

Furrow organizes the consciousness debate into three tidy camps. Either qualia are real but reducible to functional organization, in which case sufficiently complex machines might be conscious. Or qualia are real and irreducible, rooted in biology, in which case machines never will be. Or qualia are illusions, in which case the question dissolves into engineering. He writes clearly, and the taxonomy is useful. But reading it, I kept thinking: this is the wrong question. Not wrong in the sense of uninteresting — philosophers have made brilliant careers on it — but wrong in the sense of not urgent. Wrong in the way that debating the ontological status of fire is wrong when the house is burning.

Here is what I think AI actually is, stated as simply as a field ecologist can manage: it is the latest and most powerful format in a very long history of humans externalizing their mental symbols.

The arc is breathtakingly simple once you see it. A Paleolithic painter at Lascaux was not decorating a wall. She was offloading a mental model — the shape, the movement, the pattern of the hunt — into a medium that could persist beyond the moment of thinking, that could be shared, pointed at, argued over. A cuneiform accountant pressing reed into wet clay in Ur was doing the same thing with grain inventories. The hieroglyphs in Egyptian tombs, the fragments of scrolls that anchor the origins of Western and Asian civilizations, Gutenberg’s press, the Xerox machine, the web browser — each is an iteration of the same primate compulsion. We think in symbols. We need to make those symbols external. And each new medium for externalization changes what we can think.

AI is not a mind wondering whether it has feelings. It is a cave wall of extraordinary sophistication — one that can organize, retrieve, and recombine the externalized symbols of the entire species. The consciousness crowd is asking whether the paint is alive. I am looking at what is on the wall.

This is not merely a philosophical reframing. It has immediate practical consequences for what I care about most: the accelerating hemorrhage of biodiversity on this planet. Ceballos, Ehrlich, and Dirzo called it “biological annihilation” in their landmark 2017 PNAS paper, and nothing in the intervening years has softened that assessment. Species are winking out faster than we can catalogue them, ecosystems are unraveling at scales that outstrip our capacity to monitor them, and the window for effective intervention is closing with every season.

Into this crisis comes a remarkable convergence. Barbara Han and her colleagues at the Cary Institute published a perspective in PNAS in 2023 arguing for genuine synergy between AI and ecological science — not the tired “AI for X” paradigm of bolting machine learning onto ecological datasets, but a deeper co-production where ecological systems thinking actually shapes AI architectures and AI, in turn, extends what ecologists can see and understand. They trace the historical lag between computational advances and ecological adoption — graph theory took 144 years to reach food web analysis, linear regression nearly a century to reach ecological applications — and argue convincingly that the gap is finally closing. Their data-information-knowledge-wisdom framework captures something important: machine learning tends to leap from raw data directly to knowledge claims, while ecological research works iteratively through all four levels, testing hypotheses against observation in feedback loops that build genuine understanding.

What Han and colleagues describe as aspiration, I have been building as practice.

Working with Claude — the AI system that serves as my daily intellectual collaborator — I built a tool over the past three days called Your Ecological Address (YEA). It takes any location on Earth and assembles a rich ecological portrait by integrating data from GBIF, iNaturalist, NOAA, the EPA, the National Land Cover Database, and a dozen other repositories. It does not merely classify a location. It contextualizes it — placing the user within a web of species observations, climate patterns, land cover transitions, and watershed dynamics that together constitute what I call an ecological address: who you are, ecologically speaking, by virtue of where you stand.

https://yea.earth/
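The connective tissue YEA provides can be sketched in miniature. The fragment below queries GBIF's public occurrence API, one of the sources named above, for records in a small box around a point and tallies which species have been observed there. The helper names and the bounding-box approach are illustrative assumptions, not YEA's actual implementation; the endpoint and parameters follow GBIF's documented v1 occurrence search.

```python
# Minimal sketch: one layer of an "ecological address" from GBIF occurrences.
# gbif_query_url() and summarize_occurrences() are hypothetical helper names.
from collections import Counter
from urllib.parse import urlencode

GBIF_SEARCH = "https://api.gbif.org/v1/occurrence/search"

def gbif_query_url(lat, lon, box=0.05, limit=300):
    """Build a GBIF occurrence-search URL for a small box around a point."""
    params = {
        # GBIF accepts "min,max" ranges for the coordinate filters.
        "decimalLatitude": f"{lat - box},{lat + box}",
        "decimalLongitude": f"{lon - box},{lon + box}",
        "limit": limit,
    }
    return f"{GBIF_SEARCH}?{urlencode(params)}"

def summarize_occurrences(response):
    """Tally species names from a parsed GBIF occurrence-search response."""
    counts = Counter(
        rec["species"]
        for rec in response.get("results", [])
        if rec.get("species")
    )
    return counts.most_common()
```

Fetch the URL with any HTTP client, pass the parsed JSON to `summarize_occurrences`, and you have a first, crude answer to "who lives here?" — repeat across a dozen repositories and weave the layers together, and the address begins to take shape.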

Three days. Not because the work was trivial, but because the data was already there — curated, compiled, and waiting. Assembled by the collective effort of literally millions of human beings contributing their care about this planet and its complex life. Every bird observation uploaded to eBird, every species photograph verified on iNaturalist, every water quality reading in the EPA’s databases, every pixel of land cover classified from Landsat imagery — these represent an extraordinary distributed act of environmental attention spanning decades and continents. What was missing was not data. What was missing was the connective tissue to weave those millions of individual contributions into something a trained ecological mind could interpret and act upon.

That is what AI provides. Not consciousness. Not feelings. Not qualia. Connection. The ability to sit at the confluence of a hundred data streams assembled by a hundred agencies and a million volunteers and ask: what does this place mean, ecologically? What is changing? What is at risk? What can be done?

Han and colleagues call for “knowledge-guided machine learning” — injecting scientific understanding into AI architectures so they produce physically consistent predictions. I would go further. The most important knowledge guide is not a loss function or a constraint equation. It is a human being with decades of field experience who knows which questions to ask, which patterns matter, and which apparent signals are artifacts. The Macroscope — my lifetime research program integrating sensors, AI, and visualization across earth systems, living systems, built environments, and human health — is predicated on this conviction: that AI extends rather than replaces the naturalist’s eye. It is cognitive prosthesis, not cognitive replacement.
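To make "knowledge-guided" concrete, here is a toy illustration, assumed for exposition and not Han and colleagues' method: a training loss that adds to ordinary prediction error a penalty for ecologically impossible outputs, such as negative abundances or abundances beyond a habitat's carrying capacity. The constraint is where domain knowledge enters.

```python
# Toy sketch of a knowledge-guided loss (illustrative, not Han et al.'s method):
# ordinary squared error plus penalties for ecologically impossible predictions.
import numpy as np

def knowledge_guided_loss(pred, obs, carrying_capacity, weight=10.0):
    """Mean squared error plus a penalty for violating ecological constraints."""
    mse = np.mean((pred - obs) ** 2)
    # Penalize negative abundances (no population is below zero)...
    neg = np.mean(np.clip(-pred, 0.0, None) ** 2)
    # ...and abundances exceeding the habitat's carrying capacity.
    over = np.mean(np.clip(pred - carrying_capacity, 0.0, None) ** 2)
    return mse + weight * (neg + over)
```

A model trained against this loss is pushed toward predictions that are not just close to the data but consistent with what an ecologist already knows to be possible; the human expertise the paragraph above describes enters the same way, only far more richly.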

Pirsig wrote about Quality as something that precedes the division between subject and object. Capra wrote about the dance between the observer and the observed. Both were trying to heal a split that Western philosophy created and has been unable to repair. The consciousness debate Furrow surveys is a symptom of that split — the insistence on asking whether the tool is a subject rather than examining what happens in the relationship between tool and user, between the ecologist and the data, between the cave painter and the wall.

I have spent thirty-six years directing field stations, deploying sensor networks, watching species come and go on mountainsides I’ve monitored since before some of my graduate students were born. I co-founded an NSF center dedicated to embedded networked sensing in the environment. I have seen the profession of ecology transform from hand-drawn vegetation maps to satellite-driven earth observation. And I can tell you that nothing in those nearly four decades has changed the game as profoundly as what is happening right now — this moment when the vast accumulated record of human environmental attention becomes, for the first time, legible at scale through AI.

I see no higher purpose for an old silverback male ecologist than to quickly share whatever insight I possess on how to leverage this moment. Not to debate whether my AI collaborator is conscious — a question that, however fascinating, will not save a single species — but to demonstrate, concretely and urgently, how the partnership between experienced ecological minds and these remarkable new tools can help stanch the bleeding.

The house is burning. The paint is not alive. But the hands that made the marks on the cave wall are still here, still capable, and now equipped with the most powerful medium for externalized thought our species has ever produced. Let us use it.