Humble but Opportunistic: What Moss Taught Me About Consciousness and Mathematics
This morning’s coffee comes with two ambitious essays by Herbert Harris that arrived in my inbox: one claiming mathematics emerges from recursive self-modeling in embodied minds, the other proposing we can build AI systems with genuine explainable consciousness through social apprenticeship. Both arguments rest on the premise that if we understand the right architectural components and developmental processes, we can engineer—and recognize—the emergence of something profound.
I found myself thinking about moss.
Not as metaphor, but as counterexample. For over two decades, I’ve tried to instrument complex ecological systems, and what Harris confidently asserts about consciousness and mathematics runs headlong into what I’ve learned about the gap between measurement and mechanism. That gap, I’ve come to believe, is infinite. But as I often joke with my students, that’s job security—because we’re never done.
Harris’s central claim about mathematics is elegant: that mathematical concepts emerge when “embodied minds model their own modeling.” He points to Homotopy Type Theory, which treats equality as transformation rather than static identity, arguing this formalization “unintentionally reflects” how our minds actually work. When mathematicians redefine equality as a path of continuous transformation, Harris suggests, they’re formalizing what human cognition already does naturally—experiencing sameness-with-difference through the lens of embodied, recursive self-awareness.
It’s a provocative synthesis of Karl Friston’s active inference, developmental psychology, and mathematical foundations. And it might even be true. But here’s what troubles me: where’s the species-specific calibration? Where are the boundary conditions? Where does the pattern break down?
In 2006, my colleagues and I published a paper on using networked digital cameras to estimate photosynthesis in *Tortula princeps*, a desiccation-tolerant moss. We found that the green-to-red pixel ratio correlated with CO2 uptake under certain conditions. It was genuinely useful work—we could estimate carbon gain around precipitation events, track phenological cycles. We found a pattern, calibrated it carefully, acknowledged its limitations, and called it good.
But here’s what the paper actually says: “Using the green:red ratio of field images and otherwise assuming ideal conditions…” Those assumptions—saturating light, no photoinhibition, uniform drying, thallus temperature equal to air temperature—were doing enormous work. And even with those assumptions, the correlation broke down at critical boundaries. When CO2 uptake approached zero, color ratio became “a less accurate predictor.” Why? Because “samples of *T. princeps* did not dry uniformly.”
We had to discard field images when direct sunlight saturated pixels. JPEG compression reduced extractable information. Spectral changes in ambient light required time-dependent normalization. The correlation was species-specific and required empirical calibration.
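The logic of that pipeline can be sketched in a few lines. This is a minimal illustration, not the actual 2006 workflow: the function names are mine, the calibration coefficients are placeholders, and a real calibration would be fitted per species against lab gas-exchange measurements—exactly the species-specific, empirically calibrated step the paper required.

```python
import numpy as np

def green_red_ratio(rgb):
    """Mean green:red pixel ratio of an RGB image array of shape (H, W, 3).

    Discards saturated pixels (any channel at its maximum), mirroring the
    field protocol of dropping images where direct sunlight saturated pixels.
    """
    rgb = np.asarray(rgb, dtype=float)
    saturated = (rgb >= 255).any(axis=-1)
    valid = ~saturated & (rgb[..., 0] > 0)  # avoid divide-by-zero on red
    if not valid.any():
        return float("nan")
    return float(np.mean(rgb[valid, 1] / rgb[valid, 0]))

def estimate_co2_uptake(ratio, a=1.0, b=-1.0):
    """Hypothetical linear calibration; a and b are illustrative only.

    In practice they would be fitted against direct lab measurements,
    and the estimate holds only near the conditions of that calibration.
    """
    return a * ratio + b

# Toy image: a greener (hydrated-looking) patch next to a drier one.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 100                      # red channel
img[..., 1] = [[130, 130], [90, 90]]   # green channel
print(round(green_red_ratio(img), 2))  # mean of 1.3, 1.3, 0.9, 0.9 -> 1.1
```

The important point is what the sketch leaves out: the time-dependent normalization for ambient light, the JPEG-compression losses, and the non-uniform drying that made the ratio unreliable near zero uptake.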
This was for a relatively simple biological system where we understood the basic physiology, could measure the phenomenon directly in the lab, had physical access to samples, and were tracking a straightforward state change between hydrated and dry. Even then, we ended up with proxy measurements that required careful validation and broke down in ways we couldn’t fully explain.
Harris is proposing something far more ambitious: that we can verify the emergence of consciousness in AI systems by observing behavioral markers—giving reasons, modeling others’ perspectives, showing apparent self-reflection. But I’ve spent decades learning that even when you have direct measurement access, validated lab methods, and relatively simple organisms, the mechanism connecting pattern to process remains partially opaque.
Consider his Humanistic AI proposal: build systems with active inference mechanisms, theory of mind capabilities, and recursive self-modeling architectures. Put them through sustained social apprenticeship with human mentors. The result, Harris claims, will be AI that can genuinely explain itself—not through post-hoc rationalization, but through participation in what philosophers call “the space of reasons.”
The key components, he writes, “are already in place.” What’s missing is “not the technology but the developmental framework.”
That phrase—“not the technology but the developmental framework”—does too much work. It’s like saying we understand seed germination because we know it requires temperature stratification and precipitation. But what about the invasive earthworms that change the microhabitat? What about the introduced wild pigs that disturb the soil? In ecological systems, there are so many covarying factors that finding patterns is hit or miss. And consciousness—if Harris is right about its dependence on social construction and recursive modeling—is surely at least as complex as an ecosystem.
My mentor at UC Riverside, John Moore, spent seven years in the 1980s creating his “Science as a Way of Knowing” series. Note that phrase: not “Science as Knowledge” but as a “Way of Knowing.” Not product but process. Not finished but ongoing. Moore taught biology instructors how to teach by emphasizing the questions people have asked through the ages and the ways they sought answers—showing the work, the false starts, the gradual refinement.
Harris presents conclusions without showing comparable empirical work. He observes that HoTT treats equality as transformation and concludes this “reflects” embodied cognition. But that’s claiming mechanism from pattern without the validation we’d require in field biology. It’s like concluding that because moss color changes with hydration, moss evolved spectral signaling to communicate metabolic state. Maybe! But you’d need evidence beyond the correlation.
The confidence gradient troubles me. I’m studying relatively simple organisms with well-understood physiology and access to lab measurements, and my conclusion is that the gap between what I can measure and what I truly know is infinite. Harris is studying the origins of human mathematical cognition and consciousness, and his conclusion is that “mathematics is the natural expression of the brain’s recursive, embodied intelligence.” The epistemic difficulty suggests the confidence should run the other way.
I don’t mean to be entirely dismissive. Harris is drawing on legitimate research—embodied cognition scholars like Lakoff and Núñez, Friston’s active inference framework, developmental psychology of self-consciousness. The idea that equality-as-transformation in HoTT might relate to how we actually think about sameness is genuinely interesting. His proposal for socially embedded AI development addresses real problems in current approaches.
But interesting isn’t the same as validated.
In our moss paper, we were careful to distinguish between what we measured (spectral reflectance, color ratios), what we inferred (photosynthetic capacity), and what remained uncertain (non-uniform drying, microstructural effects). We concluded that *T. princeps* could be used for “simple field estimations”—not precise measurements, not complete understanding, but simple estimations sufficient for the research questions at hand.
Harris concludes that mathematics “is not an alien abstraction but an embodied discipline, mirroring the very dynamics that make experience possible.” That’s not a simple field estimation—that’s a grand unified theory of mathematical ontology.
The humility I learned from John Moore and from decades of field work is this: emergence is elusive, bordering on mythological in ecology, because there are rarely cause-and-effect pathways that aren’t interconnected with unknowns yet to be quantified. Traditional field measurements are observational—we compare visually and count quantitatively using sampling statistics. Is that enough to derive the inner workings of complex interactions with driving functions that are external and themselves changing in unpredictable ways?
I’ve come to think it isn’t. Not for ecosystems, probably not for consciousness, maybe not even for mathematics.
When we found that moss color predicted photosynthetic capacity, we didn’t claim to understand the mechanism fully. We found a useful correlation, calibrated it carefully, specified the conditions under which it held, acknowledged where it broke down, and offered it as a tool for further inquiry. That’s humble but opportunistic science—using what you find even when the mechanism remains unclear, because it’s useful for the questions at hand.
Harris could take a similar approach. “Here’s an interesting framework for thinking about mathematics and consciousness. It might illuminate some patterns.” Instead, he makes strong ontological claims about the nature of mathematical thought and the possibility of engineering genuine AI consciousness.
The moss work taught me something else: pattern recognition isn’t understanding. We could predict when *T. princeps* would photosynthesize based on color changes after precipitation. But “samples did not dry uniformly” was an admission that something was happening at scales or in ways we weren’t fully capturing. The pattern was useful for estimation. Mechanistic understanding remained incomplete.
Harris needs not just pattern recognition but mechanistic verification of consciousness. He needs to show not just that AI systems can produce behaviors that look like consciousness (giving reasons, modeling perspectives, showing self-reflection), but that these behaviors arise from the cognitive architecture he hypothesizes rather than from sophisticated pattern-matching that has learned what responses satisfy human evaluators.
How would you test that? What measurements would validate it? With the moss, at least we could bring samples to the lab, measure gas exchange directly, characterize spectral reflectance, verify our correlations under controlled conditions. For AI consciousness, we don’t even have agreement on what would constitute evidence.
This morning’s coffee conversation with Claude—yes, I’m aware of the irony—reminded me why Moore’s “Science as a Way of Knowing” remains precious to me. Moore understood that science isn’t about reaching endpoints but about walking a path of ongoing inquiry. He taught that showing how you came to know what you provisionally know matters more than presenting grand conclusions. He demonstrated that you could be deeply knowledgeable while remaining aware of how much remains unknown.
“Humble but opportunistic” isn’t just a joke about job security. It’s a philosophy of science forged through decades of trying to instrument the unruly complexity of living systems. Use the patterns you find. Acknowledge their limitations. Keep asking questions. Never mistake provisional understanding for complete knowledge. And when someone presents unified theories without showing the empirical work—without the “way” of knowing, just the claimed knowledge—ask them to show how they know what they claim to know.
I’ll keep watching my moss, tracking those color changes after the rain, estimating carbon gain with all the careful caveats our 2006 paper specified. It’s simple field estimation, not complete understanding. But it’s honest work, and it moves the questions forward.
The gap between measurement and understanding may be infinite, but the process of inquiry is what gives scientific work its meaning. Not the endpoints we reach, but the path we walk. Not the knowledge we claim, but the way of knowing we practice.
That’s what Moore taught me. That’s what the moss keeps teaching me. And that’s what I find missing in Harris’s confident assertions about the embodied emergence of mathematics and the engineerable achievement of AI consciousness.
Science is a path, not an endpoint. Show me your path, and I’ll show you mine. But don’t confuse the pattern you’ve found with the mechanism you’ve understood, or the correlation you’ve observed with the causation you’ve proven. That’s not humility. That’s not even good science.
It’s just pattern-matching dressed up as understanding, and my moss has taught me better than that.
References
- Harris, H. (2025). “An Embodied Mathematics.” *3 Quarks Daily*. https://3quarksdaily.com/3quarksdaily/2025/11/an-embodied-mathematics.html
- Harris, H. (2025). “The Alien Mirror: Humanizing Artificial Intelligence.” *3 Quarks Daily*. https://3quarksdaily.com/3quarksdaily/2025/10/the-alien-mirror-humanizing-artificial-intelligence.html
- Graham, E.A., Hamilton, M.P., Mishler, B.D., Rundel, P.W., & Hansen, M.H. (2006). “Use of a Networked Digital Camera to Estimate Net CO2 Uptake of a Desiccation-Tolerant Moss.” *International Journal of Plant Sciences*, 167(4), 751-758.
- Moore, J.A. (1993). *Science as a Way of Knowing: The Foundations of Modern Biology*. Harvard University Press.