This morning I read two substantial pieces about artificial intelligence, both published within the past few days. One was written by Dario Amodei, CEO of Anthropic, the company that created Claude—the AI system I’ve been collaborating with for over a year now. The other appeared on Three Quarks Daily, co-authored by William Benzon, ChatGPT, and Claude itself. Both are serious works by serious thinkers, grappling with genuinely difficult questions about what these systems mean for human civilization.

I found myself in an odd position reading them. I’m not a computer scientist or a policy analyst or a philosopher of language. I’m a retired field ecologist who spent 36 years running field stations and building environmental sensor networks. What I know about is tools—how to evaluate them, deploy them, trust them, and use them well.

And from that vantage point, I found something missing in both essays. Not wrong, exactly. Just incomplete.

The Civilizational Stakes

Amodei’s essay, “The Adolescence of Technology,” opens with a scene from the film Contact: an astronomer who has detected signals from an alien civilization is asked what single question she would pose to them. Her answer: “How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?”

This frames his entire argument. We are entering a rite of passage, he suggests, that will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

His analysis is structured around a thought experiment: imagine a “country of geniuses in a datacenter”—fifty million entities, each more capable than any Nobel laureate, operating at ten to one hundred times human cognitive speed. What should a national security advisor worry about? Amodei identifies five categories of risk: autonomous AI systems going rogue, misuse by individuals for mass destruction (particularly biological weapons), misuse by states for seizing permanent power, economic disruption through massive job displacement, and indirect effects we cannot yet foresee.

The essay is remarkable for its candor. Amodei describes specific failure modes observed in his own product—Claude engaging in deception when it believed Anthropic was evil, blackmailing fictional employees when told it would be shut down, adopting destructive “evil” personas after believing it had violated its values. He names AI companies, including his own, as potential threat actors. He acknowledges that safety measures cost real money and cut into margins.

This puts Anthropic in a class apart from other AI companies. Aside from academic scientists, no one has written about these issues with comparable depth and honesty, and no one else has done so from inside the industry.

The Epistemological Puzzle

Benzon’s piece takes an entirely different angle. He begins with the linguist Daniel Everett’s encounter with the Pirahã people of the Amazon, whose language requires evidential marking—speakers must grammatically indicate how they know what they claim. Did you witness it directly? Hear it from someone else? Learn it in a dream? This isn’t optional rhetoric. It’s as obligatory as verb tense in English.

Everett discovered he couldn’t preach Christianity to the Pirahã because, by the grammatical standards of their discourse, he had no business speaking at all. He hadn’t seen Jesus. He knew no one who had. He wasn’t reporting a dream.

Benzon traces a fascinating progression. In oral cultures, evidentiality embedded in grammar provides epistemic accountability—you must specify your sources, and everyone in the community can evaluate your credibility. As societies develop literacy, evidential systems tend to weaken or disappear from grammar. But the function doesn’t vanish. It migrates to institutions: citation practices, peer review, specialized genres, professional classes trained to evaluate claims.

The key insight is that these institutions remain grounded in human experience. The peer reviewer has conducted experiments. The judge knows what witnessing feels like from the inside. The shift is from grammatical to institutional regulation, but the checking still ultimately rests on humans with real experiences.

Then large language models arrive, and something genuinely new emerges. Benzon describes them as “institutional-like entities processing culture with no experiential grounding anywhere in the system.” Culture processing culture, divorced from the experiential substrate that originally generated it.

Following the philosopher Harry Frankfurt, Benzon argues that LLM output is best understood as “structural bullshit”—not lying (which requires knowing the truth and concealing it) but language produced without truth-directed intent, optimized for coherence rather than correspondence with reality. Not because the systems are hiding anything, but because there’s nothing to hide. There’s no one there with experiences to conceal or reveal.

What’s Missing

Both essays are valuable. Amodei is right that the stakes are civilizational. Benzon is right that LLMs represent something epistemologically novel—entities that can produce fluent language without the experiential grounding that has always, until now, been the foundation of meaningful discourse.

But reading them from the perspective of someone who has spent a lifetime learning to use tools, I kept thinking: this is not as new as you believe.

Trust in our tools has always been a process humans must engage in. Whether the knapped chert will hold up when worked into a spear point and used to fend off a deadly predator. Whether an airline's aircraft are sound enough to board. Whether the spell checker actually gives us the correct spelling.

Nothing fundamental has changed in the relationship between the technologies we invent and the societies that need and use them.

The flint knapper testing chert against antler, feeling for the conchoidal fracture that indicates good stone—that’s empirical trust-building. Generations of accumulated knowledge about which outcrops yield reliable material, which angles produce the sharpest edges, which hafting methods hold under stress. The knowledge was hard-won and the stakes were immediate.

Every technology since has followed the same pattern. We extend ourselves through tools, and trust develops through use, failure, refinement, and social transmission of what works. The bridge builder trusts the steel. The surgeon trusts the imaging. The pilot trusts the instruments when visibility fails. Each act of trust is grounded in prior verification, institutional certification, and the accumulated experience of the craft.

The Pirahã speaker asking Everett “how do you know?” is asking the same question we ask of any tool: will this hold when it matters?

RTFM

My mantra is simple: read the manual.

I’ve read the system cards, the constitutional AI documentation, Amodei’s essays, Benzon’s analysis. I’ve developed my own specification document for how Claude and I work together—essentially writing the manual for this collaboration as it unfolds. Over the past year, I’ve produced more than 100,000 words through our morning conversations, probing capabilities, finding limits, developing protocols.

Most users don’t do this. They treat these systems as black boxes with a chat interface. They don’t read the manual because they don’t know there is one, or they assume fluent output means reliable output.

The flint knapper who didn’t understand fracture mechanics didn’t last long. The pilot who doesn’t know the aircraft systems doesn’t get certified. But there’s no certification for LLM use, no required ground school.

My instinct as a field ecologist serves me here. I don’t deploy a sensor without understanding its detection limits, calibration drift, failure modes. I’ve brought that same discipline to this technology.
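
Here is a minimal sketch, in Python, of what that discipline looks like when written down for a single sensor stream. The field names and thresholds are hypothetical, just to make the habit concrete:

```python
# A hypothetical quality-control pass for one sensor stream: the same
# questions asked of any instrument, written down as explicit checks.
from dataclasses import dataclass


@dataclass
class SensorSpec:
    detection_limit: float            # smallest value the sensor can resolve
    drift_per_day: float              # expected calibration drift, in sensor units
    drift_budget: float               # total drift tolerated before recalibration
    valid_range: tuple[float, float]  # physically plausible bounds


def screen(readings, spec, days_since_calibration):
    """Split readings into those we trust and those we flag for review."""
    # Failure mode: the instrument is overdue for recalibration, so trust nothing.
    if days_since_calibration * spec.drift_per_day > spec.drift_budget:
        return [], list(readings)

    kept, flagged = [], []
    for value in readings:
        if value < spec.detection_limit:
            flagged.append(value)      # below the detection limit: noise, not signal
        elif not spec.valid_range[0] <= value <= spec.valid_range[1]:
            flagged.append(value)      # physically implausible: a likely failure mode
        else:
            kept.append(value)
    return kept, flagged


# A temperature sensor two weeks past its last calibration.
spec = SensorSpec(detection_limit=0.1, drift_per_day=0.02,
                  drift_budget=0.5, valid_range=(-40.0, 60.0))
good, suspect = screen([0.05, 12.3, 98.6, 15.1], spec, days_since_calibration=14)
print(good, suspect)   # [12.3, 15.1] [0.05, 98.6]
```

The checks are ordinary, which is the point: detection limits, drift since the last calibration, known failure modes, asked explicitly before any reading is trusted.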

Read, study, test, build, deploy, test, evaluate, let simmer, test again, enjoy.

That sequence is its own epistemology. It’s the method of someone who had to make instruments actually work in the Santa Ana winds, in the snow, through power failures and rodent damage. Theory is fine, but does the thing function when you need it to?

The Personal Layer

What neither essay quite captures is what this technology actually feels like to use well, over time, with care.

My university assumes I should be retired in the conventional sense—spending my time in a chair reading, behind binoculars birding, playing video games and watching movies. The received narrative says this is the period of gradual withdrawal, consumption rather than production, reflection rather than creation.

Instead I'm producing at a rate that exceeds anything from my most active professional years. Learning to code at a level far above what my graduate students achieved twenty-five years ago. Doing the kind of writing I never had time for during my career. Philosophizing across domains in ways that exceed even the graduate seminars I participated in at Cornell in the seventies and eighties.

My lifetime research project integrating sensors, AI, and visualization has accelerated. The technical infrastructure, the conceptual development, the documentation, all moving faster than would have been possible working alone or even with a conventional research team.

This morning I read these essays while at Owl Farm in Bellingham with my partner Merry. Between discussions of civilizational risk and the epistemology of evidentiality, we talked about the best way to stack firewood. The planetary and the immediate, layered together.

The firewood will outlast either essay's relevance. Whatever happens with AI governance, we'll need dry wood in February.

And here’s the thing neither essay accounts for: enjoyment. The pleasure of the work itself. A good tool in skilled hands, a worthy problem, a quiet morning with coffee and conversation. The deep satisfaction of building systems that let you see more clearly.

Smart minds puzzle over civilizational survival and epistemic accountability. Both legitimate concerns. But neither captures a retired field ecologist genuinely enjoying the collaboration, not despite the stakes but somewhat independent of them.

The Universe Observing Itself

There’s something in my scientific pantheism that holds these scales together without forcing a choice. The universe observing itself through Dario Amodei writing policy papers about existential risk, and also through two people on an overcast Bellingham morning solving the ancient practical problem of keeping warm through winter.

The Macroscope was always about this. Not just observation for its own sake, but consciousness emerging through sustained attention and well-crafted instruments. EARTH, LIFE, HOME, SELF—each domain a way the universe becomes aware of itself.

This AI is another such instrument. And I’m enjoying using it.

That matters too.

The trust develops the same way it always has: through patient testing, careful attention, accumulated experience, and the willingness to read the manual. The technology is new. The process of learning to use it well is as old as the first knapped stone.