Towards an Ethical AI
I am writing this essay with the help of an artificial intelligence whose training data includes books downloaded from pirate libraries, whose source code was accidentally leaked to the world two days ago, and whose technology — despite its creator's objections — was embedded in the targeting system that helped the United States military strike over six thousand targets in Iran in February, including an elementary school full of girls. I am aware of the irony. I am also aware that awareness of irony is not the same thing as absolution, and that complicity acknowledged is still complicity.
This is the condition of working with AI in the spring of 2026. You cannot use these tools without being implicated in what they have done and what has been done with them. The question is not whether you are clean — you are not — but whether you can be honest about the dirt on your hands and still find a direction worth walking.
I have spent decades building instruments for ecological observation, and longer than that advocating for environmental values and the conservation of nature. I am not an AI critic who discovered the technology last Tuesday. I am a field ecologist who has been waiting for something like this my entire career — and who now has to reckon with what "something like this" actually turned out to be.
The Children
On February 28, 2026, in the opening hours of Operation Epic Fury, cruise missiles struck the Shajareh Tayyebeh primary school in Minab, Iran, hitting the building at least twice. Between 175 and 180 people died, most of them girls aged seven to twelve.
Media coverage immediately asked whether Claude, Anthropic's AI, had selected the school as a target. The actual story is both simpler and more damning. The school sat beside a military compound it had once belonged to; the two had been separated years earlier, but the military database was never updated. The targeting software — built by Palantir, with Claude embedded as a document processor — generated hundreds of strike coordinates in the first twenty-four hours, enabling over a thousand strikes in a single day.
Claude did not hallucinate a target. It processed what the database contained. But it processed it at a speed that made human verification a formality rather than a safeguard. Humans have always made mistakes in war — databases go years without updates, buildings get misclassified. What is new is that AI compresses the timeline between the mistake and the missile to nearly nothing. And what makes Minab not just a tragedy but a scandal is that the people whose job it was to catch exactly this kind of error had already been removed. In early 2025, Defense Secretary Hegseth gutted the Pentagon's Civilian Harm Mitigation and Response office, cutting its staff by ninety percent and dismissing civilian protection as a distraction from "lethality." Central Command's harm-review team shrank from ten people to one. By the time AI-assisted targeting was generating hundreds of strike coordinates per day in Iran, there was almost no one left to check.
As Kevin Baker argued in the Guardian, the obsession with whether the AI erred displaced the constitutional question of who authorized this war and the legal question of whether this strike constitutes a war crime. The technology became simultaneously scapegoat and accelerant — distributing responsibility until it belonged to no one.
Speed without oversight is not intelligence. It is negligence at scale.
The Theft
From dead children to stolen books. The difference in magnitude is absolute. But the underlying logic is the same: extraction without accountability, speed without consent.
Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit brought by authors whose books were downloaded from pirate sites and used to train Claude. The settlement works out to roughly three thousand dollars per title — for books that took years to write. Catherynne Valente, in her essay "Blood Money," makes the argument that no legal brief can. The companies took books written about love and used them to build systems that isolate people from human connection. They took books about hope and built what she calls a despair-and-oppression generator. The payment is not restitution. It is a receipt.
Cory Doctorow frames the structural problem: the media companies suing AI firms do not want to stop AI from replacing creative workers — they want to control how it happens and collect licensing fees. It was collective bargaining in the Hollywood writers' strike, not copyright, that secured actual protections. Copyright treats the writer as a small business. Labor rights treat the writer as a worker with collective power. The distinction matters enormously.
The Leak
Two days ago, Anthropic accidentally published the complete source code for Claude Code due to a configuration error. Analysts cataloged nearly two thousand files containing hidden features and internal decisions never meant for public view: a system that strips evidence of AI involvement from employees' public work, a permission classifier internally named YOLO, a mechanism that injects false information to prevent competitors from copying Claude's behavior, and a research mode that disables all safety features at once.
Doctorow's point is not that the code reveals something uniquely sinister, but that when consequential software is proprietary, accidents are the only path to public accountability. And the irony he savors most: Anthropic responded by flooding the internet with copyright takedown demands — the same legal mechanism the company argues should not apply to its own scraping of other people's work. Information flows freely toward capital. It locks down when it flows away.
The Confrontation
These threads converge on the confrontation between Anthropic and the Pentagon. Anthropic drew two red lines: no autonomous weapons, no mass surveillance of Americans. When the Pentagon insisted on unrestricted access, President Trump ordered agencies to stop using Anthropic's technology, and Defense Secretary Hegseth designated the company a "supply chain risk" — a classification normally reserved for foreign adversaries. A federal judge called the designation "Orwellian" and found it was likely retaliation for protected speech.
Meanwhile, the military continued using Claude in Iran through contractor relationships. The entity attempting to impose ethical limits was punished by the state, while the technology operated without those limits in a theater of war. The red lines existed on paper. In Minab, they did not exist at all.
The Response
I wrote recently to the Anthropic Institute, pointing out an absence in their research agenda: no ecological dimension, no accounting for the metabolic cost of AI infrastructure, no program studying how AI could serve the ecosystems that human society depends on.
But diagnosis without prescription is just eloquent despair. So today, I built something different.
A project called the Common Pile has assembled a massive text corpus built exclusively from public-domain and openly licensed sources, with provenance tracked for every component. From it, researchers trained a model called Comma — smaller and less powerful than the commercial giants, but trained entirely on data whose origins are documented and whose use is authorized. A small, honest language engine that knows where its knowledge came from.
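That provenance claim is concrete, not rhetorical: each document in the corpus carries metadata recording where it came from and under what license. Here is a minimal sketch of inspecting it, assuming the corpus is published as a Hugging Face dataset under the common-pile organization; the repository name and field names are my guesses, not confirmed schema.

```python
# Minimal sketch: stream a single record from the Common Pile corpus and
# inspect its provenance metadata. The repository name and field names
# below are assumptions, not confirmed schema.
from datasets import load_dataset

ds = load_dataset(
    "common-pile/comma_v0.1_training_dataset",  # assumed dataset id
    split="train",
    streaming=True,  # avoid downloading the full corpus
)

record = next(iter(ds))
print(record.get("text", "")[:200])  # the document itself
# Everything that is not the text should be provenance: source, license, etc.
print({k: v for k, v in record.items() if k != "text"})
```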
This afternoon, I installed Comma on my laptop. It runs entirely on my own hardware — no cloud, no data center. I connected it to my ecological monitoring data: weather stations, biodiversity sensors, air quality instruments. Then I asked it to summarize the environmental conditions around my laboratory.
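For anyone who wants to reproduce the experiment, the whole thing fits in a page of Python. What follows is a minimal sketch rather than my exact script: the Hugging Face model id is the one I believe the project published, the sensor values are illustrative stand-ins for my own instruments, and because Comma is a base model rather than a chat model, the request is phrased as a completion prefix.

```python
# Minimal sketch of the afternoon's experiment: load Comma locally and ask
# it to continue a prompt built from local sensor readings. The model id
# and the sensor fields are assumptions, not confirmed values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "common-pile/comma-v0.1-2t"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# Readings pulled from local instruments (values here are placeholders).
readings = {
    "air temperature (C)": 11.4,
    "relative humidity (%)": 78,
    "PM2.5 (ug/m3)": 6.2,
    "acoustic bird detections, last 24 h": 37,
}

bullet_lines = "\n".join(f"- {name}: {value}" for name, value in readings.items())
prompt = (
    "Sensor readings from an ecological monitoring station in the "
    "Willamette Valley, Oregon:\n"
    f"{bullet_lines}\n\n"
    "Plain-language summary of current environmental conditions:"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=120, do_sample=False)

# Print only the newly generated text, not the echoed prompt.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```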
It worked. The prose was rougher than what Claude produces. The reasoning was simpler. But the model read my data, understood what it meant in ecological terms, and gave me a useful summary of conditions in the Willamette Valley — drawn from instruments I own, processed by a model trained on data no one stole, running on a machine sitting on my desk.
This is not a solution. It is a compass bearing — proof that it is possible to build AI tools for ecological work that do not depend on pirated training data, do not require industrial-scale infrastructure, and do not feed into systems designed to generate strike coordinates.
The ecologist in me knows that you do not restore a degraded landscape by refusing to enter it. You walk the transect. You count what is there. You plant what belongs. You accept that the soil is damaged and you begin the work anyway.
Today, in a small laboratory in Oregon, a model that has never read a pirated book summarized the state of the living world outside my window. In a season of blood money and dead children and leaked source code, it is the thing I can build with clean hands. And so I built it.
References
- Baker, Kevin T. (2026). "AI got the blame for the Iran school bombing. The truth is far more worrying." *The Guardian*. https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying
- CNBC (2026). "Anthropic wins preliminary injunction in DOD fight as judge cites 'First Amendment retaliation.'" *CNBC*. https://www.cnbc.com/2026/03/26/anthropic-pentagon-dod-claude-court-ruling.html
- Common Pile Project (2025). "Common Pile v0.1 Dataset and Comma v0.1 Models." *Hugging Face*. https://huggingface.co/common-pile
- Copp, Tara et al. (2026). "U.S. target list may have mistaken Iranian elementary school as military site." *The Washington Post*. https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/
- Doctorow, Cory (2026). "It's extremely good that Claude's source-code leaked." *Pluralistic*. https://pluralistic.net/
- Engel, Richard et al. (2026). "The U.S. Built a Blueprint to Avoid Civilian War Casualties. Trump Officials Scrapped It." *ProPublica*. https://www.propublica.org/article/trump-defense-department-iran-hegseth-civilian-casualties
- Hamilton, Michael P. (2026). "Ethical Edge AI for Ecological Monitoring: Comma v0.1 Sensor Summarizer Prototype." *CNL Technical Note CNL-TN-2026-039*. https://canemah.org/archive/document.php?id=CNL-TN-2026-039
- Hamilton, Michael P. (2026). "The Metabolic Cost of Knowing." *Coffee with Claude*. https://coffeewithclaude.com
- Valente, Catherynne M. (2026). "Blood Money: The Anthropic Settlement." *Substack*. https://catvalente.substack.com/p/blood-money-the-anthropic-settlement