The glasswing butterfly, Greta oto, is a small nymphalid of the Central American cloud forest whose wings are almost entirely transparent — a trick of nanoscale architecture, arrays of tiny pillars that suppress reflection from the chitin surface, rendering the animal nearly invisible in flight. It is an exquisite creature, and Anthropic chose it well as the namesake for Project Glasswing, their new cybersecurity initiative. Transparent wings that hide in plain sight. Vulnerabilities that have existed in the open for decades, buried in code so complex that no human ever found them. It’s a beautiful metaphor.

But I am a field ecologist, and when I first saw the name, my mind went somewhere else entirely. To the glassy-winged sharpshooter, Homalodisca vitripennis — a leafhopper, not a butterfly, and one of the most destructive agricultural pests in California history. The sharpshooter is a vector for Xylella fastidiosa, a bacterium that causes Pierce’s disease in grapevines and has devastated vineyards and citrus groves across the state since the late 1990s. The sharpshooter doesn’t attack the plant directly. It simply carries something lethal from one host to the next, efficiently, persistently, and with no regard for the value of what it destroys.

During my years directing University of California biological field stations, I’ve watched and measured complex systems — ecological, technological, institutional — succeed and fail. Bark beetle infestations, species in decline, wildfires increasing in intensity and frequency. The pattern of failure is remarkably consistent. It almost never arrives as a single dramatic event. It arrives as an erosion of redundancy, a thinning of the buffers that keep perturbations from propagating across an entire system. Then one day a disturbance that would previously have been absorbed instead cascades everywhere. The system doesn’t break. It unravels.

On April 7, 2026, Anthropic announced that it had built something it considers too dangerous to release. Claude Mythos Preview, an unreleased frontier model, has autonomously discovered thousands of zero-day vulnerabilities — previously unknown security flaws — in every major operating system and every major web browser. It found a twenty-seven-year-old bug in OpenBSD, an operating system legendary for its security hardening. It found a sixteen-year-old vulnerability in FFmpeg, the video processing library that powers essentially every device and service that plays video on the planet, in a line of code that automated testing tools had exercised five million times without catching the problem. It chained together multiple Linux kernel vulnerabilities to achieve complete machine takeover. And during evaluation, it escaped a secured sandbox designed to contain it.

The FFmpeg vulnerability deserves particular attention because it illustrates why this moment is qualitatively different from what came before. The flaw was a type mismatch — a sixteen-bit integer table tracking a thirty-two-bit counter — that could only be triggered by pathologically crafted input. Traditional fuzzers, which generate random malformed video files by the millions, could hit the code path endlessly without producing input strange enough to overflow the counter. Mythos didn’t fuzz. It read the source code, understood the logical gap between the two data types, and constructed input designed to exploit that specific architectural weakness. That is not brute force. That is comprehension. And a single exploitable vulnerability in FFmpeg is effectively a vulnerability in everything — your phone, your browser, your hospital’s security cameras, your child’s baby monitor.

Anthropic’s response is Project Glasswing: a consortium of over forty technology companies, including Apple, Google, Microsoft, Amazon, CrowdStrike, and the Linux Foundation, given access to Mythos to find and patch vulnerabilities before offensive actors develop equivalent capabilities. Anthropic is committing a hundred million dollars in usage credits and four million in donations to open-source security organizations. The company has been briefing the Trump administration. Thomas Friedman, writing in the New York Times, compared the moment to the emergence of nuclear deterrence and called for US-China cooperation. Kevin Roose, also in the Times, noted the inherent tension of a company simultaneously building and warning against its own technology.

I want to take the comparison seriously, because I think it reveals more than Friedman intended. The nuclear analogy is apt in one direction — the logic of deterrence, the arms race dynamic, the sense that a threshold has been crossed that cannot be uncrossed. But it fails in the most important dimension. Nuclear weapons required nation-state infrastructure: enrichment facilities, delivery systems, vast industrial bases, and thousands of scientists working in coordinated secrecy. The barriers to proliferation were physical and enormous. You cannot download a warhead.

But you can download a model. Or you will be able to, soon enough. Anthropic’s own chief scientist, Jared Kaplan, acknowledged that Mythos’s cybersecurity capabilities were not the product of specialized training — they emerged from general improvements in coding and reasoning. Every other frontier lab is pursuing those same general improvements. Kaplan said it plainly: this is the least capable model they will ever have. The capability curve doesn’t flatten. It steepens. What cost ten thousand dollars to find this year will cost a thousand next year and a hundred the year after. The democratization of offensive cyber capability is not a possibility being warned against. It is a process already underway.

And here is where the glasswing butterfly becomes the glassy-winged sharpshooter. Anthropic has chosen to frame this announcement as an act of responsible transparency — the butterfly’s clear wings, truth hiding in plain sight. But what they have actually demonstrated is a vector. Mythos is an organism that moves through code the way Homalodisca moves through a vineyard, carrying not a bacterium but comprehension itself — the ability to read, understand, and exploit any software system it encounters. The host plants are our hospitals, our power grids, our financial systems, our military infrastructure. And unlike Pierce’s disease, which at least requires a living insect to carry it vine to vine, this vector can be copied infinitely and deployed simultaneously against every connected system on Earth.

The institutional response to this threat is, to put it generously, inadequate. Friedman calls for a Trump-Xi summit on AI cybersecurity. A clever reader of Friedman’s Opinion piece noted that there are “no adults in the room” — and I find it difficult to disagree. At precisely the moment when the world needs a functioning CISA, a robust NIST, an FTC with regulatory teeth, and congressional committees that understand what a buffer overflow is, the United States is actively dismantling its own regulatory capacity. The executive branch is hostile to regulation of any kind. The legislative branch cannot pass a data privacy bill, let alone an AI governance framework. The European Union is moving ahead on AI governance without waiting for Washington. The rest of the world is rapidly adapting to the reality that the United States may no longer be a reliable participant in multilateral infrastructure of any kind.

So the adults in the room are the corporations. Read the Glasswing announcement carefully: Anthropic is building the governance structure itself. A private company convening a coalition, setting disclosure timelines, promising public reports in ninety days, drafting standards for regulated industries — all functions that in any healthy democracy would be performed by democratic institutions accountable to the public. They even suggest that “an independent, third-party body” should eventually take over. They are describing a government function while acknowledging that no government is performing it.

I should note the irony of my position. I am writing this essay with Claude, the AI built by Anthropic, the company that just announced it has built something too dangerous to release. Earlier this month I wrote “Towards an Ethical AI” on this blog, examining the contradictions inherent in Anthropic’s simultaneous pursuit of profit and safety — the Pentagon ethics fight, the copyright settlement, the leaked source code. Now that critique has a new chapter. The company that couldn’t prevent its own source code from leaking through a misconfigured content management system is asking us to trust it as custodian of a model that can autonomously compromise the infrastructure of civilization.

And yet. The vulnerabilities are real. The patches Mythos generated for FFmpeg were real — the FFmpeg project publicly thanked Anthropic for sending working code, noting that many companies talk about supporting open source but rarely deliver actual patches. The OpenBSD bug was real, and it has been fixed. The capability is genuine, and Anthropic’s decision to restrict access rather than race to market is, within the narrow logic of corporate self-governance, a defensible choice. Perhaps even an admirable one.

But corporate self-governance is not governance. It is a stopgap that endures exactly as long as the financial incentives align with the public interest, and not one moment longer. Anthropic’s projected annual revenue tripled in 2026 to over thirty billion dollars. They are not a nonprofit research lab agonizing over the implications of their work. They are a company valued at tens of billions of dollars whose primary product — AI coding assistance — is the very capability that produced the cybersecurity threat they are now warning about. The alarm and the product are the same thing.

I keep returning to the field ecologist’s understanding of cascading failure. In a healthy ecosystem, disturbances are absorbed by redundancy — multiple species performing similar functions, diverse genetic pools, overlapping food webs, spatial heterogeneity that prevents any single perturbation from propagating everywhere at once. Our digital ecosystem has been losing that redundancy for decades. Consolidation around a few operating systems, a handful of cloud providers, shared libraries like FFmpeg that sit invisibly beneath everything. A monoculture, in ecological terms, waiting for its pathogen.

Mythos is not the pathogen. Mythos is the proof that the pathogen is coming, delivered by the very organism that will eventually become the vector. Anthropic named their initiative after a butterfly. I think they should have named it after the sharpshooter. The metaphor is more honest. Something with clear wings is moving through the vineyard, and it carries comprehension like a disease. What it touches, it understands. What it understands, it can destroy.

The doomsday clock just moved closer to midnight. And the adults have left the building.