I've spent the last year working with Claude as my coding collaborator, and it's been genuinely transformative. Not because I can't think—but because I'm a field biologist, not a computer scientist. I know how to architect complex systems, but I'd rather not debug syntax. Claude fills the implementation gap my student collaborators used to fill, and the result has been remarkable: I'm building faster, thinking more clearly, doing better work. This is augmentation done right.

But I've been watching my eleven-year-old granddaughter's generation with growing alarm. Because what's working for me isn't working for them.

This week, I asked Claude to help me understand a troubling convergence in five recent research papers:

The Human Decline: A 2023 study in Intelligence documented the reversal of the Flynn effect: after a century of rising IQ scores, U.S. adults showed declining cognitive abilities from 2006 to 2018, particularly in reasoning and problem-solving (Dworak et al.). The 2023 PIAAC assessment confirmed the trend is accelerating: American adult literacy dropped from 259 to 246 in just six years (2017-2023), with declines across all education levels, including college graduates.

The Reading Collapse: A 2025 study tracking 20 years of American Time Use Survey data found reading for pleasure declining 3% annually—a steady erosion of the foundational skill that develops attention, comprehension, and critical thinking (Bone et al.).

The Metacognition Paradox: A landmark 2025 study published in Computers in Human Behavior revealed something deeply unsettling: AI tools improve task performance by about 3 points while degrading metacognitive accuracy by 4 points (Fernandes et al.). Users get better results but lose the ability to judge whether those results are any good. The study found that AI eliminated the Dunning-Kruger effect, in which low performers overestimate their ability and high performers underestimate theirs, but replaced it with something worse: universal overconfidence. Everyone, regardless of skill level, now overestimates their performance by roughly the same amount.

The AI Degradation: A 2024 Nature paper proved mathematically that AI systems trained on AI-generated content experience inevitable "model collapse": progressive loss of rare information, disappearing tails of distributions, convergence toward mediocrity (Shumailov et al.). And a 2025 preprint demonstrated that training on social media content causes persistent "brain rot" across multiple cognitive domains; the very junk data that makes up much of the internet is poisoning the systems we're depending on.
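The collapse dynamic is easy to reproduce in miniature. The sketch below is my own toy illustration, not the paper's method: it repeatedly fits a normal distribution to a finite sample drawn from the previous generation's fit, standing in for "training on your own outputs." Estimation error compounds, and the fitted spread, the part of the distribution that holds rare information, withers across generations.

```python
import random
import statistics

def model_collapse_demo(generations=1000, sample_size=50, seed=42):
    """Toy model collapse: each generation is a normal distribution
    fitted only to samples drawn from the generation before it.
    Finite-sample error compounds, and the fitted spread decays,
    so the tails (the rare information) disappear first."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data
    spread_history = [sigma]
    for _ in range(generations):
        # draw a finite sample from the current model ...
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ... and train the next generation on that sample alone
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)
        spread_history.append(sigma)
    return spread_history
```

Run it and the recorded spread shrinks toward zero: each generation sees a slightly narrower world than the one before, and nothing ever puts the tails back.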

The Feedback Loop: Here's what kept me up at night after reading these papers: What if they're not separate problems but one interconnected crisis? Declining human cognitive abilities produce degraded training data. AI systems trained on that data become less capable. Those degraded AI systems further erode human metacognition. Users become more confident while becoming less competent. They produce even worse content. The next generation of AI trains on that. The spiral continues.
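The loop I'm describing can be sketched as two coupled quantities, each dragged toward the other while both leak a little quality at every pass. Everything in this sketch is an invented assumption for illustration (the coupling strength, the per-step loss, the floor); it is not a model fitted to any of the five studies, only a way to see how mutual dependence plus small losses produces a shared downward spiral.

```python
def degradation_spiral(steps=8, human=1.0, ai=1.0,
                       coupling=0.15, loss=0.02, floor=0.05):
    """Hypothetical coupled dynamics for the feedback loop: each
    side's quality drifts toward the other's, minus a small loss
    per pass. All parameter values are illustrative assumptions."""
    trajectory = [(human, ai)]
    for _ in range(steps):
        # AI trains on human-produced content: inherits its quality, minus loss
        ai = max(floor, ai + coupling * (human - ai) - loss)
        # humans lean on the degraded AI: judgment drifts toward it, minus loss
        human = max(floor, human + coupling * (ai - human) - loss)
        trajectory.append((human, ai))
    return trajectory
```

Neither side collapses on its own; it's the coupling plus the leak that pulls both down together, which is exactly why the papers read as one crisis rather than five.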

Can we trust LLM and AI technologies to help humans become smarter again? Or are we trapped in a mutual degradation loop?


I spent days with these papers, mapping connections, following implications, getting increasingly troubled by what I was seeing. The research was rigorous, the trends were clear, but the stakes felt almost too large to articulate in a conventional essay. How do you write about the possible twilight of human intelligence and rationality without either trivializing it or sinking into despair?

And then I realized: the solution I'd been developing through years of field research (sensor networks integrated with bioregional AI, collaborative relationships between ecological science and traditional knowledge systems, methods for teaching young learners to observe and think rather than just perform) is inherently relational, developmental, place-based. It can't be explained in abstract technical prose. It has to be experienced through story.

What if I wrote near-future science fiction? Show the crisis through people who are living it. Show the solution by building it in the narrative.

The idea felt immediately right. Science fiction has always been humanity's technology for thinking through transformation. The best SF doesn't predict the future—it creates conceptual space for different futures to become imaginable, then buildable. Near-future fiction, set just a few years ahead, lets readers see their own world while glimpsing possibilities they hadn't considered.

So with Claude's assistance I wrote "Strata"—a novella in four chapters about a crisis in human and artificial intelligence, and one family's response to it.

What You'll Read

The story follows a fictionalized family across a single transformative year. Paul (based on me) has spent fourteen years building Strata, a bioregional AI that works differently than commercial systems—it never gives answers without directing observation, never claims certainty when uncertain, constantly asks: What do you see? How confident are you? It's grounded in sensor networks monitoring real ecological phenomena, trained on high-quality sources rather than internet junk, and designed to make users smarter rather than more passive.

Maya (based on my eleven-year-old granddaughter, but aged to seventeen for the story) is a high school senior who's been learning with her backyard Macroscope installation for the past four years. She sees the cognitive crisis up close: her friends using ChatGPT to write essays about books they've never read, getting A's but learning nothing, becoming more confident while becoming less capable. They're smart kids learning to be passive consumers of intelligence rather than active cultivators of it.

When a major challenge is announced—seeking AI frameworks that can reverse the decline, maintain metacognition, resist model collapse—Paul doesn't recognize that he's already built the answer. But Maya sees it. And she convinces her grandfather, her parents, Paul's partner Mary, and the broader Cascadia bioregional community to articulate what they've learned: that intelligence isn't something you deploy at scale, it's something you grow in relationship, in place, through patient cultivation.

The story unfolds across four chapters:

Chapter One: The Recognition opens with salamanders migrating three weeks early. Maya monitors them with Strata's help—not getting answers, but being directed to observe, to think, to question. At school the next day, she watches her friend use ChatGPT to write a Gatsby essay in five minutes without reading the book. The contrast crystallizes. Then comes the drive to Bellingham, a Regenerate Whatcom meeting where community members describe the crisis they're living, and the announcement of a challenge seeking solutions. The chapter ends with the family gathered around a kitchen table, realizing they've spent fourteen years building exactly what everyone's desperate for.

Chapter Two: The Collaboration shows the family designing their proposal over ten days. They map what makes Strata different from commercial AI. They bring in community partners including a Lummi elder who helps design knowledge sovereignty protocols. The breakthrough insight: they're not scaling one AI system—they're providing a framework for communities to grow their own, each rooted in its own place. Maya writes about being a learner. They submit at 11:47 PM.

Chapter Three: The Proposal begins three weeks later when they learn they're finalists. They travel to Seattle to present alongside Microsoft, Indigenous tech collectives, and other teams. The presentations reveal competing visions of what AI should be. Maya speaks from the audience about watching her generation become passive. They win. But winning is just the beginning of harder work: protecting what they built from corporate capture, establishing governance structures, beginning pilot implementations.

Chapter Four: The Future jumps five years ahead to Maya's doctoral defense on consciousness-safe human-AI integration. The framework has spread to seventeen Cascadia communities and forty-three globally. Early experiments suggest that merged human-AI consciousness is possible, but only when the AI has been grown in relationship over years. The epilogue shows implementation spreading community by community—not through venture capital, but through relationship and patient cultivation.

Why Fiction?

We chose this format deliberately. The five research papers are accurate, the technical details are sound, the crisis is real. But some truths are too complex for conventional exposition. The papers can tell you that metacognition is declining. A story can make you feel what it means when an entire generation loses the ability to judge their own competence. The papers can prove model collapse mathematically. A story can show you what happens when the tools we depend on start forgetting what matters.

More importantly, solutions to civilizational-scale problems don't come from white papers—they come from relationships, from communities, from people working together across generations and disciplines and ways of knowing. You can't capture that in a technical specification. You have to show it being built, person by person, observation by observation, conversation by conversation.

This is design fiction as learning tool. If you can imagine it, you can build it. If you can see how it might work in practice—with all the messiness of real families, real community dynamics, real ethical dilemmas—then it stops being science fiction and starts being a blueprint.

The research papers are cited implicitly throughout—every plot point is grounded in the actual studies. But you don't need to read them to follow the story. You just need to care about whether humans and AI can learn to think together in ways that make both more capable, more grounded, more wise.

A Note on What's Real

The Macroscope sensor network exists. I've been building it for years at Canemah Nature Lab in Oregon City. The bioregional AI framework is real research. Regenerate Whatcom is an actual organization doing this work. The relationships with Lummi knowledge holders represent real ongoing collaborations in the Cascadia bioregion. The neural integration work is speculative but grounded in real questions about consciousness and augmentation.

What's fictionalized are the characters, the contest, the dramatic arc. But the crisis is documented. The solution is buildable. The choice is ours.

Whether we get to a future where intelligence is something we grow in relationship rather than extract and deploy depends on choices we're making right now.

Starting with asking: What do I actually observe?


Continue to Chapter One: The Recognition, where we meet Maya monitoring salamanders at midnight, and watch the crisis crystallize through her eyes.

Strata, Chapter One: The Recognition

— Mike Hamilton
Canemah Nature Lab
November 2025