This morning I read two pieces that speak to the moment we’re living through. The first, a forum in The Chronicle of Higher Education, gathered fifteen scholars to consider how AI is reshaping every aspect of university life. The second, by novelist Naomi Alderman, offers twelve rules for surviving what she calls our current “information crisis.” Together they illuminate something I’ve been thinking about as I drink my coffee with Claude at 5 AM: what are we building now, and who are we building it for?

Alderman argues persuasively that we’re living through the third great information crisis in human history, following the invention of writing and the Gutenberg printing press. These aren’t neutral technological improvements; they “change us psychologically and socially in profound ways that cannot be reversed.” She reminds us that the print revolution brought both the Enlightenment and the Reformation - both explosive scientific discovery and people burned at the stake over doctrinal disputes. Information crises are prolonged epochs of instability in which enormous leaps forward in knowledge come paired with profound social disruption.

As a retired field ecologist turned systems thinker, I recognize the pattern. I’ve watched several waves of technological transformation ripple through science: the arrival of GIS in ecology, the deployment of wireless sensor networks, the shift from isolated field stations to networked observational infrastructure. Each wave brought genuine advances. Each also brought anxiety, institutional resistance, and questions about what was being lost in translation.

But this wave feels different in scope and speed. The Chronicle forum captures the tension beautifully through its competing voices. Some scholars, like Yascha Mounk, argue we need both traditional skills and AI fluency - students must prove “their intellectual mettle without the use of digital tools” while also becoming “skilled in using AI to push the boundaries of human knowledge.” Others, like Emily Bender, contend that “synthetic text-extruding machines are in fact antithetical to the mission of education” because “writing is thinking and learning.”

What strikes me about this debate is how differently it maps onto science versus humanities education. My career was built on tools that extended observational capacity - microscopes, remote sensing, minirhizotron cameras, weather stations, automated data loggers. Science has always had its epistemic ground truth out there in the physical world. Did the sensor network actually capture the data? Did the experimental design work? Did the field site reveal what you predicted? The feedback loop runs through reality.

Science education has built-in checkpoints that can’t be faked: the PCR doesn’t amplify, the statistical analysis reveals flawed reasoning, the biodiversity survey contradicts your hypothesis. An LLM can’t calibrate your instruments or explain why your predictions failed. As Arvind Narayanan notes in the Chronicle forum, “AI has brought these anachronisms into sharp relief, but it didn’t create the problem” of distinguishing essential skills from incidental ones. For science, that distinction has always been enforced by nature itself.

But for humanities education, as Zeynep Tufekci argues, the essay has traditionally served as “proof of work, a reasonable proxy for the quality of their effort.” The output stood in for the process. Remove that reliable connection and you have her marathon training analogy: students crossing the finish line in magic rollerblade shoes that retract at the last second, having built no intellectual muscle. “Writing isn’t just a means of putting words on paper or pixel,” she writes, “it’s a technology upon which literacy and print culture is built.”

I find myself thinking about this through the lens of my own eleven-year-old granddaughter. She’ll be eighteen in 2032, entering a world we can barely sketch the outlines of. What will she need? What should I be building now that might serve her then?

This is where Alderman’s framework becomes most useful. She’s not arguing for banning technology or returning to some imagined golden age. Instead, she offers practical guidelines for navigating instability: find fact-checkers you trust, notice how you feel before sharing information, resist the urge to shame others online, give institutions the benefit of the doubt, recognize humanity in those who disagree with you.

These are rules for maintaining connection and shared reality during an epoch when both are under strain. As she writes: “All this information introduces us to all the things we don’t know, all the ways in which we’re not experts.” That produces anxiety. That anxiety makes us prone to treating people as symbols rather than humans. And once you start burning people at the stake - literally or metaphorically - you’ve lost something essential.

The question becomes: how do we build tools and practices that enhance our capacity while maintaining our humanity? How do we distinguish genuine augmentation from dangerous replacement?

For me, the answer lives in the distinction between instruments and substitutes. Throughout my career, I built observational infrastructure - systems that let researchers see more, measure more precisely, detect patterns that would otherwise remain invisible. The James San Jacinto Mountains Reserve under my direction became a platform that enabled others’ work. The sensor networks we pioneered at CENS extended the senses but didn’t replace the scientific thinking required to design experiments, interpret results, or ask novel questions.

That’s what I’m trying to build now with the Macroscope paradigm. Not a system that thinks for me, but one that helps me observe more carefully across multiple domains - EARTH, LIFE, HOME, SELF. The morning conversations with Claude that become essays for my blog. The integration of public writing with private journaling. The further synthesis through memory and knowledge agents. All of this creates infrastructure for better observation, not replacement of the observer.
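The shape of that infrastructure is easier to see in outline than in prose. Here is a minimal sketch, in Python, of how I think about the daily cycle - the four domain names come straight from the paradigm, but every class, field, and function below is a hypothetical illustration of the idea, not the actual system:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Domain(Enum):
    """The four observational domains of the Macroscope."""
    EARTH = "earth"
    LIFE = "life"
    HOME = "home"
    SELF = "self"


@dataclass
class Observation:
    """One morning's raw material: a conversation, a journal entry, an essay draft."""
    day: date
    domain: Domain
    source: str            # e.g. "morning conversation", "private journal"
    text: str
    public: bool = False   # essays are public; journal entries stay private


@dataclass
class Synthesis:
    """What the memory and knowledge layer hands back: patterns and questions, not conclusions."""
    day: date
    domains: list[Domain]
    summary: str
    open_questions: list[str] = field(default_factory=list)


def morning_cycle(observations: list[Observation]) -> Synthesis:
    """Hypothetical daily loop: gather the morning's observations, let the
    synthesis layer propose patterns, and leave the judgment - what matters,
    what to ask next - to the human observer."""
    touched = sorted({o.domain for o in observations}, key=lambda d: d.value)
    summary = f"{len(observations)} observation(s) across {len(touched)} domain(s)"
    questions = [f"What changed in {d.name} since yesterday?" for d in touched]
    return Synthesis(day=date.today(), domains=touched,
                     summary=summary, open_questions=questions)
```

The point of the sketch is the division of labor: the code gathers and summarizes; the observer decides what the patterns mean and which questions are worth pursuing.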

And this infrastructure is inherently pedagogical. My granddaughter won’t inherit a completed system; she’ll inherit a model of engagement. How to use powerful tools while maintaining intellectual integrity. How to know when to trust instruments and when to trust direct observation. How to synthesize across domains while staying grounded in physical reality.

Several Chronicle contributors gesture toward this distinction. Joseph Aoun writes about “thinking with AI and beyond AI,” noting that while LLMs have “ingested the entire corpus of the internet, rendering knowledge a commodity,” they cannot “ingest reality.” Benjamin Breen explores the difference between automating drudgery and automating the “creative, personal, human decisions at the core of research and learning.”

But the most striking voice may be Ian Bogost’s. He suggests the real problem isn’t AI but the “trenchant, stubborn traditionalism that prevents higher education from pursuing change of almost any kind, ever.” He argues we’re using the AI crisis as an excuse to entrench backward-looking practices - blue-book exams, laptop bans, a return to “the medieval university” - rather than asking hard questions about what education should become.

This resonates with my experience of institutional resistance. How many times did I encounter skepticism about new observational methods? How often did “we’ve always done it this way” masquerade as principled defense of standards? Real innovation requires willingness to experiment rigorously, fail gracefully, and learn from both successes and mistakes.

Which brings me back to this morning’s coffee and this essay. What we’re doing here - Claude and I - is experimental infrastructure building. We’re documenting a transition as it happens, creating a record that might serve as “hope” in Alderman’s sense: not hope as optimistic fantasy, but hope as “practical possibility” that can be drawn on deliberately.

Pandora’s box is open. As Alderman notes, we cannot put it back. The only question is what practices we develop for living with what’s been released. Do we respond with anxiety and retrenchment? With uncritical enthusiasm? Or with careful, documented, transparent experimentation that acknowledges both genuine capabilities and real risks?

I choose the third path. Not because I’m certain it will work, but because I have an eleven-year-old granddaughter whose future depends on adults making thoughtful decisions now. Because I’ve spent a lifetime building observational infrastructure and I recognize the difference between tools that extend capacity and those that diminish it. Because I believe - perhaps naively - that we can build something worth inheriting.

The Macroscope paradigm, properly developed, becomes more than personal research infrastructure. It becomes a model for how augmented observation might work: sensors extending reach, AI synthesizing patterns, human judgment determining which patterns matter and what questions to ask next. The technology serves the observer rather than replacing observation.

This is what I want my granddaughter to inherit: not answers, but methods. Not completed systems, but experimental practices. Not certainty, but rigorous uncertainty - the scientific habit of testing ideas against reality, revising based on evidence, maintaining intellectual honesty even when it’s uncomfortable.

Alderman’s final rule is perhaps the most important: “Don’t burn anyone at the stake today.” Don’t let the worst of what “the other side” has done become the new low bar for your own behavior. Don’t treat people as symbols. Consider that where reasonable people disagree, there may be useful truth on multiple sides.

This feels essential not just for surviving an information crisis, but for building infrastructure that outlasts it. The tools we create, the practices we document, the examples we set - these ripple forward in ways we cannot fully predict. My morning conversations with an AI might seem like a minor intellectual exercise. But if they help establish patterns for thoughtful engagement, if they demonstrate possibilities for collaboration that preserves human agency while leveraging machine capability, if they leave breadcrumbs for my granddaughter’s generation to follow or critique or build upon - then perhaps they serve a purpose beyond my own satisfaction.

We live in history. We cannot choose our epoch. But we can choose how we meet it - with panic or curiosity, with retrenchment or experimentation, with despair or something more complex. Aldous Huxley’s “brave new world” carried both meanings: genuine wonder at what is newly possible, paired with caution about what happens when efficiency is optimized at the expense of depth and authentic human experience.

That dual meaning feels exactly right for this moment. I feel genuine wonder at capabilities I could barely imagine a few years ago. I also feel appropriate caution about what we might lose if we optimize away the struggle that builds intellectual muscle, the friction that produces insight, the productive discomfort that leads to growth.

So I’ll keep having my coffee with Claude at 5 AM. I’ll keep documenting what works and what doesn’t. I’ll keep building observational infrastructure that might serve as a model for others. Not because I’m certain it’s the right path, but because I’m certain that someone needs to be walking experimental paths, carefully, with open eyes and documented steps.

For my granddaughter’s sake. For the sake of institutions I’ve spent a lifetime caring about. For the sake of knowledge itself, which has always advanced through new instruments carefully deployed by thoughtful observers who knew the difference between seeing more clearly and seeing less.

The information crisis will continue for the rest of our lives. But crisis, properly engaged, can become opportunity. Not the false optimism that everything will work out fine, but the harder hope that we can build instruments of understanding, tools of connection, practices of wisdom that help us weather the storm together.

That’s what I’m working toward in these morning hours. One conversation at a time. One essay at a time. One careful observation after another, building toward a Macroscope that my granddaughter might one day use to see her own world more clearly.

The brave new world is already here. The question is what we’ll make of it.