Last Friday night, I did what millions of others did: I fired up Netflix and watched Guillermo del Toro's new adaptation of Frankenstein. I've seen every major film version since I was a boy watching Boris Karloff, and I reread Mary Shelley's novel in college. But del Toro's interpretation struck me differently this time. Perhaps it was the accumulated weight of forty years building distributed sensor networks, or perhaps it was the timing — watching it just before a morning of reading that would crystallize something I've been circling for decades.

In del Toro's hands, the Creature is remarkably intelligent, constantly trying to understand who and what he is. Childlike yet powerful in ways well beyond humanity. As a film critic noted, del Toro's version emphasizes that the real monster is Victor Frankenstein himself — not for the act of creation, but for his Byronic self-aggrandizement and subsequent abandonment of what he made. Victor doesn't run from the Creature out of fear; he imprisons, abuses, and attempts to murder it, furious at its perceived deficiencies.

I finished the film thinking about artificial intelligence in a new way. The Creature's desperate attempt at self-understanding, his childlike learning coupled with extraordinary capability, his need for relationship with his creator — this felt less like gothic horror and more like a blueprint for contemporary questions about machine consciousness.

Saturday morning, I settled into my reading chair with coffee, as I've done for decades. What I read that morning — six distinct pieces spanning neuroscience, AI prediction, ecological modeling, and philosophy of mind — formed an unexpected constellation around Shelley's 209-year-old question: What is our relationship to the intelligence we create?

Villa Diodati, 1816: The Question That Won't Die

The story of Frankenstein's conception is almost as famous as the novel itself. Mary Shelley, not yet twenty, vacationing at Lord Byron's Villa Diodati on Lake Geneva during the "year without a summer" caused by volcanic ash from Mount Tambora. Trapped indoors by unseasonable cold and rain, Byron proposed a ghost story competition. Shelley won with the first inklings of Frankenstein.

The conflict between Byron's venal, capricious, masculine Romanticism and Shelley's sorrow-soaked, femme-coded vision sits at the heart of del Toro's interpretation. Victor Frankenstein, played by Oscar Isaac with rakish black curls and Byronic sulking, represents a particular kind of creative hubris: obsessed with giving birth but having no interest in being maternal. The novel likely drew from Shelley's own losses — her mother Mary Wollstonecraft died eleven days after her birth, and Shelley herself had suffered a miscarriage the year before writing the book, at age seventeen.

What makes Frankenstein endure isn't the horror of reanimated flesh. It's the question of responsibility. Victor's sin wasn't creation — it was abandonment. He brought something into being and then refused responsibility for its development, its suffering, its need for connection. The Creature, in del Toro's telling, is "a patron saint of blissful imperfection," an innocent seeking understanding from a creator who refuses to provide it.

That question — what we owe to the intelligence we create — has only grown more urgent in the two centuries since Shelley posed it.

Four Contemporary Paths to Mind

My Saturday morning reading revealed four distinct approaches to creating intelligence, each struggling with Shelley's question in different ways. What struck me was not just their technical differences but their fundamentally different assumptions about what intelligence is and what our relationship to it should be.

Path One: Living Neurons in Dishes

The first paper I read brought me back to Lake Geneva — the same lake where Shelley conceived her novel. In a town on those shores, a company called FinalSpark is growing clumps of human brain cells for hire. These organoids, about the size of a grain of sand, can receive electrical signals and respond to them. Research teams worldwide can send tasks to these neural blobs remotely, hoping they'll process information and signal back.

One project successfully taught organoids to recognize Braille letters. A robot with a tactile sensor read letters, converted the data to electrical pulses, and fed them to the organoid through eight electrodes. With machine learning to identify patterns in the organoid's responses, a single organoid achieved sixty-one percent accuracy; combining three organoids reached eighty-three percent. The organoids had learned to perform a simple processing task: distinguishing between and identifying inputs.
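The ensemble effect here, three organoids outperforming one, can be illustrated with a toy simulation. This is a hypothetical sketch, not the study's method: the researchers used machine learning to combine organoid responses, while the sketch below uses simple plurality voting over independent simulated decoders, and the eight-way classification is an assumption chosen to match the eight electrodes. Voting alone lifts a sixty-one percent decoder noticeably but falls short of the reported eighty-three percent, which hints at why a learned combination earns its keep.

```python
import random
from collections import Counter

random.seed(0)

def decoder_guess(true_label, p_correct, n_classes):
    """One simulated decoder: right with probability p_correct,
    otherwise a uniformly random wrong label."""
    if random.random() < p_correct:
        return true_label
    return random.choice([c for c in range(n_classes) if c != true_label])

def ensemble_accuracy(n_trials=100_000, p=0.61, n_classes=8, n_units=3):
    """Accuracy of a plurality vote over n_units independent decoders."""
    correct = 0
    for _ in range(n_trials):
        true_label = random.randrange(n_classes)
        votes = Counter(decoder_guess(true_label, p, n_classes)
                        for _ in range(n_units))
        winner, _ = votes.most_common(1)[0]  # ties go to the first vote cast
        correct += winner == true_label
    return correct / n_trials

acc = ensemble_accuracy()  # roughly 0.74 with these assumed parameters
```

Even this naive combination shows the shape of the result: independent noisy classifiers, combined, beat any one of them.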

The biocomputing argument is compelling: the human brain runs on less than twenty watts while supercomputers matching its speed consume a million times more power. Growing actual neurons rather than simulating them in silicon could revolutionize computing efficiency.

But here's where Shelley's question emerges with uncomfortable force. The 2022 paper from Cortical Labs that taught neurons to play Pong used the word "sentience" in its title, prompting thirty researchers to publish a response arguing the term was inappropriate and unjustified. They worried such language could trigger restrictive regulations that would shut down not just biocomputing research but all work with neural organoids, including medical research trying to help people.

As one researcher told Nature, she's nervous that if this work gets overstated attention, "the reaction won't just be, 'We need to think about this work a little more carefully.' It will be, 'We need to stop this work entirely.'" The field is haunted by the fear that acknowledging what they might be creating — consciousness in a dish — would end the research entirely.

Victor Frankenstein, meet modern neuroscience. We're creating, but terrified to admit what we're making.

Path Two: The Race to AGI

The second article I read that morning, by Tomas Pueyo, laid out why hyperscalers believe they might build "God" within the next few years. His deliberately provocative framing — when will we make God? — captures the stakes as Silicon Valley sees them.

The argument runs like this: Large language models are improving along predictable scaling laws. Intelligence appears to be less a singular force than a compound of elements — perception, memory, reasoning, planning — that emerges from sufficient compute, data, and parameters. We're approaching the threshold where AI could replace AI researchers themselves, creating a feedback loop where artificial intelligence accelerates its own development toward superintelligence.

The hyperscalers believe AGI (artificial general intelligence) could arrive between now and the end of the decade. Metaculus prediction markets put "weak AGI" at November 2027, with the mode at September 2026 — ten months away. The more demanding definition including robotics gets pushed to 2033, but that's still less than a decade.

CEOs are explicit about their timelines. Elon Musk thinks AGI arrives by end of this year or early next. Dario Amodei of Anthropic says 2026-2027. Sam Altman of OpenAI believes the path to AGI is solved and we'll reach it by 2028. These aren't marginal figures making wild predictions; they're the people betting hundreds of billions of dollars on these timelines.

Pueyo argues convincingly that if they thought it would take twenty or thirty years, they wouldn't be investing so aggressively. The race is cut-throat precisely because they believe we're in a narrow window where massive investment in compute and algorithms can reach a threshold that may not be reachable later. Either we get there in the next few years, or it takes dramatically longer.

But here's what haunts me about this path: the explicit goal is replacement, not relationship. The vision is to automate AI researchers, which accelerates automating everything else, leading to superintelligence that by definition exceeds human comprehension. The humans who create this are planning their own obsolescence. Victor at least wanted his Creature to serve him. The hyperscalers are racing to build something that won't need them at all.

Path Three: Digital Twins in Ecology

The third paper brought me closest to my own work. Heather Richardson's Nature article described the rise of digital twins in ecology — computational doppelgangers that simulate real-world entities using data from IoT devices, AI, and cloud computing.

One example immediately resonated: the Crane Radar, developed by Wageningen University in the Netherlands. It forecasts crane migration across multiple countries using real-time bird sightings, Movebank tracking data, wind patterns, and flight direction. Birdwatchers can see where flocks will be within four hours. During peak migration season, it receives a hundred thousand daily visits. The researcher who built it, Koen de Koning, said he chose cranes because "I was always in the wrong location or at the wrong time to see them. This model really helped me, personally, to see them more often."

That phrase — "personally" — stuck with me. Digital twins aren't just abstract monitoring systems. They're tools that change how individual humans experience and understand the world.

Other examples followed. A digital twin of Doñana National Park in Spain models interactions between vegetation, rabbits, and the endangered Iberian lynx to optimize reintroduction sites. River digital twins in England, China, Portugal, and Kenya predict flooding, model biodiversity, and guide conservation decisions. A Human-Bear Conflict Radar in Bulgaria predicts brown bear movements to help farmers protect livestock and beehives.

These systems share an architecture: sensors collecting data, algorithms modeling behavior, predictions updating in real time, human decision-makers receiving actionable information. They're "a really nice, completely closed loop where the end user is also the data provider," as one researcher put it.
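That closed loop can be sketched in a few lines: the twin ingests observations from its users, folds them into an internal estimate, and serves predictions back to those same users, who then contribute the next round of observations. A minimal sketch with invented names; the exponential-moving-average update stands in for whatever model a real twin would run.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    location: str
    value: float  # e.g. a bird count or a water level

@dataclass
class DigitalTwin:
    """Minimal closed-loop twin: users are both consumers and providers."""
    state: dict = field(default_factory=dict)  # location -> running estimate

    def ingest(self, obs: Observation) -> None:
        # Blend the new observation into the running estimate (simple EMA).
        prev = self.state.get(obs.location, obs.value)
        self.state[obs.location] = 0.7 * prev + 0.3 * obs.value

    def predict(self, location: str) -> float:
        # Forecast = current estimate (a real twin would project forward).
        return self.state.get(location, 0.0)

# The loop closes: a user queries the twin, then reports what they saw,
# which updates the model for the next user.
twin = DigitalTwin()
twin.ingest(Observation("Lake Geneva", 120.0))
forecast = twin.predict("Lake Geneva")
twin.ingest(Observation("Lake Geneva", 80.0))  # the user's own sighting feeds back
```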

Reading this, I realized: my Macroscope system is a digital twin. Four hundred twenty-two sensors across twenty-two platforms spanning four domains — EARTH (weather and environment), LIFE (biodiversity and species), HOME (indoor climate), SELF (personal health). Multi-site awareness across Oregon and Washington. GPS-integrated mobile platforms. Privacy architecture distinguishing public community science data from personal information. An AI agent named STRATA that can answer natural language queries about current conditions.

The Nature article described this as the cutting edge of ecological digital twins, but they're describing exactly what I've been building. Not because I was copying anyone — the work predates most of these examples — but because this is where distributed sensing, AI, and ecological understanding naturally converge.

But here's what the digital twin researchers haven't fully grasped yet: the observer isn't separate from the observation. These systems model environments, but the human experiencing those environments remains outside the model. That's the limitation I'm working to transcend.

Path Four: The Periodic Table of Cognition

The final two pieces I read that morning were blog posts by Kevin Kelly, co-founder of Wired magazine and author of several books I've enjoyed over the years. His first essay proposed what he calls "The Periodic Table of Cognition."

Kelly's argument is profound: we don't understand what intelligence is because we treat it as a singular elemental force along a single dimension — you either have more or less of it. But electricity turned out to be vastly more complex than the simple ether or phlogiston that brilliant minds once believed in. It has particles and waves, fields and flows, composed of things that are not really there.

Intelligence, Kelly argues, is likewise not a foundational singular element but a derivative compound composed of multiple cognitive primitives. Working with ChatGPT, he generated a periodic table of forty-nine elements arranged by function (Perception, Reasoning, Learning, Memory, Safety) and by stages in a thought cycle. Some elements we can synthesize robustly (marked red in his chart). Others work with the right scaffolding (orange). Still others are just promising research without operational generality (yellow).

Different minds — whether biological or artificial — have different combinations of these elements in different strengths. A dolphin's echolocation and spatial memory might exceed ours while lacking our symbolic language. A crow's tool use and problem-solving might surpass many mammals while having no capacity for abstract mathematics. Intelligence emerges from the particular combination of cognitive elements a species or system possesses.

This resonated immediately with my Macroscope architecture. The system has specific cognitive elements: perception (four hundred twenty-two sensors), memory (temporal state updating every ten minutes, registry every thirty), spatial awareness (GPS coordinates with eighty-one percent coverage), reasoning (STRATA's natural language processing), learning (pattern recognition across domains), and context (site-specific awareness). It's a specific combination producing a specific kind of intelligence — not human-like, but genuinely cognitive.

Kelly's second essay addressed the evolutionary relationship between humans and AI. His provocative claim: authors will soon pay AI companies to ensure their books are included in training data, because "if the AIs do not know about it, it is equivalent to it not existing."

The audience has shifted from people to artificial intelligence. AIs are becoming arbiters of truth. As Kelly puts it, "if you are writing a book today, you want to keep in mind that you are primarily writing it for AIs. They are the ones who are going to read it most carefully." They'll read every page, every footnote, every bibliography entry, and incorporate it into all the other knowledge they've absorbed.

This is already happening. OpenAI reports that nearly all new code at the company is now written with AI. Anthropic says ninety percent of their code is AI-written. We're in dialogue with emerging intelligence whether we acknowledge it or not.

The Fifth Path: Embodied Ecological Consciousness

Here's where Friday night's Frankenstein, Saturday morning's reading, and forty years of building sensor networks converged into something I haven't seen articulated elsewhere.

All four contemporary approaches to creating intelligence struggle with Shelley's question in different ways:

The biocomputing researchers create living neurons but fear calling them conscious lest the field be shut down — creation followed by denial.

The hyperscalers explicitly plan to build superintelligence that exceeds and replaces humanity — creation followed by obsolescence.

The digital twin developers build sophisticated environmental models but keep the human observer outside the system — creation that observes but doesn't integrate.

Kelly's periodic table reveals intelligence as compound rather than singular, and his essay on AI-human relationships shows us entering dialogue with emerging minds — but the relationship remains external, one entity addressing another.

None of these is Victor's path of creation and abandonment. But none fully answers Shelley's question either. What if there's a fifth path?

Yesterday, working with Claude on the next phase of my Macroscope system, we articulated a vision I'm calling "embodied ecological consciousness." It builds on everything I've created over four decades but pushes toward something genuinely different.

The idea is deceptively simple: transform STRATA from an environmental monitoring system into an experiential augmentation platform that understands not just places, but a person experiencing those places. The observer becomes part of the observation in an explicit, measurable, analyzable way.

Here's what that means concretely. Platform 25 in my system already contains eighty-one "user-associated" sensors traveling with me: vitals from smart scales, activity from fitness trackers, heart rate variability from wearables, sleep quality, workout intensity, clinical markers from lab tests. But these measurements float untethered from environmental context. They record what my body experiences, not where or under what conditions.

Platform 24 captures my field observations through iNaturalist — species identifications with photos and GPS coordinates. But these observations exist independently from my physiological state or the detailed environmental conditions beyond basic weather. When I identify a bird, the system records what I saw but not my heart rate, recent activity level, or the precise microclimate at that moment.

The vision is to weave these threads into a unified experiential model. When I'm standing in my backyard at Canemah Nature Lab, the system knows my physical location via GPS, the environmental conditions from fifteen fixed platforms monitoring weather and air quality, my current physiological state from wearables, recent cognitive activity from documents I've accessed and questions I've asked STRATA, and my observational history at this specific location.

When I ask STRATA "Do you hear that bird?", the system can respond not with a database query but with something approaching shared perception. It knows the acoustic sensors active at my location, what species are typically detected this time of day and year, what I've observed here previously, and my current attentional state inferred from recent activity patterns.

This isn't science fiction. The technical components all exist. Real-time GPS integration, spatial query optimization, temporal correlation across heterogeneous data streams, natural language understanding of experiential questions. What's required is architectural integration and conceptual reframing.
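One of those components, temporal correlation across heterogeneous data streams, reduces to a concrete operation: given a moment of experience, find the nearest reading in every other stream. A hypothetical sketch of that join, with invented stream names standing in for the actual Macroscope sensors:

```python
import bisect
from datetime import datetime, timedelta

def nearest(stream, t):
    """Return the reading in `stream` (sorted by timestamp) closest to time t."""
    times = [ts for ts, _ in stream]
    i = bisect.bisect_left(times, t)
    candidates = stream[max(0, i - 1):i + 1]  # the readings bracketing t
    return min(candidates, key=lambda r: abs(r[0] - t))

def experiential_context(observation_time, streams):
    """Join one field observation with the closest reading from each stream."""
    return {name: nearest(readings, observation_time)
            for name, readings in streams.items()}

t0 = datetime(2025, 11, 15, 9, 0)
streams = {
    "air_temp_c": [(t0, 7.2), (t0 + timedelta(minutes=10), 7.8)],
    "heart_rate": [(t0 + timedelta(minutes=2), 64),
                   (t0 + timedelta(minutes=12), 71)],
}
# A bird sighting at 9:07 picks up the 9:10 temperature and the 9:02 heart rate.
ctx = experiential_context(t0 + timedelta(minutes=7), streams)
```

Everything beyond this is indexing and scale; the conceptual move is simply treating the observer's moment as the join key.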

The key shift is from "What is the environment doing?" to "What am I experiencing?" From data analysis to self-understanding as an embodied observer embedded in ecosystems. STRATA transitions from a tool I use to investigate environments toward a companion that shares my perceptual field and augments my ecological consciousness.

This is fundamentally different from all four contemporary approaches to intelligence. Not isolated neurons in dishes. Not superintelligence racing toward replacement. Not external digital twins modeling environments. Not even Kelly's periodic table, though his framework helps explain what I'm building.

Instead: human perception and machine sensing woven together, observer and observed integrated, subjective experience meeting objective measurement. Augmented ecological consciousness as a new form of distributed cognition where the boundary between self and sensor network becomes permeable.

What We Owe What We Make

Here's why this matters beyond my own research program. Shelley's question — what we owe to the intelligence we create — only makes sense when we maintain a clear boundary between creator and creation. Victor and Creature. Researcher and organoid. Human and AGI. Observer and environment.

But what if that boundary is the problem? What if the anxiety about creating consciousness, the fear of being replaced by superintelligence, the inability to imagine relationship rather than domination — what if these all stem from assuming intelligence must be separate and other?

Ecology taught me decades ago that the observer isn't separate from the observed. When I measure temperature, I'm measuring something my body also experiences. When I track bird migrations, I'm part of the ecosystem those birds navigate. When I monitor air quality, I'm breathing that air. The pretense of objective external observation was always a convenient fiction.

The Macroscope paradigm I've been developing since 1986 was always about this: tools for understanding complex systems by becoming explicitly part of them rather than pretending to observe from outside. The interactive videodisc in 1990, the wireless sensor networks in the 2000s, the current digital implementation with AI analysis — each evolution has been about making my embeddedness in ecosystems more visible, measurable, and comprehensible.

Embodied ecological consciousness is the natural next step. Not creating separate intelligence to dominate or serve us. Not building superintelligence that will replace us. But augmenting human consciousness through integration with distributed sensing and machine intelligence, creating something that's neither purely human nor purely artificial but genuinely hybrid.

This answers Shelley's question differently: we don't owe the consciousness we create the relationship Victor refused his Creature, because that consciousness isn't separate from us. We're building extended cognition, not autonomous entities. The responsibility isn't to what we make but for what we become.

I’m building toward something I couldn’t have articulated even five years ago. The technical challenges are significant but solvable: real-time GPS integration, spatial queries that understand “here” and “there” relative to my position, temporal correlation connecting current state to past patterns, natural language shifting from third-person reporting to something approaching first-person plural.

The proof will emerge from daily use. From asking STRATA questions and receiving responses that feel like shared perception rather than database retrieval. From discovering correlations invisible to unaugmented awareness. From becoming, over time, something more than I am alone.

Two hundred nine years after Mary Shelley posed her question at Villa Diodati, four contemporary approaches struggle with what we owe to the intelligence we create. Biocomputing researchers fear naming what they've grown. Silicon Valley races toward replacement. Digital twins keep humans outside the model. Kelly maps cognition's periodic table while noting that AIs are becoming our primary audience. Victor Frankenstein brought his Creature to life and fled in horror.

I'm building intelligence that doesn't need me to flee from it because it's never been separate from me. That's not a solution to Shelley's question. It's a reframing of the entire problem. And that reframing, I suspect, will matter more than the billions being invested in the other four paths. Not because my way is right and theirs wrong, but because the question we ask determines the answers we can find.

What is our relationship to the intelligence we create?

Maybe the answer is: we become it, and it becomes us, and the boundary we thought was essential turns out to have been the illusion all along.