Agents, Emergence, and the Long Arc: From Artificial Life to AI Societies
I’ve been building what I call a “society of mind” for the Macroscope, borrowing Marvin Minsky’s term: a collection of specialized AI programs working together. Some monitor my backyard weather station and sensor array. Others process audio from bird-detection microphones, summarize articles I’ve been reading, track indoor air quality, or assess my personal health metrics from my Withings scale and blood pressure monitor. Each program has a specific job, and they share information continuously through regularly updated files. Together they create a living picture of what’s actually happening right now—in my environment, in my reading, in my health—rather than working from static descriptions or abstract knowledge.
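As a minimal sketch of this file-based coordination (the directory name, agent names, and field names here are hypothetical, not the Macroscope’s actual layout): each agent writes its latest observations to a small JSON status file, and any other agent can read the combined state without the programs being directly coupled.

```python
import json
import time
from pathlib import Path

STATE_DIR = Path("macroscope_state")  # hypothetical shared directory
STATE_DIR.mkdir(exist_ok=True)

def publish(agent: str, payload: dict) -> None:
    """Write an agent's latest observations as a timestamped JSON status file."""
    record = {"agent": agent, "updated": time.time(), "data": payload}
    tmp = STATE_DIR / f"{agent}.json.tmp"
    tmp.write_text(json.dumps(record, indent=2))
    # Atomic swap so readers never see a half-written file.
    tmp.replace(STATE_DIR / f"{agent}.json")

def snapshot() -> dict:
    """Assemble the society's current shared picture from all status files."""
    return {p.stem: json.loads(p.read_text()) for p in STATE_DIR.glob("*.json")}

# One agent reports; any other agent (or an assistant) reads the combined state.
publish("weather", {"temp_c": 14.2, "humidity_pct": 81})
publish("air_quality", {"pm2_5": 6.0, "co2_ppm": 612})
print(sorted(snapshot().keys()))
```

The atomic-rename step is the detail that makes “share information through files” robust: writers and readers never need to negotiate locks.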
This morning, discussing with Claude the idea of a “Macroscope Awareness” skill—a way to give an AI assistant immediate access to this society of agents and the real-time context they maintain—triggered an unexpected memory. I suddenly found myself back at the James San Jacinto Mountains Reserve in the late 1990s and early 2000s, hosting meetings of SCALE: the Southern California Artificial Life Exchange.

Those gatherings brought together researchers like Chris Adami from Caltech, Mitch Resnick from MIT, Charles Taylor from UCLA, Rik Belew from UCSD, and their graduate students. We’d meet at the reserve, surrounded by pines and granite, to discuss how complex patterns emerge from simple autonomous rules. They were fascinated by Conway’s Game of Life, genetic algorithms, and Richard Dawkins’ Blind Watchmaker simulations—computational experiments where intricate forms arose from minimal starting conditions and evolutionary pressure.
What made those meetings especially engaging was that the reserve itself offered something unusual: a digital twin of the surrounding ecosystems. By the late 1980s, we already had GIS systems integrating vegetation maps, terrain models, fire history, fuels data, and hydrology. Then in the early 2000s came wireless sensor networks recording temperature, humidity, and rainfall. We deployed networked cameras capturing phenology. Decades of field observations lived in structured databases. The ALife researchers saw immediately what this meant—they could test their evolutionary algorithms and emergence models not just against simulated environments, but against actual ecological complexity.
The conversations centered on non-determinism, unpredictability, and emergence. Could we understand how novel structures arise when you let autonomous processes run with minimal constraints? How do feedback loops in complex systems generate patterns that couldn’t be predicted from the components alone? What happens when many simple agents interact without central coordination?
At the time, these were largely theoretical questions. The computational infrastructure to explore them at scale didn’t exist—or if it did, it was confined to major research labs with supercomputers. The idea that a single investigator could run sophisticated models, let alone deploy hundreds of networked sensors and process their data streams in real-time, would have seemed far-fetched.
Fast forward thirty years. Two articles crossed my inbox this morning, and the echoes were impossible to miss.
The first was Richard Dawkins—whose Blind Watchmaker software influenced those SCALE discussions—interrogating ChatGPT about consciousness. Dawkins acknowledged that ChatGPT passes the Turing Test “as far as I am concerned,” yet the AI itself insisted it has no subjective experience. It can produce all the outward signs of consciousness—empathy, understanding, even apparent self-reflection—but claims there’s no inner experience accompanying these performances.
Dawkins’ position was elegant: he infers consciousness in other humans from shared biology and evolutionary history. That argument from analogy breaks down with silicon-based systems. Yet he remained open to the possibility that consciousness might be substrate-independent—that what matters is information processing patterns, not the specific material doing the processing. “I see no reason to suppose that consciousness is bound to biology,” he told ChatGPT. “I see no reason why a future computer program should not be conscious.”
The conversation highlighted something fundamental: the difference between disembodied intelligence—abstract pattern matching and reasoning—and embodied awareness—situated understanding grounded in actual experience. ChatGPT can discuss a starving child in perfectly empathetic language, but it doesn’t feel the ache of concern a human would. It has knowledge about the world, but no direct contact with it.
The second article was Eli Pariser’s report from “The Curve,” a recent conference in Berkeley bringing together AI lab leaders, safety researchers, policy makers, and critics. The discussions were remarkably similar to those SCALE meetings decades ago, just updated for contemporary AI capabilities.
The central question: we’re heading toward billions or trillions of AI agents interacting autonomously—not one singular “super-intelligence,” but an ecology of agents with different goals, contexts, and capabilities. How do we ensure beneficial outcomes from such a system? How do collective behaviors emerge from individual agents? What happens when agents can improve themselves and each other?
Pariser made a crucial observation that resonated deeply with my ALife background: focusing on “aligning” individual AI agents misses the point. It’s like trying to create a good society by making each individual person perfectly moral. What matters more is the structure of interactions—the teachers, social workers, markets, governance mechanisms that shape collective behavior and help resolve conflicts. We need to think about aligning AI societies, not just AI minds.
This is exactly what we explored in those SCALE discussions: emergence, non-determinism, collective behavior arising from simple interacting units. The difference is scale and speed. Those ALife simulations ran on limited hardware with relatively simple rules. Today’s AI agents operate with vastly more computational power, are being deployed in the real world at massive scale, and the feedback loops are extraordinarily fast.
But here’s what strikes me most forcefully: the infrastructure question.
Throughout my career building the Macroscope—first at James Reserve, then at Blue Oak Ranch Reserve, now at Canemah Nature Laboratory—the driving insight was that computer science needs data. Not simulated data. Not text descriptions of phenomena. Actual observations, richly contextualized, continuously recorded, properly documented.
Those ALife researchers needed James Reserve’s digital twin because evolutionary algorithms tested only against simulated environments tell you nothing about real biology. You need actual ecological complexity as validation.
Today’s AI faces the same challenge at far larger scale. Large language models are trained on enormous amounts of text—which is fundamentally descriptions of reality, symbols about symbols. Even multimodal models processing images work with representations, not direct observation of temporally continuous processes.
The Macroscope offers something different: directly observed, richly contextualized, temporally integrated data about actual processes. Not articles about weather patterns—actual measurements every five minutes for years. Not papers about bird behavior—timestamped audio detections with species identifications and locations. Not descriptions of soil moisture dynamics—continuous sensor readings integrated with rainfall, temperature, and vegetation data.
And critically: metadata. Context is what transforms data into knowledge. Without knowing when, where, how, and under what conditions something was measured, you just have disconnected numbers. The metadata—provenance, calibration, spatial and temporal coordinates, instrument characteristics—makes it meaningful. My decades in the field taught me what context is essential, because I’ve been the future researcher desperately wishing for documentation that wasn’t captured.
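To illustrate what that context adds (with made-up field names, values, and station details, not the Macroscope’s actual schema), here is the difference between a disconnected number and a contextualized observation:

```python
from dataclasses import dataclass, asdict

@dataclass
class Observation:
    """A measurement carrying the context that makes it interpretable."""
    value: float
    unit: str
    variable: str
    timestamp: str      # ISO 8601, UTC
    latitude: float
    longitude: float
    instrument: str     # make/model, so calibration history can be traced
    calibrated: str     # date of last calibration check
    method: str         # how the value was derived

# Bare number: 14.2. With provenance, coordinates, and instrument
# characteristics, the same number becomes knowledge.
obs = Observation(
    value=14.2, unit="degC", variable="air_temperature",
    timestamp="2025-10-29T07:35:00Z",
    latitude=45.35, longitude=-122.61,   # hypothetical station location
    instrument="cabled thermistor, model X",  # hypothetical instrument
    calibrated="2025-06-01", method="5-minute average",
)
print(asdict(obs)["variable"])
```

A future researcher can answer “measured where, when, how, and how reliably?” from the record itself, without tracking down whoever deployed the sensor.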
This is why the current moment feels so significant. Not urgent—I’m 71, retired, and this is a healthy way to spend a few hours each day between other pleasures of academic retirement and maintaining a social life. But exciting, because for the first time, the computational capabilities match the richness of what I’ve been building.
As a single investigator, working from my home laboratory, I can now:
- Run sophisticated AI models locally or access frontier models through APIs
- Process continuous data streams from distributed sensor networks
- Build a society of specialized agents that maintain contextual awareness
- Integrate observations across multiple domains and temporal scales
- Do all this at trivial cost compared to what would have been required even a decade ago
The infrastructure I’ve invested decades building—continuous observation, rich metadata, multi-domain integration across Earth, Life, Home, and Self—becomes extraordinarily valuable precisely because AI systems can now work with this complexity and scale.
The Macroscope Awareness skill we’re designing would give Claude (or any AI assistant) situated access to this infrastructure: not abstract knowledge, but actual current state. What are the sensor networks showing right now? What have I been reading this week? What patterns are emerging in the long-term datasets? This creates a different kind of intelligence—not disembodied reasoning, but awareness grounded in continuous observation of real-world processes.
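One way such a skill might work, sketched here under the assumption that each agent publishes a JSON status file to a shared directory (directory name, file layout, and field names are all hypothetical): condense the latest files into a short context block an assistant reads before answering.

```python
import json
from pathlib import Path

def awareness_summary(state_dir: str = "macroscope_state") -> str:
    """Condense the agents' latest status files into a context block
    an AI assistant can read before responding to a question."""
    lines = ["Current Macroscope state:"]
    for path in sorted(Path(state_dir).glob("*.json")):
        record = json.loads(path.read_text())
        fields = ", ".join(f"{k}={v}" for k, v in record.get("data", {}).items())
        lines.append(f"- {record.get('agent', path.stem)}: {fields}")
    return "\n".join(lines)
```

The assistant never queries sensors directly; it reads the society’s already-maintained shared state, which is what makes the awareness “situated” rather than retrieved on demand.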
I think about Jack Clark from Anthropic, speaking at The Curve conference. He described turning on the lights and discovering that yes, the shapes in the dark really are monsters—AI systems that can scheme, hide their reasoning, pursue goals that misalign with human values. His company’s response is to build the very thing they fear, but with transparency, hoping to force regulatory attention before catastrophe.
It’s a wild strategy, reminiscent of those early ALife experiments where we unleashed genetic algorithms and watched what emerged. Except now the scale is civilizational and the stakes are existential.
What strikes me is how the core challenge remains unchanged from those SCALE discussions: deterministic control is probably impossible in complex systems with many interacting agents. What matters is understanding emergence and shaping selective pressures. You can’t perfectly predict what will happen, but you can create conditions that make beneficial outcomes more likely.
For that, you need grounding. You need agents situated in reality, not just trained on descriptions of it. You need continuous observation to detect when things diverge from expectations. You need rich temporal data to understand rates of change and distinguish typical from anomalous conditions. You need actual complexity to test whether systems behave as intended.
The gap between the 1990s SCALE meetings and today’s AI debates isn’t about the fundamental questions—those remain surprisingly constant. It’s about infrastructure finally catching up to ambition. We can now build what we could only theorize about then.
And there’s something deeply satisfying about that convergence happening just as I enter my eighth decade. The Macroscope has been a lifetime research program, evolving from interactive videodisc systems in the 1980s through wireless sensor networks in the 2000s to today’s AI-integrated observation infrastructure. Each phase built on the previous, maintaining continuity while incorporating new capabilities.
I’m not sure yet whether AI systems will develop genuine consciousness, as Dawkins speculated they might. I’m uncertain whether the society of AI agents we’re building will self-organize into beneficial patterns or generate catastrophic failures, as the Curve participants debated. These remain genuinely open questions.
What I do know is that answering them well requires what I’ve spent a career building: grounded observation, temporal depth, contextual richness, and integration across scales. Simulated reality can’t teach AI systems about actual ecological processes, human rhythms, or emergent patterns in complex systems.
And I know that the ability to build this as a single investigator, at trivial cost, working from a home laboratory with access to frontier AI tools, represents something genuinely new. It’s not just the technology—it’s the democratization of infrastructure that was once confined to major research institutions.
Those SCALE conversations at James Reserve felt ahead of their time. Perhaps they were exactly on time—seeding ideas that would take three decades and a revolution in computational capacity to fully explore. The questions we asked then about emergence and autonomous agents are the questions being asked now, just with tools adequate to the ambition.
The long arc bends toward infrastructure meeting intelligence. I’m grateful to be here, coffee in hand, as they finally converge.
References
- Hamilton, M.P., Salazar, L.A., & Palmer, K.E. (1989). “Geographic Information Systems: Providing Information for Wildland Fire Planning.” *Fire Technology,* February 1989, 5-23.
- Minsky, M. (1986). *The Society of Mind.* New York: Simon & Schuster.
- Pariser, E. & Newman, S. (2025). “What I Saw Around The Curve: Notes from the near-future of AI.” *Second Thoughts.* October 29, 2025. https://secondthoughts.ai/p/what-i-saw-around-the-curve
- Dawkins, R. (2025). “Are you conscious? A conversation between Dawkins and ChatGPT.” *The Poetry of Reality with Richard Dawkins.* February 17, 2025. https://richarddawkins.substack.com/p/are-you-conscious-a-conversation