In the spring of 2020, field stations across the country locked their gates. The COVID-19 pandemic shut down the educational infrastructure that hundreds of thousands of students depend on each year for direct experience with ecological systems. At the John Inskeep Environmental Learning Center in Oregon City, where I served on the founding technical advisory committee, the seasonal rhythms of student field visits simply stopped. The frogs still called from the wetlands. Nobody was there to hear them.

The response was fast and grassroots. The Organization of Biological Field Stations mobilized fifty field sites across twenty-six states and six countries. Claudia Luke at Sonoma State, Hilary Swain at Archbold, and Kari O’Connell at Oregon State secured an NSF RAPID grant funded through the CARES Act to coordinate the effort. The Virtual Field Project went live in October 2020 with a mandate both urgent and modest: get students into ecosystems they could no longer physically visit, and do it with whatever tools were at hand.

I volunteered the ELC as a participating station and took on the role of field camera operator. Four times a year, aligned with the solstices and equinoxes, I set up an Insta360 camera at two monitoring points in the reserve’s hardwood-conifer forest. Five minutes of 360-degree video at each point, capturing the full sphere of the environment. Winter 2021 through Fall 2022 – eight sessions, sixteen videos, uploaded alongside contributions from stations in Montana, Michigan, South Carolina, Belgium, Costa Rica, and dozens of other locations. The protocol was deliberately simple. Anyone who could operate a consumer camera could contribute.

The project worked. Over sixteen thousand visitors used the portal. Eighty percent of the nearly two thousand students surveyed rated the exercises effective for building observation skills. Faculty from more than a hundred and sixty universities incorporated the materials. The grant ended in October 2022, but many stations continued monitoring, and the video archive persists on YouTube. Nobody anticipated what would happen to those flat panoramic frames a few years later. I certainly didn’t.

I have been building virtual models of ecological environments for most of my career, and each attempt has taught me something about the gap between what I wanted and what the technology could deliver. At the James San Jacinto Mountains Reserve in the late 1980s, I used LaserDisc players to create an interactive system for navigating ecological imagery across scales – satellite to landscape to canopy to bark texture. I called it the Macroscope. The interaction model was right: fluid movement through nested spatial contexts without losing the cognitive thread of where you are. Over the decades that followed, the equipment improved – still video cameras, digital cameras, camera drones, sensor networks, edge computing – but none produced a workflow that was simultaneously cheap enough to distribute, simple enough for non-specialists, and rich enough to generate explorable three-dimensional environments. Last year I spec’d a multi-sensor field kit – stereo depth camera, iPad LiDAR, Insta360, acoustic monitor – that would have run about four or five thousand dollars. It could have captured ecological structure beautifully but would have required complex software development to tie everything together. It might not have scaled. But I was still interested in the proof of concept.

Then Apple changed the equation. Last year they introduced a depth rendering feature in their Photos app. My initial experiments used the built-in depth sensor on their devices, a low-resolution LiDAR scanner that I have enjoyed using to build 3D models of objects. But what proved most interesting was their SHARP framework, which I explored in “Virtual Terrariums: When a Failed Hypothesis Becomes a Better Instrument.”

That January morning session began with a simple question – could SHARP process a 360-degree panorama? – and ended with a failed merge that turned out to be more productive than any success. The six cubemap faces refused to combine into a unified sphere because SHARP predicts depth independently for each view, yet each face produced a stunning, self-contained 3D reconstruction. I started calling them terrariums: bounded volumes of habitat you could rotate and examine from any angle. SHARP, a monocular view synthesis model from Apple’s machine learning research group, takes LiDAR out of the equation entirely and uses Gaussian splats instead. A single photograph – not a stereo pair, not a sequence of overlapping images, one picture – generates over a million three-dimensional Gaussian ellipsoids representing the spatial structure of the scene. Each Gaussian carries a position, an orientation, a scale, a color, and an opacity. Collectively optimized to reproduce the original image when rendered from any angle, they constitute a surprisingly detailed three-dimensional reconstruction. The computation takes about five seconds per view on a consumer laptop. My five-thousand-dollar hardware specification had just been replaced by software.
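To make that parameterization concrete, here is a minimal sketch in Python of a per-splat record. The field layout and the summary helper are illustrative assumptions of mine, not SHARP’s actual export schema.

```python
import numpy as np

# Illustrative layout of one Gaussian splat: the five properties described
# above. Field names and dtypes are assumptions, not SHARP's export format.
splat_dtype = np.dtype([
    ("position", np.float32, 3),   # x, y, z center of the ellipsoid
    ("rotation", np.float32, 4),   # orientation as a unit quaternion
    ("scale",    np.float32, 3),   # ellipsoid radii along its local axes
    ("color",    np.float32, 3),   # RGB
    ("opacity",  np.float32),      # 0..1 weight when rendered
])

def summarize(splats: np.ndarray) -> dict:
    """Quick structural summary of one reconstruction (~1M records per view)."""
    return {
        "count": int(len(splats)),
        "extent": splats["position"].max(axis=0) - splats["position"].min(axis=0),
        "median_scale": float(np.median(splats["scale"])),
        "mean_opacity": float(splats["opacity"].mean()),
    }
```

Under this assumed layout, fourteen 32-bit floats per record, a million splats works out to roughly 56 MB of raw parameters per view.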

I began experimenting with the Virtual Field panoramas. The original cubemap experiment used six faces – the simplest standard geometry for covering a sphere. The ecoSPLAT specification evolved that approach into twenty-five overlapping perspective views, trading the rigid cubemap grid for a denser sampling that captures horizon, canopy, and ground structure with better coverage and overlap. Each view is processed independently through SHARP, yielding its own terrarium. I ran the full archive over a series of evenings: thirty-three stations, four hundred fifty-seven panoramas, eleven thousand four hundred twenty-five individual views, zero errors. The processing ran on my MacBook.
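For anyone who wants to reproduce the slicing step on their own panoramas, the core operation is a standard equirectangular-to-perspective reprojection, sketched below in Python with NumPy. The function is a generic illustration, not the project’s pipeline code, and the specific yaw, pitch, and field-of-view layout of the twenty-five ecoSPLAT views is defined in that specification, not here.

```python
import numpy as np

def equirect_to_perspective(pano, yaw_deg, pitch_deg, fov_deg=90.0, out_size=1024):
    """Sample one perspective view out of an equirectangular panorama.

    pano      : H x W x 3 image array (equirectangular, 2:1 aspect)
    yaw_deg   : rotation about the vertical axis (0 = panorama center)
    pitch_deg : elevation of the view direction (+ up, - down)
    """
    h_pano, w_pano = pano.shape[:2]
    f = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels

    # One ray per output pixel (+x right, +y up, +z forward).
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2.0,
                         np.arange(out_size) - out_size / 2.0)
    rays = np.stack([xs, -ys, np.full_like(xs, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays: pitch about the x axis, then yaw about the y axis.
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    r_pitch = np.array([[1, 0, 0],
                        [0, np.cos(p), np.sin(p)],
                        [0, -np.sin(p), np.cos(p)]])
    r_yaw = np.array([[np.cos(y), 0, np.sin(y)],
                      [0, 1, 0],
                      [-np.sin(y), 0, np.cos(y)]])
    rays = rays @ (r_yaw @ r_pitch).T

    # Ray direction -> longitude/latitude -> panorama pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # -pi .. pi
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))   # -pi/2 .. pi/2
    u = (lon + np.pi) / (2 * np.pi) * (w_pano - 1)
    v = (np.pi / 2 - lat) / np.pi * (h_pano - 1)
    return pano[v.round().astype(int), u.round().astype(int)]
```

A plausible twenty-five-view layout would combine rings of views at several pitches with zenith and nadir shots, each rendered with enough field-of-view overlap that adjacent terrariums share structure.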

The results are genuinely surprising to me, and I say that as someone who has been chasing this problem for forty years. A researcher can now spin a globe in a web browser, select a station, enter a panorama, navigate to a specific view, and explore its three-dimensional reconstruction. A species layer overlays iNaturalist biodiversity observations, connecting structural context to biological presence. The movement from planet to organism and back is continuous.
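To give a sense of how a species layer like this can be populated, the sketch below pulls recent research-grade observations around a station’s coordinates from the public iNaturalist API. It illustrates the data source, not the viewer’s actual implementation; the helper name and defaults are mine.

```python
import requests

def species_near(lat: float, lng: float, radius_km: float = 1.0, limit: int = 50):
    """Fetch recent research-grade iNaturalist observations around a station."""
    resp = requests.get(
        "https://api.inaturalist.org/v1/observations",
        params={
            "lat": lat,
            "lng": lng,
            "radius": radius_km,          # kilometers
            "quality_grade": "research",
            "per_page": limit,
            "order_by": "observed_on",
        },
        timeout=30,
    )
    resp.raise_for_status()
    records = []
    for obs in resp.json().get("results", []):
        taxon = obs.get("taxon") or {}
        records.append({
            "species": taxon.get("name"),
            "common_name": taxon.get("preferred_common_name"),
            "observed_on": obs.get("observed_on"),
            "location": obs.get("location"),   # "lat,lng" string
        })
    return records
```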

The platform is live at canemah.org/projects/ecoSPLAT/viewer/.

Metadata for the stations and their ecosystems will continue to be added over the coming days, but the skeleton is ready to explore.

It is, in functional terms, the Macroscope interaction model I prototyped with LaserDiscs in 1988, except that it runs in a browser and the data is three-dimensional.

But I want to be careful. This is an experiment, not a product. SHARP’s depth estimation is not LiDAR-grade. The reconstructions contain artifacts – atmospheric fill, depth-plane slices at object boundaries, scale ambiguities. I have spent weeks building filtering tools to separate physical surfaces from computational noise, and the problem is not solved. The terrariums are explorable approximations, not survey-quality models, with error characteristics I am still working to quantify.
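To give a flavor of what that filtering involves, here is a deliberately crude sketch that drops near-transparent, far-field, and oversized Gaussians from a single terrarium. The thresholds are placeholders, the record layout follows the illustrative splat structure sketched earlier, and the actual filtering tools are more involved and, as noted, still unfinished.

```python
import numpy as np

def filter_splats(splats, max_range=60.0, min_opacity=0.05, max_scale=2.0):
    """Heuristic cleanup of one terrarium (illustrative thresholds only).

    Assumes the illustrative record layout sketched earlier (position, scale,
    opacity fields) with the camera at the origin:
    - drops near-transparent Gaussians that render as haze ("atmospheric fill")
    - drops Gaussians far beyond a plausible scene radius
    - drops oversized ellipsoids that smear across depth-plane boundaries
    """
    dist = np.linalg.norm(splats["position"], axis=1)
    keep = (
        (splats["opacity"] >= min_opacity)
        & (dist <= max_range)
        & (splats["scale"].max(axis=1) <= max_scale)
    )
    return splats[keep]
```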

What excites me is not the precision but the economics. The entire capture apparatus is a five-hundred-dollar consumer camera. The processing requires no specialized hardware. The skill required is the ability to press a shutter button. This is the same threshold that made iNaturalist transformative – not because phone-camera identification replaced taxonomic expertise, but because it made ecological observation participatory at a scale that generated scientifically useful data despite its imperfections.

The Virtual Field Project already proved that distributed volunteers can produce standardized 360 imagery on a seasonal protocol. That infrastructure exists. The question I am now testing is whether 3D reconstruction can transform that proven collection model into something structurally quantitative – whether the terrarium, which emerged from a pandemic emergency, might scale into a tool for characterizing ecological structure across sites, seasons, and years.

I am calling this concept SCOPE: Science Community Observatory for Participatory Ecology. (It is no coincidence that my Macroscope contains a SCOPE, but that’s another essay to come.) SCOPE has the potential to be a distributed network of contributors generating standardized three-dimensional structural documentation of environments with consumer cameras, processed through automated pipelines, and accessible through multi-scale visualization platforms. The planetary ambition refers both to geographic scope and to the fact that SHARP does not care about biology or atmosphere – it cares about pixels. An equirectangular panorama from a lunar rover runs through exactly the same pipeline as one from an Oregon forest.

Whether this is viable depends on questions I cannot yet answer. Can the reconstructions support meaningful comparison across sites? Can the filtering pipeline separate ecological signal from computational artifact? Can the protocol be simplified enough for genuine citizen science adoption? These are empirical questions that I am actively tinkering with (along with too many other things ;-).

What I know is this: a pandemic forced a network of field stations to build virtual experiences for stranded students, and that emergency infrastructure turned out to contain the seed of something none of us anticipated. The terrarium was not designed as a unit of planetary observation. It was designed to get an ecology student through a semester without a bus to the field station. The fact that it might become something more is a reminder that the most productive scientific tools often emerge not from grand plans but from urgent, practical responses to immediate problems.

The technologies that arrived in the last two years – monocular 3D reconstruction, Gaussian splatting, AI-assisted coding that lets a solo researcher build a browser platform that would have required a team – are what finally made the experiment possible. But the experiment is young, the data is preliminary, and the hardest questions are still ahead. I am not announcing an observatory. I am describing a hypothesis, and testing it in the field, which is where hypotheses belong.