We've brought ourselves up to the present day in the evolution of the Earth, life, and human civilization. My estimates for the various filters we've passed through indicate that there ought to be around fifty other civilizations in the Milky Way.
The principle of mediocrity suggests that a good half of those fifty should be older than our own by at least an order of magnitude, if not several. Which raises the Fermi Paradox question again: Where Are They? Why don't we see any sign of ancient supertech civilizations enveloping stars with Dyson Spheres, building artificial black holes, dismantling stars, and doing other cool stuff like that? Why don't we detect their interstellar laser messages? Why don't we see their starship engines? Why do we exist at all when they might have colonized Earth and replaced our vertebrate ancestors with alien organisms?
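To see how conservative that "order of magnitude" figure is, here's a toy calculation. It assumes civilizations' birth dates are spread uniformly over the last five billion years, and that ours is about ten thousand years old as a technological species; both numbers are illustrative assumptions on my part, not estimates from this essay.

```python
import random
import statistics

# Toy model: assume the ~50 other civilizations' birth dates are spread
# uniformly over the last 5 billion years, and that ours is ~10,000 years
# old as a technological species. (Both figures are illustrative
# assumptions, not estimates from the essay.)
GALACTIC_WINDOW_YEARS = 5e9
OUR_AGE_YEARS = 1e4
N_CIVS = 50

random.seed(42)
ages = [random.uniform(0, GALACTIC_WINDOW_YEARS) for _ in range(N_CIVS)]

older_by_10x = sum(1 for age in ages if age >= 10 * OUR_AGE_YEARS)
median_age = statistics.median(ages)

print(f"Civilizations at least 10x our age: {older_by_10x} of {N_CIVS}")
print(f"Median age: {median_age:.2e} years (~{median_age / OUR_AGE_YEARS:,.0f}x ours)")
```

Under these assumptions essentially all fifty draws come out older than us, and the median one is hundreds of thousands of times our age, so an order of magnitude is, if anything, a very low bar.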
All of which raises the worrying possibility that there are still some Great (or at least Great-ish) Filters in our future — filters which evidently could and did thwart the growth of beings as rich and clever as ourselves. Filters which may have ended civilizations bigger and wiser than our own. Scary stuff.
One's thoughts immediately stray to big disasters which might somehow wipe out a technological civilization. That's what I'm going to discuss today: disaster filters.
The trouble with disasters is that they're actually kind of wimpy, at least when you're talking about intelligent tool-using beings with large social structures who live all over the planet.
Consider everyone's favorite civilization-ender: nuclear war. Long, long ago, back in the glorious 1980s, I sat down to create a postapocalyptic roleplaying setting based on real-life North America. I knew how many bombs the Russians had, and various nuclear-disarmament groups published helpful lists of potential targets in the United States. So just black out a twenty-mile radius around each of those targets and presto! There's your Thundarr the Barbarian world.
Except for two problems.
First, the areas which weren't blacked out still included a heck of a lot of towns and small cities, plenty of farmland, forests, oil fields, coal mines, and other resources. Plenty of people, too. They could communicate, organize, and rebuild. Sure, there would likely be bandit gangs and some regional warlords, at least at first, but it didn't seem very likely that civilization would vanish — or even regress much. My postapocalypse would look more like the 1930s than the 1980s, but it wouldn't have illiterate savages with MTV hairstyles gazing in superstitious awe at the ruins of Disney World.
The second problem arose when I remembered that the world exists outside the borders of the United States. Even if the whole Warsaw Pact plus Red China unloaded everything they could throw at the US, NATO, France, Australia, and Japan . . . what about Brazil? What about Mexico? What about Thailand or India or Ethiopia or Nigeria or Zaire?
The notion of "nuclear war blows up the world" seems to incorporate a lot of not-so-hidden racist assumptions about the inhabitants of the warmer parts of the planet. All those countries I listed above had burgeoning industries even in the 1980s.
Even the much-feared "nuclear winter" which would leverage the damage of a nuclear war into a planetary ice age wouldn't be enough to wipe out civilization. How do I know that? Simple: we've done ice ages already. Our ancestors rode out tens of thousands of years of glaciation equipped with tools made of wood and stone, with only wood fires and animal furs to fight the cold. They not only survived, they expanded across the globe during that period. An ice age made humanity.
Much the same objections apply to giant asteroid impact as a civilization-ender. Right now we're just on the edge of being able to detect and deflect an object headed for Earth. But even if we fail, and an object like the Chicxulub impactor — the asteroid that killed the dinosaurs — hits Earth, it probably wouldn't wipe out humanity.
Sure, billions would die. Coastal areas would likely be ravaged by tsunamis. Another ice age might follow. But . . . Zaire would be in fine shape. Paraguay would be unscathed. The American Great Plains and Ukraine and western China and Zambia and Bolivia and Chile would come through intact, all able to cooperate and rebuild.
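For a sense of the energies involved, here's a back-of-envelope sketch of such an impactor's kinetic energy. The diameter, density, and speed below are typical published ballpark figures for Chicxulub, assumed here for illustration rather than taken from anything above.

```python
import math

# Back-of-envelope kinetic energy of a Chicxulub-scale impactor.
# Assumed ballpark figures: ~10 km diameter, rocky density ~3,000 kg/m^3,
# impact speed ~20 km/s. (Typical published values, not from the essay.)
DIAMETER_M = 10_000
DENSITY_KG_M3 = 3_000
SPEED_M_S = 20_000
MEGATON_J = 4.184e15  # joules per megaton of TNT

radius = DIAMETER_M / 2
mass = DENSITY_KG_M3 * (4 / 3) * math.pi * radius**3  # mass of a rocky sphere
energy_j = 0.5 * mass * SPEED_M_S**2                  # kinetic energy

print(f"Impact energy: {energy_j:.1e} J "
      f"(~{energy_j / MEGATON_J / 1e6:.0f} million megatons of TNT)")
```

Roughly a hundred million megatons is a staggering figure, but it arrives at one point on a very large planet; the worldwide effects are climatic rather than a global fireball, which is why distance from the impact matters so much.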
Humans have been through planetary disasters before. Such catastrophes would certainly harm our civilization and change it, but I don't think they would destroy it.
What about plagues? That was the original question which started this epic. Well, the worst plague on record, the Black Death, may have killed as many as one-fourth of the humans alive at the time. Civilization barely noticed. Some borders changed and there were economic shifts, but no knowledge was lost. One would not call the world of A.D. 1400 "less civilized" than the world of A.D. 1200, before the plague arrived.
In the modern day, we are right now living through a global crisis resulting from a disease about a thousandth as deadly as the Black Death. It's a crisis for us because we've gotten so good at fighting and eradicating diseases that any fatal virus is now unusual and alarming. Yet even the current crisis is temporary: we're arguing about what to do until a vaccine is developed. Not "if" but "when."
Again, we've survived much worse diseases while relying on medical technology that was either utterly useless or actively harmful. With scientific medicine, the question is simply how long it takes to stop an epidemic. Like ice ages, we've done plagues.
Okay, but those are natural viruses. What about a man-made menace? What if some paranoid regime or mad genius builds a virus that is easily transmissible, incurable, and utterly lethal? And what if it gets released in every airport in the world simultaneously?
Even a hypothetical superbug would leave survivors. And humanity has been through some severe population bottlenecks before — it has been suggested that after the Toba supervolcano eruption 75,000 years ago, the human population was reduced to fewer than 10,000 individuals. Other studies have posited that earlier in human history the population could have been as low as just a couple of thousand. We got through those crises, even without the resources of an entire global civilization sitting idle around us.
So, again, disease might simply slow down human population growth and technological progress for a few centuries, but not make us extinct.
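A quick sanity check on that timescale, assuming simple exponential regrowth and a deliberately grim survivor fraction (both numbers are my illustrative assumptions, not anything established above):

```python
import math

# Rough recovery timescale after a severe die-off, assuming simple
# exponential regrowth: N(t) = N0 * exp(r * t). The survivor fraction
# and the growth rates are illustrative assumptions, not the essay's.
SURVIVOR_FRACTION = 0.01  # suppose only 1% of humanity survives

for annual_growth in (0.01, 0.02):  # 1% and 2% per year
    years = math.log(1 / SURVIVOR_FRACTION) / annual_growth
    print(f"At {annual_growth:.0%}/yr growth, the population is back "
          f"to its old level in ~{years:.0f} years")
```

Even starting from one percent of humanity, recovery at historically attainable growth rates comes out measured in centuries, not millennia.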
A more high-tech version of the plague filter is the "gray goo apocalypse" — the notion that some super-advanced form of self-replicating nanotechnology would get out of control and consume everything on the Earth.
There are two problems with this notion. The first is that Earth has been covered by voracious self-replicating machines for about four billion years already: any cell-sized nanobots escaping into the wild would likely fall victim to feral microorganisms. The second is that a nanotech plague sophisticated enough to overcome living things would essentially be a life form itself. See plagues, above, for how we could cope with that.
Finally there's the macroscopic literalized metaphor of technology out of control: the Robot Apocalypse. This is the fear that if we can someday create artificial intelligences which are as smart as we are, or smarter, then they would exterminate and replace us.
That's a plausible fear, but it's utterly irrelevant to the Fermi Paradox. It just means that our world would be home to a technological civilization of digital intelligences rather than biological ones. Getting wiped out by robots is rough on your species, but not your civilization.
So I honestly can't think of any disaster which might wipe out humanity or permanently end our technology. Now, admittedly, the Great Filter concept includes the assumption that the terrible filter in our future might well be impossible to imagine until it's too late — since otherwise presumably some civilization might have avoided it.
Next time I'll discuss the one possible future Great Filter which does keep me up at nights.