What If Advanced Civilizations Are Only Intelligent For A Short Time? | Unveiled

VOICE OVER: Peter DeGiglio
WRITTEN BY: Dylan Musselman
Is THIS why we haven't found aliens yet? Join us... and find out more!
In this video, Unveiled takes a closer look at one incredible answer to the Fermi Paradox: the Brief Window Hypothesis! This is the idea that advanced civilisations can only STAY advanced for a very short time... and that might be a very big problem for us!
What If Advanced Civilizations Are Only Intelligent For A Short Time?
When calculating the odds of finding intelligent life on other planets, we have the famous Drake Equation. And the very last variable in it is known as "L": the length of time for which a civilization is able to communicate. In other words, "L" asks: what is the lifespan of advanced civilizations? How long can intelligent beings survive in the universe? And, shockingly, the answer might be that it's not very long at all.
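For reference, the equation in its standard form reads:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Here, N is the number of civilizations in our galaxy whose signals we might detect; R* is the rate of star formation; f_p is the fraction of stars with planets; n_e is the average number of potentially habitable planets per star that has planets; f_l, f_i and f_c are the fractions of those planets that go on to develop life, intelligence, and detectable technology, respectively; and L is the length of time such civilizations remain detectable.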
This is Unveiled, and today we're answering the extraordinary question: what if advanced civilizations are only intelligent for a short time?
Life seems rare when we view the universe from Earth, because we've yet to find any other lifeforms anywhere around us. But we only occupy a minuscule portion of the overall universe... and so the rest of it could still be teeming with other beings. The plain odds are still very good that we are not alone.
The astronomer Frank Drake devised the Drake Equation in 1961 to show just how difficult the search for other civilizations would be. The main problem the equation presents is that many of its variables can only be loosely estimated. Some measures we know better than others. The fraction of stars that have planets, for instance, has been calculated to be around 1, meaning that almost all stars host at least one planet. That's easy enough. But, on the other hand, scientists have little (or no) clue about some of the equation's other variables, such as how many of the planets capable of producing life actually do so.
Outcomes for the equation can vary widely, then, but given the massive number of galaxies, stars, and potentially habitable planets out there... it can sometimes point to a universe that's filled to bursting with intelligent lifeforms. Depending on how you apply the Drake Equation, and on the data you input, the odds against us being the only ones ever to evolve have been put as high as 60, 80, even 100 billion to one. And if that's anywhere near right, it's clearly very unlikely that we're the only ones here... so why haven't we been contacted?
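To see how dramatically the inputs change the answer, here's a minimal sketch of the calculation in Python. The input values are hypothetical placeholders chosen to illustrate the swing between optimism and pessimism, not accepted scientific estimates:

```python
# Illustrative Drake Equation calculator. The inputs below are placeholder
# guesses used to show how widely N can swing; they are not measured values.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N: the expected number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Optimistic guesses: life and intelligence arise easily, signals last an eon.
optimistic = drake(r_star=3, f_p=1.0, n_e=0.5, f_l=1.0,
                   f_i=0.5, f_c=0.5, lifetime=1_000_000)

# Pessimistic guesses: intelligence is rare and civilizations are short-lived.
pessimistic = drake(r_star=1, f_p=1.0, n_e=0.1, f_l=0.01,
                    f_i=0.001, f_c=0.1, lifetime=500)

print(f"Optimistic:  N = {optimistic:,.0f}")   # 375,000
print(f"Pessimistic: N = {pessimistic:.5f}")   # 0.00005
```

Same formula, same galaxy... and the estimate moves from hundreds of thousands of civilizations down to effectively none, driven in large part by how long each one survives.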
Fermi's Paradox, or the Great Silence, is the name given to this exact contradiction: the seeming fact that we should see evidence of other civilizations, but we don't. According to Enrico Fermi, any civilization that develops some form of rocket technology wouldn't take long to colonize other parts of its galaxy. Humanity, for instance, is already attempting to reach other planets, despite having landed on the moon for the first time less than a century ago... and not having returned there for more than fifty years since. A few hypothetical answers have been put forward as to why no one has colonized our galaxy yet.
1) There's the possibility that humanity really is the first civilization ever to become intelligent, and that other beings will follow in time. Given how long the universe has been around, however - some 14 billion years - this seems unlikely.
2) Another solution is that faster-than-light travel really is impossible, and so other civilizations are simply too far away to communicate with us, no matter how advanced they get. On the face of it, this seems more plausible.
3) There's the Zoo Hypothesis: the idea that aliens do exist and actually know that we're here, but choose not to make contact, simply observing us as though we were zoo animals.
4) And lastly, there's the argument that intelligent life has sprung up many times before - perhaps in much the same way that it has for us - but that it inevitably dies out at a certain point. A point that perhaps arrives quite quickly.
This is known as the Brief Window Hypothesis. Technically, it focuses on the length of time that civilizations continue to broadcast signals - the length of time that they continue to communicate, or otherwise impact the universe in ways that reveal they are there. It's closely related to theories of the Great Filter, which asks: is there an obstacle (a filter) of some sort on a civilization's path to (and through) advancement that ultimately prevents it from lasting? From our perspective as humankind, if true, this could be something that we've yet to encounter or imagine... or, concerningly, it could be something we've already learned about.
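The hypothesis is easy to express in numbers. Under a rough steady-state assumption, the first six Drake factors collapse into a single galaxy-wide "birth rate" of communicative civilizations, and N becomes roughly that rate multiplied by L. The birth rate below is a made-up figure, used purely for illustration:

```python
# Brief Window sketch: N ~ (birth rate of communicative civilizations) * L.
# The birth rate here is a hypothetical figure, not a measured one.

birth_rate = 0.05  # assume one new communicative civilization every 20 years

for window_years in (100, 1_000, 1_000_000):
    n = birth_rate * window_years  # civilizations broadcasting at any one time
    print(f"L = {window_years:>9,} years -> N = {n:,.0f}")

# L =       100 years -> N = 5
# L =     1,000 years -> N = 50
# L = 1,000,000 years -> N = 50,000
```

With a window of just a century, only a handful of civilizations would be broadcasting at any one moment, scattered across a galaxy some 100,000 light years wide... which would go a long way towards explaining the silence.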
The Future of Humanity Institute (the FHI) puts it another way. One of its models theorizes the existence of so-called "black ball" technologies, with a black ball technology being something that, once invented, spells the end for humanity. In one version of events, it could be the simplification of powerful weapons... i.e., if nuclear bombs were easy to make and anyone could use them, then we likely wouldn't survive for very much longer, because all it would take is one person to destroy everything.
However, purpose-built destructive technology doesn't have to be our undoing. Thinking again of humanity as a particular example of a civilization, it could be a less immediately intentional scenario, such as with us and climate change. In our own story, it could be that fossil fuels were once deemed essential by our intelligent selves for our developing civilization... but also that they'll inevitably cause our downfall through their planet-changing effects. Fossil fuels could still represent the FHI's "black ball" technology. Or it could yet be triggered by something as simple as overpopulation. If other life is like life on Earth, then we might assume that population growth naturally follows as a species learns to better survive in its environment - as it has done here. But how a civilization deals with that growth is key. If population outpaces technology, then resources could become strained, which could lead to war with powerful weapons, and another road to ruin.

Or, failing all of that, the "black ball" could be something we would never see coming... something like an ultra-advanced civilization that watches over the universe and simply chooses to switch it off. Among other, equally theoretical, sci-fi possibilities. Any of these could close the "brief window", as per the hypothesis.
But there is another proposed outcome, where the filter (the closing of the window) doesn't have to mean death specifically. In 1961, the astronomer Sebastian von Hoerner, an associate of Frank Drake's, wrote a paper arguing that civilizations fail for one of two reasons: one was, yes, destruction... but the other was mental or physical degeneration. A civilization's ultimate failure is still the inevitable end result, no matter which path it takes... but, here, one trigger is that intelligent species effectively grow too comfortable.
For us, we might understand this to mean that the development of bigger and better machines could lead to a time when human intervention is no longer needed - when humans no longer engage with life, the planet, or reality - and so we wither away, or are forced into extinction, as a result. This is where modern minds typically consider the direction of artificial intelligence. If AI decision-making is, in general, more efficient and effective than ours, then there's some argument that it would be foolish to continue depending on our own (often flawed) reasoning. But, from there, it's perhaps a short trip before AI decides things like when to go to war, which economic policies to back, and what research to fund. In such a world, humans could quickly lose countless complex cognitive skills. Researchers have referred to this as "dependence lock-in": social, survival, and intelligence skills declining as a result of a dependence on machines. It's not quite dying, but it is wasting away... and it's potentially another way that the Brief Window Hypothesis could play out.
Consider, too, that all of the above examples only really apply to us - any other civilization could develop in any other way - and there are endless pitfalls into which a society could fall. And, again, the destruction doesn't necessarily have to be the fault of the civilization being destroyed. In a universe as vast and unknowable as this one, it could be plain nature that eventually (always) wins out. Random asteroid strikes, gamma-ray bursts, and even rogue black holes all have the potential to end any (and all) civilizations, and quickly. It's a somewhat unsettling truth, but it's also something that any growing society anywhere in the cosmos would have to contend with.
In discussing "black ball" technologies, the FHI's Nick Bostrom calls this the Vulnerable World Hypothesis. And perhaps it doesn't only apply to us, but to every advanced group that could ever form. It's certainly one answer to the Fermi Paradox, and a key concept when wrestling with the Drake Equation... because that's why advanced civilizations might only be intelligent for a short time.
