4 Essential Theories from The Future of Humanity Institute | Unveiled
VOICE OVER: Peter DeGiglio
WRITTEN BY: Dylan Musselman
What does the future have in store for us? Join us... and find out more!
The Future of Humanity Institute is based at Oxford University and headed by the famed philosopher Nick Bostrom. In this video, Unveiled takes a closer look at the FHI, and at some of the most interesting and important stories that have emerged from it. Featuring the Vulnerable World Hypothesis, the Aestivation Hypothesis, the concept of AI Canaries, and more!
4 Essential Theories from The Future of Humanity Institute
Life on Earth. It’s incredible, when you think about it… but it has also endured immense hardship to reach this point. In fact, it’s estimated that more than ninety-nine percent of all the species that have ever existed on our planet are now extinct. So, what does that say about the future of our species? Luckily, there’s at least one group that’s looking for answers.
This is Unveiled and today we’re exploring four of the most essential theories from the Future of Humanity Institute.
The Future of Humanity Institute (or FHI) is based at the University of Oxford. It was founded in 2005 by the well-known philosopher Nick Bostrom, with the broad goal of improving humanity’s long-term prospects and researching ways to ensure its survival. The FHI is therefore composed of academics who specialise in many different fields, including mathematics, economics, philosophy, ethics, computer science, and more. Research into potentially dangerous events - including the rise of artificial intelligence and of biological threats - is high on their agenda. As such, many of the studies coming out of the Institute focus on seemingly outlandish scenarios, but they’re ones that could feasibly threaten humanity in the years to come. Today, we’re taking a closer look at four of the most eye-catching proposals that the FHI has so far put forward.
The first was proposed by the FHI head Nick Bostrom, and is known as the Vulnerable World Hypothesis. Broadly, it posits that while technology has allowed humanity to flourish, it could also lead to some devastating scenarios. Really, the Vulnerable World Hypothesis has a lot to do with luck. In a 2019 paper, Bostrom compares our technological accomplishments (past, present, and future) to randomly pulling balls from a giant urn. Until now, we’ve mostly drawn what Bostrom refers to as white balls: discoveries that have had a positive impact. Examples include the understanding of gravity, of star formation, and of space travel… all of which have broadened our horizons and led to more and more progress being made. We’ve also drawn some grey balls, according to Bostrom, which carry various levels of risk… but, luckily, we’re yet to pull a black ball. Bostrom explains that a black ball would be a discovery so dangerous that, by default, it spells the end of humanity.
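To make the urn metaphor a little more concrete, here’s a minimal Python sketch of the idea. The probabilities are purely illustrative assumptions of ours (Bostrom assigns no such numbers); the point is simply that even a tiny per-draw chance of a black ball adds up over enough draws.

```python
import random

# A toy simulation of Bostrom's urn metaphor. The weights below are
# illustrative assumptions, not figures from the paper: most discoveries
# are beneficial "white" balls, some are risky "grey" balls, and a black
# ball - a civilisation-ending technology - is vanishingly rare.
BALL_WEIGHTS = {"white": 0.90, "grey": 0.0999, "black": 0.0001}

def draw_from_urn(num_draws, seed=None):
    """Draw discoveries until a black ball appears or the draws run out."""
    rng = random.Random(seed)
    colours = list(BALL_WEIGHTS)
    weights = list(BALL_WEIGHTS.values())
    for i in range(1, num_draws + 1):
        if rng.choices(colours, weights=weights, k=1)[0] == "black":
            # The key feature of the hypothesis: one black ball is enough,
            # and it can never be put back in the urn.
            return f"Black ball on draw {i}: game over."
    return f"{num_draws} draws made; no black ball pulled (so far)."

print(draw_from_urn(10_000, seed=42))
```

With these assumed odds, the chance of making 10,000 draws without ever pulling a black ball works out at roughly (1 − 0.0001)^10,000, or about 37 percent… so, as discoveries accumulate, survival quietly becomes the exception rather than the rule.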
Immediately, we can see how this could potentially have been (and could still potentially be) the development of atomic bombs… but the fact that we’re here today proves that we’ve so far avoided ending humankind via nuclear war. Bostrom argues that a main reason for this, however, is that we’re lucky: nuclear bombs are difficult to make, and therefore quite easy to track. But what could happen were a technology ever to be invented that’s both massively destructive and easy to make? Bostrom calls this the Type-1 Vulnerability. Colloquially, it’s sometimes referred to as an Easy Nuke, although really it’s any hypothetical weapon of mass destruction that can be built using common items and knowledge. If or when that arrives, Bostrom argues that humanity could be in trouble… as the vulnerable world of his hypothesis comes true. He doesn’t go so far as to predict that it will happen, though, so there might yet be some hope that the future of our species will play out differently.
Today’s second FHI theory doesn’t exactly re-instil optimism, though. While the Vulnerable World Hypothesis envisions humanity’s destruction happening with a bang, another Nick Bostrom paper - this time titled “The Future of Human Evolution”, first published in 2004 (and republished in 2009) - proposes ways that life could instead unfold with a whimper.
It’s artificial intelligence that’s the major threat examined here, with one idea being that AI could one day lead to a loss of human consciousness. Bostrom describes how AI could become increasingly involved in how we live our lives, and how that shift has the potential to become a major issue. In life as we know it, we have to think and solve problems… but, in an artificial world controlled by AI, we may no longer need to do either. Because why make our own decisions, or learn new information for ourselves, when AI has all the answers?
Some might argue that we’ve already seen the beginning of this, what with our growing reliance on internet search engines, sat-nav systems, content algorithms, and so on. But Bostrom’s paper suggests that, ultimately, we could be heading for a full breakdown of conscious thought… that life could be stripped of all the emotions and impulses that make it enjoyable and worth living. And, even if it doesn’t go quite that far, there are various concerns among academics that AI could one day become a major driver of inequality - a world in which artificial upgrades are only available to those who can afford them. If this happens, the wealthiest humans could also become the smartest, strongest, and longest-lived, all thanks to their upgrades. Those upgraded people could still succumb to a wider loss of consciousness thanks to their AI, but in the meantime the inequality could drive the non-upgraded to extinction… or, at least, to extreme adversity.
It isn’t all doom and gloom, though, as the FHI has also suggested possible ways to prevent, or at least become aware of, potentially deadly AI before it can strike. The Institute advocates the use of artificial canaries: today’s third theory (or, in this case, concept). Referring back to the real-world canaries of modern history (which acted as early warning signs for carbon monoxide poisoning in coal mines), the idea is that these artificial canaries should also act as warning flags, alerting us when computers might be getting too advanced.
A major goal is to prevent dangerous transformative AI from developing - AI advancements that are hard to undo once they’ve been invented - such as easy algorithms for voting manipulation, which can undermine democracy, or HLMI (High-Level Machine Intelligence), the point at which computers can think on a par with an adult human. These developments potentially take control away from humans and leave it with the AI itself. One 2021 paper, led by the FHI’s Carla Zoe Cremer, calls for humans to no longer act as bystanders in the AI revolution, though… emphasising the importance of democratising AI development, to ensure that the human population has a fair chance to decide what we do and don’t want from AI tech. And here’s where the canaries come in.
Artificial canaries double as key milestones in the development of any given AI technology… representing moments when AI developers should be made to stop, democratically assess, and perhaps halt or alter their product for the betterment of human society. One broad example might be if (or when) an AI is made that can think abstractly or imaginatively. This would be a major change from the traditionally narrow AI we’ve seen so far, and therefore a major milestone. So it becomes an artificial canary: an early warning sign of an AI moment that could prove impossible to reverse. The FHI generally recommends a more democratic approach to AI development to maintain this level of human control, rather than allowing certain tech companies to have all the power… but it remains to be seen whether society will head in this direction.
Finally, though, and to end on a slightly lighter note, not every FHI theory centres directly on the end of humankind. The Aestivation Hypothesis is, in one sense, a bid to explain that much-debated astronomical head-scratcher: the Fermi Paradox. It was proposed by the FHI’s Anders Sandberg and colleagues in 2017… and, simply put, it theorises that there could be alien life out there, but that it’s aestivating - a process like hibernation, only undertaken to sleep through warmer periods rather than colder ones. According to the FHI, aestivation could be a favourable strategy for aliens, too, because studies show that the universe is gradually cooling down. This is important because computation becomes more energy-efficient at lower temperatures, with the FHI calculating that it could eventually be up to 10 to the 30th power times more efficient. So, it would make a lot of sense for an alien civilisation to gather resources and then aestivate… until the universe becomes cool enough for its goals.
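To see where a number that large could come from, here’s a rough back-of-the-envelope reconstruction using the Landauer limit - the thermodynamic floor on the energy needed to erase one bit of information. The far-future temperature below is an illustrative assumption of ours, not a figure taken from the paper.

```latex
% Landauer limit: the minimum energy to erase one bit at temperature T,
% where k_B is Boltzmann's constant.
E_{\min}(T) = k_B T \ln 2

% Illustrative ratio (temperatures assumed for the sake of the sketch):
% today's ~3 K cosmic background versus a hypothetical far-future
% background of ~10^{-30} K.
\frac{E_{\min}(3\,\mathrm{K})}{E_{\min}(10^{-30}\,\mathrm{K})}
  = \frac{3}{10^{-30}} = 3 \times 10^{30}
```

Because the minimum energy cost per operation scales linearly with temperature, a fixed energy reserve simply buys more computation in a colder universe… which is the trade-off the Aestivation Hypothesis hinges on.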
Significantly, it’s an idea that might also be applied to the future of humanity. Perhaps, after all, our best bet is to find a way to sleep through until the universe gets cold. It would be easy to view the Aestivation Hypothesis as just another ominous prediction to come out of the FHI - one wherein humans are, this time, oblivious to an alien threat that’s lying in wait, ready to finish us off. But really, we still don’t know why we appear to be so alone, and this theory could not only explain it… but also encourage us to follow suit.
For now, some FHI ideas may seem quite far-fetched, but all of them explore what really could happen to our species in years to come… teaching us that it’s better to be prepared for the unexpected than to be left scrambling when it happens. And those are four of the most essential theories from the Future of Humanity Institute.