WatchMojo

VOICE OVER: Callum Janes
Can you really KNOW TOO MUCH?? Join us... and find out!

In this video, Unveiled takes a closer look at Information Hazards - the things that, really, NO ONE should know about! The terrifying concept was devised by the noted Swedish philosopher, Nick Bostrom, and it could have HUGE implications for the future of our species!

<h4>These Are The Things That NO ONE Should Know</h4>


 


Humankind has an apparently insatiable thirst for knowledge. There are things we know now that those living one hundred years ago, even just ten years ago, would not have even begun to fathom… and, for the most part, that’s billed as a good thing. As progress. But, despite all that this quest for knowledge has led us to, it is still possible to know too much. At least, that’s the thinking behind one especially intriguing (and a little bit frightening) concept put forward by Nick Bostrom.


 


This is Unveiled, and today we’re taking a closer look at the things that no one should know.


 


Nick Bostrom. If you’re thinking you’ve heard that name before, it’s because you probably have. He’s a Swedish philosopher, a professor at the University of Oxford, and the founder of the Future of Humanity Institute - a research initiative at Oxford aiming to study and provide for our species in the long term. Bostrom is probably most famous for his 2003 paper, “Are You Living in a Computer Simulation?”, out of which the Simulation Hypothesis was born. However, alongside the breakdown of reality, he’s also given a lot of thought to existential risk. He co-edited the 2008 book “Global Catastrophic Risks” with the Serbian philosopher Milan M. Ćirković, and in 2018 he formulated the “Vulnerable World Hypothesis”, releasing another in-depth paper, this time focussing on what he termed black ball technologies - the likes of which could spell inescapable doom for humanity. For example, one potential black ball technology would be easy-to-make nuclear weapons.


 


Back in 2011, however, Bostrom quietly drew the world’s attention to something that he described as being subtler than direct physical threats, and, as a consequence, easily overlooked. He was talking about information hazards.


 


His paper, published in 2011, was titled “Information Hazards: A Typology of Potential Harms From Knowledge”. Right away, he describes such hazards as being “risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm”. In short, the idea is that not all information is good… and, in fact, some of it could be extremely bad. Particularly to our prospects of long-term survival. So, let’s get into it.


 


Bostrom’s reference to true information is important. Here, information hazards aren’t ill-conceived opinions, they’re not ideas or theories specifically designed to do bad, and they aren’t outright lies that are peddled as the truth. Information hazards are true, which in itself is part of the reason why they’re so dangerous. For example, the information needed to build a nuclear bomb is true, it does work, and we know that after decades of history developing them. Toward the end of his paper, Bostrom himself quotes J. Robert Oppenheimer - the scientific director of the Manhattan Project - who, after the bombings of Hiroshima and Nagasaki, is said to have remarked, “the physicists [i.e., those who had worked on the bomb] have known sin; and this is a knowledge which they cannot lose”. That particular information hazard (how to build a nuclear bomb capable of killing tens of thousands of people in seconds) is certainly out of the bag, although thankfully it isn’t yet common knowledge - with common sense seemingly being that it never should be. But how can we be certain that that will always be the case? If the knowledge is out there - which it is - and the human pursuit of knowledge is unending, then it’s quite easy to imagine a day when instructions for the bomb are easily procurable. And that’s what happens when an information hazard gets out of hand.


 


It figures, then, that there could be many more information hazards like this, lying as though dormant in all the collective data and knowledge held by human civilization up until this point. It also figures that, as our species’ knowledge grows and grows in the future, so too will the number and potential proliferation of as-yet-unknown information hazards. For example, Bostrom draws some attention to DNA sequencing and its potential, in the hands of an ill-meaning aggressor, to create and spread viruses. He also touches upon the advent of artificial intelligence, and the possibilities that could come about wherein it gains and abuses power against us. Comfortably more than a decade after Bostrom wrote his paper, we’re seemingly beginning to see some of that existential concern come to the fore, with campaigns around the use and legislation of gene therapy, as well as high-profile petitions (signed by countless industry leaders) calling for a pause (or even a complete shutdown) in the development of AI. In these cases, the information hazards, it would seem, are now coming over the hill.


 


An infinitely lighter and perhaps more easily understood example of this phenomenon at play is something that forever bothers cinema-goers: movie spoilers. When you go to see a film, you usually don’t want to know the ending or the twist before you’ve sat down to watch it. And yet, sometimes spoilers are almost impossible to avoid. Social media is flooded with reviews from pre-screenings before general release; behind-the-scenes leaked images reveal something key to the story; or, if you’re watching a classic movie for the first time, then just time and pop culture itself has served to ruin all the best scenes for you, so that the viewing experience is more like box-ticking than any kind of revelation. If a movie’s ever been spoiled for you, then you could be said to have fallen victim to a low-level information hazard.


 


There are other, generally less globally threatening examples, too, such as what Bostrom terms “Intellectual Property Hazards” - wherein companies lose their edge when their secrets become known. What Bostrom calls “Knowing-too-much Hazards” also aren’t necessarily dangerous on a global scale, although they certainly could be for an individual. For instance, say you unwittingly stumbled across a top secret location, kept hidden by some kind of nefarious force. You found the location completely by accident, but no matter - because now you know, and are therefore imprisoned or killed. Arguably, there are some lighter examples of this particular categorization, too, such as if you were to discover that your favorite dessert was actually made using some kind of unpalatable ingredient - it would taste the same as it always had done, but you’d probably never eat it again, because you know too much.


 


Really, though, the truly existential information hazards are probably the most troubling of all. Imagine if just one person alive today knew of a way to definitively and forever kill off humanity. How fragile would our species and civilization suddenly become? All it would take would be for that one individual to reveal all (or part) of what they know to someone else, and so would begin a chain reaction that could ultimately lead to extinction. Like a top secret recipe that gets accidentally published online, there’d be no stopping that information once it’s out. The recipe isn’t secret anymore, humanity isn’t safe anymore, and all due to the same (seemingly inevitable) mechanism: information hazards.


 


So, how do we combat bad knowledge? Is it actually even possible to halt an information hazard? And is it already being done in society today? These are big and ongoing questions that could have a profound impact on our future. In simplest terms, Bostrom’s ideas might be seen as justification for censorship… although, actually, it’s not quite so simple as that. If banned books teach us anything, it’s that banned titles are usually more widely sought and read as a result. By trying to limit the information - for whatever reason - those behind banning a book often achieve the opposite, drawing more attention to it and thereby getting it seen by more people. More broadly, though, censorship fails because it’s more often implemented with a certain agenda in mind… and that agenda, in itself, is rarely the truth. With his Information Hazards, Bostrom is again dealing with “true information that may cause harm”. Censorship is neither here nor there… and, in fact, there could be some argument that the concept of censorship is itself an information hazard. Because, how different would the world be if no one ever tried to hide information purely for their own personal gain or based on their own personal view of the world? Perhaps there’d be more trust and fairness, and less concern over true information hazards to begin with.


 


When you get into it, Information Hazards are a seemingly straightforward idea that actually leads to all manner of wider issues. But, what do you think? Air your thoughts in the comments, and if there’s anything that you think should qualify as an information hazard… then let us know! Because, for now, those are the things that no one should know.
