What If Von Neumann Machines Already Exist? | Unveiled

VOICE OVER: Peter DeGiglio
WRITTEN BY: Caitlin Johnson
Is humanity on the verge of destruction? Have we already reached the technological singularity? In this video, we look at von Neumann machines... self-replicating AI which, according to some predictions, could be about to take over the universe! But, are von Neumann machines really that bad... or are they misunderstood masterpieces of design? To find out, we need to journey all across time and space...
What If Von Neumann Machines Already Exist?
As we march through the twenty-first century, fear of the technological singularity – the point at which tech overtakes humans – continues to grow. But is there really reason to panic? When will all of this tech become too much for us to handle?
This is Unveiled, and today we’re answering the extraordinary question: what if von Neumann machines already exist?
The twentieth-century mathematician John von Neumann was one of the most celebrated thinkers of his day, and he remains an influential figure for science in the modern world. He’s arguably best known for his concept of a “universal constructor”, a self-replicating machine described in the posthumously published 1966 book “Theory of Self-Reproducing Automata”. Von Neumann was interested in whether machines could ever become so complex that they could grow, evolve, and adapt over long periods of time, as living organisms do. For him, the potential for machines to reproduce themselves was key.
But von Neumann never actually built his self-replicator. During his lifetime (and for decades afterwards) it existed only in the realm of automata theory, an area of research that examines hypothetical computers and devices without taking into account what would be needed to actually build them, such as overcoming hardware limitations or sourcing the required resources. To this day, we’ve yet to see a genuine, physical, working von Neumann machine… so the hardware is still under our control. But the von Neumann design has been implemented many times in software, as cellular automata running on a computer and replicating patterns of virtual cells. Which means that we do now have code writing code. We’re seemingly at a midway point, then. The giant and dangerous self-replicating robots seen in so many science fiction movies don’t exist just yet… but the foundations to build them certainly do. As does much of the peripheral technology, including, for example, 3D printing.
As a result, the doomsday predictions have mounted up and up in recent years. According to some, we’re currently standing on the threshold of a frightening new world, ready to walk or fall or be pushed through the door and toward our own demise. And the idea of AI that can spawn itself does pose plenty of problems, not least: what will all the humans do when we’re not needed anymore? If AI can build itself, will it ever stop building itself? And, finally, if self-replicating robots are seen as an inevitable part of progress, have we already created the machines that will lead to our downfall?
With so many philosophical concerns, it’s perhaps surprising that we’re pushing forward with this technology at all. But we are, and mostly because of what it could mean for space exploration. All of the cities and industries and civilizations on Earth have been built by millions of people over long periods of time, but it doesn’t take a rocket scientist to realise that we can’t construct cities in space in the same way. For one, where would the builders and architects live while they were building the city?
And here’s where self-replicating machines come in. They don’t need to eat or breathe or live in any of the same ways a human does, so the risk involved disappears. Then, with a purpose-built construction AI that can also build another one of itself, we’d only need to launch one single payload to, say, Mars, and let it do all the work from there. In theory, we’d have all the fully automated, 3D-printed infrastructure in place before we even began to think about risking human lives on crewed missions. A potentially perfect scenario. So perfect, in fact, that plans have been formulated to transform the most uninhabitable parts of Earth, as well, like the deserts. For self-replicating robots, even the heart of the Sahara is an accessible and promising place to be.
Still, the main concern science-fiction writers tend to have with self-replicating AI is whether it would reproduce with reckless abandon. If it became smart enough to duplicate itself - or even to build different types of machine without human input - but saw no need to limit the production line, then the world (the universe!) could very soon be full of them! There are some things to keep in mind, though, before this particular thread of existential anxiety gets too much…
One is that an intelligent enough AI could well realise that endless duplication isn’t actually the intelligent thing to do. It would presumably be striving to work as efficiently as possible, which automatically means there would be a limit to its self-replication - a point beyond which this system, this society of bots, becomes unsustainable. Then, again with efficiency in mind, it might wish to minimize its impact on the external environment - be that here on Earth, or on some other planet in some other galaxy. This, too, could put a stop to self-replication quite quickly. The standard sci-fi scenario typically has AI developing less upstanding morals and ideals, however… but if von Neumann machines never gained feelings of greed or power-lust, then the robot takeover needn’t be quite so apocalyptic!
Another reason not to feel so hopeless is what’s known as the power-law scaling relationship. Applied specifically to self-replicating machines by the scientists Robert Freitas and Ralph Merkle in their 2004 book “Kinematic Self-Replicating Machines”, it says that the more complex a system is, the longer it takes to create. We see a variation of this in the natural world, as large and complicated organisms (from humans to hippopotami) tend to have far longer gestation periods than simpler ones. Elephants, for example, are pregnant for almost two years. It follows, then, that an advanced AI - one that’s capable of self-replication - would also require a long time to reproduce… and probably long enough that we’d be able to stop the process if we thought it was going to end badly. Indeed, for some of the earliest trials of von Neumann’s universal constructor (the ones reproducing virtual cells on computer software) it was calculated that they would take multiple years to complete.
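The power-law idea can be sketched as a toy calculation. To be clear, the function name, base time and exponent below are illustrative assumptions for this video, not figures from Freitas and Merkle - the point is only that replication time climbs steeply with complexity:

```python
# Toy power-law model: time to build one copy grows as a power of the
# machine's complexity (here, its number of parts). Both `base_hours`
# and `exponent` are made-up illustrative values, not published data.

def replication_time_hours(parts: int, base_hours: float = 1.0,
                           exponent: float = 0.5) -> float:
    """Estimated hours for a machine with `parts` parts to copy itself."""
    return base_hours * parts ** exponent

# A simple 100-part machine copies itself quickly...
print(round(replication_time_hours(100), 1))         # 10.0
# ...while a ten-million-part machine needs months of continuous work.
print(round(replication_time_hours(10_000_000), 1))  # 3162.3
```

Under any exponent above zero, the conclusion is the same: the more capable the replicator, the more warning time its builders get.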
The predictions become a little less favourable when it comes to tiny (or nano) robots, however. On the one hand, nanotech is one of the most exciting and fastest moving tech spaces in the modern world. On the other, there’s much speculation that its emergence could be the doom of our species. The idea is that in the time of AI and robots, nanobots could become like bacteria; tiny systems with rapid reproduction rates that can very quickly number in the millions. The result is that we could one day be facing a cataclysm that the US scientist and engineer K. Eric Drexler, in his 1986 book “Engines of Creation”, called “gray goo”, where minuscule nanobots ruthlessly devour everything. People, animals and plants. Homes, towns and cities. Again, this is a worst-case scenario. Self-replicating nanobots, like their larger counterparts, could also demonstrate restraint, and for one reason or another limit their numbers. But the idea that such a decision could be beyond the control of humans is what causes many to worry.
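The bacteria comparison comes down to simple doubling arithmetic. As a minimal sketch (the function name is ours, and we assume one bot that perfectly doubles every cycle), here is how few replication cycles it takes to pass a million:

```python
# Unchecked doubling: starting from a single nanobot, each replication
# cycle doubles the population, so after n cycles there are 2**n bots.

def cycles_to_reach(target: int) -> int:
    """Doubling cycles needed for one bot to reach at least `target` bots."""
    count, cycles = 1, 0
    while count < target:
        count *= 2
        cycles += 1
    return cycles

print(cycles_to_reach(1_000_000))  # 20 -- just twenty doublings
```

If each cycle took an hour, one bot would pass a million copies in under a day - which is why the gray goo scenario hinges entirely on how fast a single cycle can be.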
Let’s finish with a change of perspective, though, and some cause not to be alarmed. Because could we yet find comfort by… looking to the stars? If we tie humanity’s pursuit of von Neumann machines into the timeline of the universe in general, it could be argued that, for whatever reason, self-replicating AIs actually are limited in what they can achieve. Because, why else haven’t we discovered any yet?
If we follow the general scientific consensus and predict that there is other life in the universe (beyond our own), and then we follow the statistical probability that humankind isn’t the most advanced of all life, then it figures that something out there should’ve already surpassed the technological singularity. Some far-off civilization should’ve already developed things like advanced AI, self-replicating machines and super-spreading nanotech. And, if all the worst possible outcomes we’ve covered today were true, then we, waiting patiently on our lowly, less advanced world, should’ve already been swarmed and overcome by a relentless, infinite wave of ruthless, reproductive, alien robots. But, as of 2020, that obviously hasn’t happened.
In theory, these things really could take over the universe, turning the cosmos into a shiny, clunky disaster zone. But, in practice, we’re yet to see it come to pass. The moral of the story: proceed with caution. If technology turns against us, then we could be in for a lot of trouble. But AI can be our friend and ally, too.
