In the film, car accidents become impossible and flight flawless.
However, the ultimate conclusion of the movie is that once computers become “self-aware,” they instantly recognize humanity as the root of all problems on Earth: pollution, animal extinctions, poverty, and aggression are destroying the planet. As a result, the computer network known as Skynet proceeds to eradicate humans through nuclear attacks and the creation of exterminating robots.
I can’t really disagree with Skynet’s appraisal of humanity, but I’d rather not have us wiped out by robots.
What is the Singularity?
The Singularity is essentially a threshold beyond which there is no return. As with a black hole’s event horizon, once you cross it, you can never escape.
The scientist John von Neumann observed that the “ever accelerating progress of technology…gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
He described this Singularity as the moment beyond which “technological progress will become incomprehensibly rapid and complicated.” The idea echoes Moore’s law, which predicts that the number of transistors on a chip (and with it, computing power) doubles roughly every two years.
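Moore’s-law growth is easy to sketch as a simple exponential. The baseline below (the 1971 Intel 4004 with roughly 2,300 transistors) and the two-year doubling period are common reference figures, not numbers taken from this article:

```python
# Illustrative Moore's-law projection: transistor counts doubling every two years.
# ASSUMPTION: the 1971 Intel 4004 (~2,300 transistors) as the starting point.
def transistor_estimate(year: int, base_year: int = 1971,
                        base_count: int = 2300,
                        doubling_period: float = 2.0) -> float:
    """Estimate transistors per chip under a simple exponential model."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistor_estimate(year):,.0f}")
```

Run over five decades, the model lands in the tens of billions of transistors, which is the right order of magnitude for modern chips and illustrates why “runaway” growth curves capture the imagination.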
The Terminator perfectly captures one bleak version of a future Technological Singularity. As depicted in the films, computers enter a series of runaway improvement cycles: each computer builds a better version of itself, producing ever more capable machines that, in theory, eventually become self-aware.
At an actual Singularity Summit in 2012 (yes, there is such a thing), the scientist Stuart Armstrong predicted that a Singularity may occur by 2040. Theoretical changes include nano-medicine that could physically remove cancer cells from inside your body, or stem cell injections that could conceivably enable the body to repair itself and even achieve immortality.
The morality, ethics, and understanding of such a future are incomprehensible.
Many scientists believe that even speculating about this future is fruitless. In their view, if a superintelligent, self-aware computer existed, it would likely create technological and biological processes that we can’t even conceive of with our “limited” intelligence.
Stephen Hawking and Elon Musk have expressed significant concern over the rise of artificial intelligence. Musk’s worry, though, is less about robots killing us and more about robots killing our jobs.
He notes that 12% of our economy is transportation related (trucks, buses, cars, flight, etc.). Once those transportation systems are automated, he believes, all of those jobs are lost.
The resulting job losses would leave millions unemployed and create a growing need for a government-provided universal basic income. Under such a system, everyone in the United States would receive $1,000 a month: enough to get by, but not so much that people simply sit back and relax. Keeping the basic income low is intended to preserve the incentive for part-time work and entrepreneurship.
The problem with this basic income system is the price tag, which today would be roughly $4 trillion a year.
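That $4 trillion figure is easy to sanity-check. A back-of-the-envelope sketch, assuming a U.S. population of roughly 330 million (the population figure is my assumption, not the article’s):

```python
# Back-of-the-envelope cost of a $1,000/month universal basic income.
# ASSUMPTION: U.S. population of roughly 330 million (not from the article).
population = 330_000_000
monthly_payment = 1_000
annual_cost = population * monthly_payment * 12
print(f"${annual_cost / 1e12:.2f} trillion per year")  # → $3.96 trillion per year
```

Close to $4 trillion annually, consistent with the figure cited above.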
This job loss due to computer automation is not science fiction. Amazon recently showcased an automated store: shoppers download an app that lets them enter, cameras and other devices track what they pick up, and their accounts are simply charged once they exit.
Think: no more cashier jobs.
In 2014, the US Bureau of Labor Statistics noted in its annual report that 4.6 million Americans worked in retail sales and 3.4 million worked as cashiers, about 6% of total U.S. employment. Not all of those jobs will vanish, but even a fraction of that number would dramatically impact the US workforce.
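Those BLS numbers can be checked the same way. A quick sketch, assuming total U.S. employment of about 146 million in 2014 (the denominator is my assumption; the article gives only the percentage):

```python
# Rough check of the retail/cashier share of U.S. employment.
# ASSUMPTION: ~146 million total U.S. jobs in 2014 (not stated in the article).
retail_sales = 4_600_000
cashiers = 3_400_000
total_employment = 146_000_000
share = (retail_sales + cashiers) / total_employment
print(f"{share:.1%} of U.S. employment")  # → 5.5% of U.S. employment
```

That works out to 8 million jobs, in line with the roughly 6% figure cited above.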
How real is the threat of a Technological Singularity?
Some Silicon Valley entrepreneurs and scientists are somewhat skeptical of a future technological doomsday. The head of AI at Facebook, Dr. Yann LeCun, does not believe we will be killed by robots. He stated the “desire to dominate socially is not correlated with intelligence.”
He adds that the desire to dominate a society is correlated with testosterone, which computers obviously lack.
Other arguments against a robot apocalypse come from artificial intelligence scientists themselves. They argue that Moore’s law is effectively dead and that computer companies no longer have the same incentive to keep doubling computing power.
In addition, an argument can be made that science has boundaries: the more complex a field becomes and the deeper we probe its limits, the harder we push against constraints we cannot pass. On a more basic level, AI scientists point out that sentient life is not just a matter of computing power, and that most of the fear-mongers are not AI researchers but science fiction writers and philosophers.
A poll of 50 Nobel Laureates revealed their concerns about what will end humanity:
1. 34% predicted population increase and environmental change will end life.
2. 23% named nuclear war as the biggest threat, citing North Korea and other “war-mongers.”
3. 8% consider disease and antibiotic resistance as an existential threat.
4. Tied for 3rd at 8% is the loss of a “humanist perspective” and the loss of reality through the internet and “its seductions.”
5. 6% warned that ignorant leaders (such as Trump) could lead society to disaster.
6. Another 6% named terrorism and use of weapons of mass destruction.
7. 6% also highlighted the loss of truth in social media silos, where science is thrown into doubt and dismissed as “fake news.”
8. And at 4%, artificial intelligence (AI) could surpass mankind and somehow turn against it.
Will Robots Kill Us?
From what I can tell, we really have no idea whether the Singularity will occur. For now, though, the consensus seems to be that AI will advance more slowly than feared and that we tend to overestimate the near future.
Another way to sum this argument up is, we still don’t have flying cars…