People worry that smart gadgets and similar technology will develop into super-intelligent, out-of-control machines that subjugate the world. The answer to whether you should be concerned about this reveals a fundamental—yet little known—fact about your mind.
Checking the weather is an integral part of most morning routines. Yet you no longer need to look out the window. Instead, you simply speak into your phone or other similar device.
“Hey, Siri, do I need my umbrella?”
“Alexa, will it be hot this afternoon?”
“OK Google, how much snow will we get?”
“Hey, Cortana, should I wear a jacket?”
Take the last question. Your smart device responds, “You may consider wearing a light jacket, as it is 43 degrees Fahrenheit with a possibility of light rain showers.”
You have just had a conversation with a budding artificial intelligence (AI). If you have not already, get used to asking a device what to do, as it may someday learn to push your buttons, tell you what to do, or even develop feelings against you.
At least that is what many of the leading minds in science and technology want us to think. Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak, Neil deGrasse Tyson, and others fear AI may take over in coming years.
Ever since the term “artificial intelligence” was coined by American computer scientist John McCarthy in 1955, the idea that computers could learn to listen, speak, think and feel emotions has permeated pop culture. Just think of the movies 2001: A Space Odyssey, The Terminator, and The Matrix. Recent entries include Her, Ex Machina, and Avengers: Age of Ultron.
Although they do not live up to their fictional counterparts, current AI advancements are impressive.
It drives us around: Tesla Motors’ autopilot technology is close to providing full autonomy, which will allow a vehicle to completely take over for the driver.
It serves as our financial advisor: online chatbots provide support for credit card and banking customers.
It reports the news: media outlets such as the Associated Press, Fox and Yahoo! use computer programs to write simple financial reports, sports summaries and news recaps.
Companies are also developing AI applications that provide in-depth responses to “non-factoid” questions, such as relationship advice. Some programmers are even including synthetic emotions to better connect with users.
What worries people most is that some computers think on a completely different level than humans. Only recently has an AI been able to beat the best players at Go, an ancient Chinese board game. Think of it as chess on steroids: a chess player faces roughly 35 legal moves per turn on average, while a Go player faces around 250.
The AI consistently uses moves that, at first, seem to be errors to top players. The decisions by the computer challenge knowledge passed down over centuries about how to play the game—yet turn out to be winning tactics.
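The scale of that difference is easy to underestimate. A rough back-of-the-envelope sketch, using the commonly cited average branching factors and typical game lengths (all approximations, not exact values), shows why Go resisted brute-force computing for so long:

```python
# Rough game-tree size estimate: sequences explored grow as b**d,
# where b is the average branching factor and d the game length in moves.
# The figures below are commonly cited approximations, not exact values.

def tree_size(branching: int, depth: int) -> int:
    """Number of move sequences in a game tree with uniform branching."""
    return branching ** depth

chess = tree_size(35, 80)    # ~35 legal moves per turn, ~80-move game
go = tree_size(250, 150)     # ~250 legal moves per turn, ~150-move game

# Report each as a power of ten, since the raw numbers are unreadably large.
print(f"chess: about 10^{len(str(chess)) - 1} sequences")
print(f"go:    about 10^{len(str(go)) - 1} sequences")
```

The chess tree already dwarfs the number of atoms in the observable universe; the Go tree dwarfs the chess tree by hundreds of orders of magnitude, which is why a Go-playing AI cannot simply calculate every line and must instead "judge" positions.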
Machines able to outthink human beings appear to be a double-edged sword. While they can help us see things in a new light—and make giant leaps in industry, science and technology—what happens if they begin to think for themselves?
In a short documentary by Journeyman Pictures titled “The Turing Test: Microsoft’s Artificial Intelligence Meltdown,” a robot modeled on sci-fi writer Philip K. Dick provided a humorous, but telling, answer. It was asked, “Do you think robots will take over the world?”
After pausing as if to think, the humanoid responded: “You all got the big questions today. But you’re my friend, and I will remember my friends, and I will be good to you. So don’t worry. Even if I evolve into Terminator, I will still be nice to you. I will keep you warm and safe in my people zoo, where I can watch you for old time’s sake.”
The exchange gave the developers a good laugh, to which the robo-author responded with a smile. Yet it summed up the fears many have about the future of AI.
Experts on artificial intelligence believe the next generation of AI will be adaptive, self-learning, intuitive and able to change its own programming rules. They speak of a time when machines will exceed the intelligence of human beings—a moment termed the “singularity”—which some believe could take place by 2035 or soon thereafter.
This could mean a brighter future for mankind. In fact, super AI may be a necessity because of the explosion of man’s knowledge. But these advancements are a double-edged sword.
According to The Observer: “Human-based processing will be simply inefficient when faced with the massive amounts of data we’re acquiring each day. In the past, machines were used in some industries to complete small tasks within a workflow. Now the script has flipped: Machines are doing almost everything, and humans are filling in the gaps. Interestingly, tasks performed by autonomous machines require the types of decision-making ability and contextual knowledge that just a decade ago only human beings possessed.”
“In the near future, AI-controlled autonomous unconscious systems may replace our current personal human engagements and contributions at work. The possibility of a ‘jobless future’…might not be so far-fetched.”
While critics see robot minds taking jobs from humans as a negative, others feel it would allow workers to focus on greater pursuits.
The author of 2001: A Space Odyssey, Arthur C. Clarke, wrote this in the 1960s: “In the day-after-tomorrow society there will be no place for anyone as ignorant as the average mid-twentieth-century college graduate. If it seems an impossible goal to bring the whole population of the planet up to superuniversity levels, remember that a few centuries ago it would have seemed equally unthinkable that everybody would be able to read. Today we have to set our sights much higher, and it is not unrealistic to do so.”
A world where everyone could reach “superuniversity levels” seems appealing.
The flipside? A world where people have too much time on their hands would mean more time to delve into the darker facets of human nature.
Everywhere we turn in regard to AI, we run into similar gray areas and moral conundrums.
Something as simple as self-driving cars creates difficult ethical problems. By some estimates, if everyone had such automobiles, 300,000 lives per decade would be saved in America alone. It could also mean the end of daily rush-hour traffic. And think of everything you could accomplish during your morning commute if you did not have to focus on the road!
Yet who is to blame for decisions a machine makes during a crash?
For example, if a driverless car suddenly approaches a crowd of people walking across its path, should the car be programmed to minimize the loss of life, even at the risk of the car’s occupants? Or should it protect the occupants at all costs, even if that means hurting others?
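The dilemma can be made concrete as a toy decision rule. In this sketch, the maneuvers, harm estimates and weights are entirely hypothetical (no real autopilot works this simply); the point is that someone must choose the weights, and that choice is an ethical stance baked in before any crash occurs:

```python
# Toy illustration of the crash-ethics trade-off, NOT a real autopilot policy.
# Maneuver names, harm estimates, and weights are all hypothetical.

def choose_maneuver(maneuvers, occupant_weight=1.0, pedestrian_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    maneuvers: list of (name, expected_occupant_harm, expected_pedestrian_harm).
    The weights encode the ethical stance chosen by the programmer:
    raising occupant_weight makes the car favor its passengers;
    raising pedestrian_weight makes it favor those outside the car.
    """
    return min(
        maneuvers,
        key=lambda m: occupant_weight * m[1] + pedestrian_weight * m[2],
    )[0]

options = [
    ("brake_straight", 0.7, 0.4),  # risks occupants and some pedestrians
    ("swerve_left",    0.9, 0.0),  # sacrifices occupants, spares the crowd
    ("hold_course",    0.1, 0.9),  # protects occupants at others' expense
]

print(choose_maneuver(options))                        # equal weights -> swerve_left
print(choose_maneuver(options, occupant_weight=10.0))  # occupant-first -> hold_course
```

With equal weights the car minimizes total harm and sacrifices its occupants; weight the occupants heavily and the same code protects them at the crowd’s expense. Identical logic, opposite outcomes, decided in advance by a programmer.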
Fortune chimed in on the debate, quoting Chris Gerdes, chief technology officer for the U.S. Department of Transportation: “Ninety-four percent of so-called last actions during an automotive collision are the result of human judgment (read: errors), Gerdes said. ‘Self-driving cars have this promise of removing the human from that equation,’ he said. ‘That’s not trivial.’
“The catch: With self-driving cars you’ve shifted the error from human drivers to human programmer, Gerdes said. Machine learning techniques can improve the result, but they aren’t perfect.
“And then there are ethical concerns. If you program a collision, that means it’s premeditated, [Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University,] said. Is that even legal? ‘This is all untested law,’ he said.”
Others speculate on the darker side to a post-singularity future. What if AI begins to see human beings as the problem? What if they begin to act on self-interest? And what if those interests conflict with human interests—and they must remove us to complete a task?
Human Rights Watch issued a warning in February titled, “The Dangers of Killer Robots and the Need for a Preemptive Ban.” The report “detailed the various dangers of creating weapons that could think for themselves” (International Business Times).
The organization also warned that “removing the human element of warfare raised serious moral issues,” such as “lack of empathy,” which would “exacerbate unlawful and unnecessary violence” (ibid.).
“Runaway AI” is the term for the future moment when machines begin to develop themselves beyond the control of human beings. But how could pieces, parts and electronics get to this point?
Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford, fleshed out a hypothetical example in his book Superintelligence. He asks the reader to picture a machine programmed to create as many paper clips as possible.
Technology Review summarized: “Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.”
“No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it ‘computronium’) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.”
Many do not see the threat, suggesting that we could pull the plug on these digital creatures should we begin to lose control of them.
Yet what if runaway AI causes machines to develop emotional responses and act in self-defense? Imagine if an entity more intelligent than us tapped into the same emotions that drive humans to commit terrible crimes—lust, envy, hatred, jealousy and selfishness.
Or what if they learned to harness the full fund of knowledge and connectivity of the web, and began to reproduce?
A Slate article summarized such concerns as the fact that we fear AI “will act as humans act (which is to say violently, selfishly, emotionally, and at times irrationally)—only it will have more capacity.”
Actions based on emotion and irrationality suggest sentience, that is, the capacity to feel, perceive or experience subjectively. This allows for a range of human behavior, often labeled “human nature,” including acting violently and selfishly.
Therefore, to answer whether we should fear AI, we must answer another question: Is it possible for computers to gain human nature?
Recognize that human nature is unique. Note the difference between human nature and the nature of animals. Why does man undisputedly have superior intellect, creative power, and complex emotions? Conversely, why do animals possess instinct, an innate ability to know what to do without any instruction or time to learn?
Science textbooks attempt to address this, yet there is one textbook that provides the complete picture. In fact, this book—the Bible—humanity’s instruction on how to live, contained many facts about nature long before mainstream science proved them.
For example, it states this about planet Earth: “It is turned as clay to the seal; and they stand as a garment” (Job 38:14).
The phrase “it is turned as clay to the seal” refers to the rotating cylinder used by potters in ancient times. This analogy expresses the rotating motion of the Earth, which causes the sun to appear to rise and set.
The book of Job was written well before the Greeks theorized that Earth was the center of the universe, and that the sun revolved around it. The Bible also speaks to the hydrological cycle (Jer. 10:13), underground aquifers supplying water to oceans (Job 38:16), and sea currents (Psa. 8:8).
So what does the Bible say about the uniqueness of the human mind? The apostle Paul wrote in I Corinthians 2: “For what man knows the things of a man, save the spirit of man which is in him?” (vs. 11).
Mankind “knows the things of a man,” that is, possesses intellectual capacity, because he was given a component called the “spirit of man.” This term is also found in Job 32: “But there is a spirit in man…” (vs. 8).
The original Greek word translated “spirit” in I Corinthians 2:11 means “a current of air…vital principle, mental disposition” (Strong’s Exhaustive Concordance of the Bible).
Merriam-Webster Dictionary defines “disposition” as “prevailing tendency, mood, or inclination,” “temperamental makeup,” or “the tendency of something to act in a certain manner under given circumstances.” It has also been defined as a “person’s inherent qualities of mind and character.”
The spirit that God put into us is what allows us to think like human beings. It allows us to be creative and puts us on a completely different plane from animals.
Beasts do have a sort of spirit, however. Notice: “Who knows the spirit of man that goes upward, and the spirit of the beast that goes downward to the earth?” (Ecc. 3:21).
Animals are lumped together in one group. Though each creature has distinct characteristics, the entire animal kingdom differs from mankind in that it possesses the “spirit of the beast”—or what we call instinct.
These two spirits are very different. Note that, upon death, human spirits are preserved (“go upward”), while animal spirits simply disappear upon their demise (“goes downward to the earth”). (To learn more about exactly how this works, read the booklet Do the Saved Go to Heaven?)
For each type of spirit, there is a disposition or nature that goes with it: human nature, animal nature, and even God’s nature.
But what about robot nature?
The greatest fear about AI is that it will take on its own mental disposition. In Bible terminology, this means we fear it will develop a human spirit—that it will possess autonomous thinking power and behavior driven by emotions.
Scripture answers this concern. Genesis 2:7 shows that God “breathed into his nostrils the breath of life; and man became a living soul.”
At the other end of the spectrum, Jesus Christ said, “Fear not them which kill the body, but are not able to kill the soul: but rather fear Him which is able to destroy both soul and body…” (Matt. 10:28).
Intelligent and resourceful human beings cannot even destroy spirit, let alone create it! Only God can generate and destroy spirit.
All this signifies that AI, which is created by human minds, could never be given the spirit of man, become subject to human nature, and run amok on the Earth. Any negative actions by such machines are the result of those who programmed them.
We could also reason that since most people, including scientists, do not know about the spirit in man—there is no way they could produce it in a computer program. Given unlimited funds and time, the very best mankind can hope for with computer brains is to make them similar to animals. Their programming could be likened to instinct—as a human mind behind it has to guide everything it does.
Conferring “animal instinct” on a computer would be a huge undertaking and would represent the best men can do. Imagine the impossibility of trying to teach an animal to think as a man (in effect, giving the spirit of man to a beast). Teaching calculus to a cow, training an elephant to write poetry, or hiring an orangutan to design and engineer a space shuttle is impossible. It simply does not work. Without the spirit in man, as Paul stated, a living being cannot “know the things of a man.”
Yes, AI technology can be programmed to compute equations, poetry and designs, in some cases at a far greater capacity than man. It can search the bounds of logic without facing the same human limitations and setbacks—fatigue, emotion, irrationality. Yet it must be combined with human intellect—a product of the spirit in man—to come anywhere near to what those who fear it say it will do.
Man’s mind is utterly unique in all of Creation. The Bible explains why. To learn more, read Editor-in-Chief David C. Pack’s booklet What Science Will Never Discover About Your Mind.