As AI embeds itself into every aspect of our lives, we must ask what we should fear more: man or machine?
How concerned should we be about AI?
Here’s what Tesla CEO Elon Musk said about artificial intelligence during a South by Southwest tech conference in Austin, Texas: “I think that’s the single biggest existential crisis that we face and the most pressing one.”
Celebrity astrophysicist Neil deGrasse Tyson later agreed with this sentiment. Mr. Musk also said exploring computer-based intelligence is “summoning the demon.” In addition, Pope Francis asked adherents to pray that AI and robots will always serve mankind.
Some of these concerns are founded on what happens if there is a runaway AI similar to those depicted in the films 2001: A Space Odyssey, The Matrix and Avengers: Age of Ultron. But many of the worries are based on what could occur if this technology remains unregulated.
With nearly limitless possibilities, there are nearly limitless ways things can go wrong.
Despite this, AI has already wormed its way into nearly every aspect of our lives. It powers voice assistants such as Siri and Alexa. It allows Google Maps to identify gridlock traffic and suggest alternate routes. It helps determine fraudulent purchases on credit cards.
Even though Mr. Musk has concerns about where AI could lead, it is still used in every Tesla car. The company’s Autopilot technology can take over for the driver, with the software always learning and improving.
With each year bringing massive leaps in this field, the possibilities of AI are at once enticing and worrisome. This was reflected in the opinions of those interviewed by Pew for the 2022 study “AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns.”
A man in his 30s had cautious optimism: “AI can help slingshot us into the future. It gives us the ability to focus on more complex issues and use the computing power of AI to solve world issues faster. AI should be used to help improve society as a whole if used correctly. This only works if we use it for the greater good and not for greed and power. AI is a tool, but it all depends on how this tool will be used.”
A woman in her 60s had even greater worries about over-reliance on machines: “It’s just not normal. It’s removing the human race from doing the things that we should be doing. It’s scary because I’ve read from scientists that in the near future, robots can end up making decisions that we have no control over. I don’t like it at all.”
Along these lines, we increasingly see machines outthinking humans.
Back in 2017, AI using Google’s DeepMind technology was able to beat the best players in the ancient Chinese game of Go. Think of it as chess on steroids. Chess has roughly 20 possible moves per turn. Go has about 200.
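That difference compounds with every turn, which is why Go resisted computers for so long. A rough back-of-envelope sketch in Python illustrates the scale gap (the branching factors are the commonly cited approximations above, not exact figures):

```python
# Approximate number of possible game positions after `depth` moves,
# assuming an average of `branching_factor` legal moves per turn.
def tree_size(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

chess = tree_size(20, 10)   # ~20 choices per turn, 10 moves deep
go = tree_size(200, 10)     # ~200 choices per turn, 10 moves deep

print(f"chess: {chess:.2e}")
print(f"go:    {go:.2e}")
print(f"ratio: {go // chess:,}")  # Go's tree is ~10 billion times larger
```

Even ten moves deep, the Go tree dwarfs the chess tree by a factor of ten billion, which is why DeepMind’s system had to learn judgment rather than simply search every possibility.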
The exercise showed how AI will approach problems in ways humans cannot anticipate. (And five years in AI tech is more like 100.) To win at Go, the AI consistently used moves that, at first, seemed to be errors to top players. The decisions by the computer challenged knowledge passed down over centuries about how to play the game—yet turned out to be winning tactics.
When it comes to machines able to outthink human beings, there are two sides to the coin. While they can help us see things in a new light—and make giant leaps in industry, science and technology—what happens if they begin to think for themselves?
Rise of the Humanoids
Experts on artificial intelligence believe the next generation of AI will be adaptive, self-learning, intuitive and able to change its own programming rules. They speak of a time when machines will exceed the intelligence of human beings—a moment defined as “singularity”—which experts believe could take place by 2035 or soon thereafter.
This could mean a brighter future for mankind. In fact, super AI may be a necessity because of the explosion of man’s knowledge.
According to The Observer: “Human-based processing will be simply inefficient when faced with the massive amounts of data we’re acquiring each day. In the past, machines were used in some industries to complete small tasks within a workflow. Now the script has flipped: Machines are doing almost everything, and humans are filling in the gaps. Interestingly, tasks performed by autonomous machines require the types of decision-making ability and contextual knowledge that just a decade ago only human beings possessed.”
“In the near future, AI-controlled autonomous unconscious systems may replace our current personal human engagements and contributions at work. The possibility of a ‘jobless future’…might not be so far-fetched.”
While critics see robot minds taking jobs from humans as a negative, others feel it would allow workers to focus on greater pursuits.
The author of 2001: A Space Odyssey, Arthur C. Clarke, wrote this in the 1960s: “In the day-after-tomorrow society there will be no place for anyone as ignorant as the average mid-twentieth-century college graduate. If it seems an impossible goal to bring the whole population of the planet up to superuniversity levels, remember that a few centuries ago it would have seemed equally unthinkable that everybody would be able to read. Today we have to set our sights much higher, and it is not unrealistic to do so.”
A world where everyone could reach “superuniversity levels” seems appealing.
The flipside? A world where people have too much time on their hands would mean more time to delve into the darker facets of human nature.
Everywhere we turn in regard to AI, we run into similar gray areas and moral conundrums.
Uncharted Territory
Even something as seemingly straightforward as self-driving cars creates difficult ethical problems. If everyone had such automobiles, it could save an estimated 300,000 lives per decade in America. It would mean the end of daily rush-hour traffic. Also, think of everything you could accomplish during your morning commute if you did not have to focus on the road!
Yet who is to blame for decisions a machine makes during a crash?
For example, if a driverless car suddenly approaches a crowd of people walking across its path, should the car be programmed to minimize the loss of life, even at the risk of the car’s occupants? Or should it protect the occupants at all costs, even if that means hurting others?
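The two programming philosophies in that dilemma can be sketched in a few lines. This is a hypothetical toy model only; the function name, policies and thresholds are illustrative assumptions, not any manufacturer’s actual logic:

```python
def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Toy model of the crash-programming dilemma.

    'minimize_total' weighs every life equally; 'protect_occupants'
    never trades away the safety of the people inside the car.
    Purely illustrative -- real systems are nothing this simple.
    """
    if policy == "minimize_total":
        # Swerve (risking occupants) only if it spares more lives overall.
        return "swerve" if pedestrians > occupants else "brake"
    elif policy == "protect_occupants":
        # Never endanger occupants, regardless of the count outside.
        return "brake"
    raise ValueError(f"unknown policy: {policy}")

print(choose_action(occupants=1, pedestrians=5, policy="minimize_total"))    # swerve
print(choose_action(occupants=1, pedestrians=5, policy="protect_occupants")) # brake
```

The unsettling point is that either branch must be written down in advance by a human being, which is exactly what makes the ethics so thorny.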
Fortune chimed in on the debate, quoting Chris Gerdes, chief innovation officer for the U.S. Department of Transportation: “Ninety-four percent of so-called last actions during an automotive collision are the result of human judgment (read: errors), Gerdes said. ‘Self-driving cars have this promise of removing the human from that equation,’ he said. ‘That’s not trivial.’
“The catch: With self-driving cars you’ve shifted the error from human drivers to human programmer, Gerdes said. Machine learning techniques can improve the result, but they aren’t perfect.
“And then there are ethical concerns. If you program a collision, that means it’s premeditated, [Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University,] said. Is that even legal? ‘This is all untested law,’ he said.”
Others speculate on the darker side to a post-singularity future. What if AI begins to see human beings as the problem? What if it begins to act on self-interest? And what if those interests conflict with human interests—and it must remove us to complete a task?
In the article “The Dangers of Killer Robots and the Need for a Preemptive Ban,” Human Rights Watch issued a warning. The report “detailed the various dangers of creating weapons that could think for themselves” (International Business Times).
The organization also warned that “removing the human element of warfare raised serious moral issues,” such as “lack of empathy,” which would “exacerbate unlawful and unnecessary violence.”
“Runaway AI” is the term used to describe the future moment when machines begin to develop themselves beyond the control of human beings. But how could pieces, parts and electronics get to this point?
Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford, fleshed out a hypothetical example in his book Superintelligence. He asks the reader to picture a machine programmed to create as many paper clips as possible.
Technology Review summarized: “Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.”
“No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it ‘computronium’) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.”
Many do not see the threat, suggesting that we could pull the plug on these digital creatures should we begin to lose control of them.
Yet what if runaway AI causes machines to develop emotional responses and act in self-defense? What if an entity more intelligent than us tapped into the same emotions that drive humans to commit terrible crimes—lust, envy, hatred, jealousy and selfishness?
Or what if they learned to harness the full fund of knowledge and connectivity of the web, and began to reproduce?
A Slate article summarized such concerns as the fact that we fear AI “will act as humans act (which is to say violently, selfishly, emotionally, and at times irrationally)—only it will have more capacity.”
Actions based on emotion and irrationality suggest sentience, that is, the capacity to feel, perceive or experience subjectively. This allows for a range of human behavior, often labeled “human nature,” including acting violently and selfishly.
Therefore, to answer whether we should fear AI, we must answer another question: Is it possible for computers to gain human nature?
Essential Element
Realize that human nature is utterly unique. The difference between human nature and the nature of animals makes this clear. Why does man indisputably have superior intellect, creative power, and complex emotions? Conversely, why do animals possess instinct, an innate ability to know what to do without any instruction or time to learn?
Science textbooks attempt to address this, yet there is one textbook that provides the complete picture. In fact, this book—the Bible—humanity’s instruction on how to live, contained many facts about nature long before mainstream science proved them.
For example, it states this about planet Earth: “It is turned as clay to the seal; and they stand as a garment” (Job 38:14).
The phrase “it is turned as clay to the seal” refers to the rotating cylinder used by potters in ancient times. This analogy expresses the rotating motion of the Earth, which causes the sun to appear to rise and set.
The book of Job was written well before the Greeks theorized that Earth was the center of the universe, and that the sun revolved around it. The Bible also speaks to the hydrological cycle (Jer. 10:13), underground aquifers supplying water to oceans (Job 38:16), and sea currents (Psa. 8:8).
So what does the Bible say about the uniqueness of the human mind? The apostle Paul wrote in I Corinthians 2: “For what man knows the things of a man, save the spirit of man which is in him?” (vs. 11).
Mankind “knows the things of a man,” that is, possesses intellectual capacity, because he was given a component called the “spirit of man.” This term is also found in Job 32: “But there is a spirit in man…” (vs. 8).
The original Greek word translated “spirit” in I Corinthians 2:11 means “a current of air…vital principle, mental disposition” (Strong’s Exhaustive Concordance of the Bible).
Merriam-Webster Dictionary defines “disposition” as “prevailing tendency, mood, or inclination,” “temperamental makeup,” or “the tendency of something to act in a certain manner under given circumstances.” It has also been defined as a “person’s inherent qualities of mind and character.”
The spirit that God put into us is what allows us to think like human beings. It allows us to be creative and puts us on a completely different plane from animals.
Beasts do have a sort of spirit, however. Notice: “Who knows the spirit of man that goes upward, and the spirit of the beast that goes downward to the earth?” (Ecc. 3:21).
Animals are lumped together in one group. Though each creature has distinct characteristics, the entire animal kingdom differs from mankind in that it possesses the “spirit of the beast”—or what we call instinct.
These two spirits are very different. Note that, upon death, human spirits are preserved (“go upward”), while animal spirits simply disappear upon their demise (“goes downward to the earth”). (To learn more about exactly how this works, read the booklet Do the Saved Go to Heaven?)
For each type of spirit, there is a disposition or nature that goes with it: human nature, animal nature, and even God’s nature.
But what about robot nature?
AI Limitations
The greatest fear about AI is that it will take on its own mental disposition. In Bible terminology, this means we fear it will develop a human spirit—that it will possess autonomous thinking power and behavior driven by emotions.
Scripture answers this concern. Genesis 2:7 shows that God “breathed into his nostrils the breath of life; and man became a living soul.”
At the other end of the spectrum, Jesus Christ said, “Fear not them which kill the body, but are not able to kill the soul: but rather fear Him which is able to destroy both soul and body…” (Matt. 10:28).
Intelligent and resourceful human beings cannot even destroy spirit, let alone create it! Only God can generate and destroy spirit.
All this signifies that AI, which is created by human minds, could never fully be given the spirit of man susceptible to human nature and run amok on the Earth. Any negative actions by such machines are the result of those who programmed them.
We could also reason that since most people, including scientists, do not know about the spirit in man, there is no way they could produce it in a computer program. Given unlimited funds and time, the very best mankind can hope for with computer brains is to make them similar to animals. Their programming could be likened to instinct—as a human mind behind it has to guide everything it does.
Conferring “animal instinct” on a computer would be a huge undertaking and represent the best men can do. Imagine the impossibility of trying to teach an animal to think as a man (in effect, giving the spirit of man to a beast). To teach calculus to a cow, train an elephant to write poetry, or hire an orangutan to design and engineer a space shuttle is simply impossible. It does not work. Without the spirit in man, as Paul stated, a living being cannot “know the things of a man.”
Yes, AI technology can be programmed to compute equations, poetry and designs, in some cases at a far greater capacity than man. It can search the bounds of logic without facing the same human limitations and setbacks—fatigue, emotion, irrationality. Yet it must be combined with human intellect—a product of the spirit in man—to come anywhere near to what those who fear it say it will do.
So should we be concerned about AI? Yes, because people with human nature are creating it and using it. There will undoubtedly be unforeseen negative consequences. And, in the wrong hands, artificial intelligence could cause untold harm.
AI perfectly demonstrates the duality of mankind: It is capable of incredible genius and terrible ills. Why? Read Did God Create Human Nature? and What Science Will Never Discover About Your Mind for the Bible answers.