Few concepts are as poorly understood as artificial intelligence. Opinion surveys show that even top business leaders lack a detailed sense of AI and that many ordinary people confuse it with super-powered robots or hyper-intelligent devices. Hollywood helps little in this regard by fusing robots and advanced software into self-replicating automatons such as the Terminator’s Skynet or the murderous HAL 9000 of Arthur C. Clarke’s “2001: A Space Odyssey,” which goes rogue after learning that its human crew plans to deactivate it. The lack of clarity around the term enables technology pessimists to warn that AI will conquer humans, suppress individual freedom, and destroy personal privacy through a digital “1984.”
Part of the problem is the lack of a uniformly agreed-upon definition. Alan Turing generally is credited with originating the concept when he speculated in 1950 about “thinking machines” that could reason at the level of a human being. His well-known “Turing Test” holds that a computer can be considered to “think” in an autonomous manner if its responses in conversation are indistinguishable from those of a human.
Turing was followed a few years later by John McCarthy, who coined the term “artificial intelligence” to denote machines that could think autonomously. He described the threshold as “getting a computer to do things which, when done by people, are said to involve intelligence.”
Since the 1950s, scientists have argued over what constitutes “thinking” and “intelligence,” and what is “fully autonomous” when it comes to hardware and software. Advanced computers such as IBM’s Deep Blue already have beaten human champions at chess, Watson has done the same at Jeopardy!, and such systems are capable of instantly processing enormous amounts of information.
Today, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.” According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. As John Allen and I argued in an April 2018 paper, such systems have three qualities that constitute the essence of artificial intelligence: intentionality, intelligence, and adaptability.
In the remainder of this paper, I discuss these qualities and why it is important to make sure each accords with basic human values. Each of the AI features has the potential to move civilization forward in progressive ways. But without adequate safeguards or the incorporation of ethical considerations, the AI utopia can quickly turn into dystopia.
Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. As such, they are designed by humans with intentionality and reach conclusions based on their instant analysis.
An example from the transportation industry shows how this happens. Autonomous vehicles are equipped with LIDAR (light detection and ranging) units and remote sensors that gather information from the vehicle’s surroundings. LIDAR uses pulses of laser light, rather than the radio waves used by radar, to detect objects in front of and around the vehicle and support instantaneous decisions regarding the presence of objects, their distances, and whether the car is about to hit something. On-board computers combine this information with other sensor data to determine whether there are any dangerous conditions, whether the vehicle needs to shift lanes, or whether it should slow or stop completely. All of that material has to be analyzed instantly to avoid crashes and keep the vehicle in the proper lane.
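The decision step described above can be sketched in code. The following is a deliberately simplified, hypothetical illustration, not how any production autonomous-driving stack works: the class name, field names, and time-to-obstacle thresholds are all invented for the example. It shows the general pattern of fusing distance and speed readings into a single driving action.

```python
# Minimal sketch of a rule-based driving decision (hypothetical thresholds).
from dataclasses import dataclass

@dataclass
class SensorFrame:
    front_distance_m: float   # nearest obstacle ahead, e.g. from LIDAR
    left_lane_clear: bool     # from side-facing sensors
    speed_mps: float          # current vehicle speed in meters/second

def decide(frame: SensorFrame) -> str:
    """Return one of: 'continue', 'change_lane', 'slow', 'stop'."""
    # Seconds until the vehicle would reach the obstacle at its current speed.
    if frame.speed_mps > 0:
        time_to_obstacle = frame.front_distance_m / frame.speed_mps
    else:
        time_to_obstacle = float("inf")

    if time_to_obstacle < 1.0:    # imminent collision: brake hard
        return "stop"
    if time_to_obstacle < 3.0:    # obstacle close: evade if possible, else slow
        return "change_lane" if frame.left_lane_clear else "slow"
    return "continue"

# Example frames: ample headroom, a nearby obstacle, and an imminent one.
print(decide(SensorFrame(front_distance_m=100.0, left_lane_clear=True, speed_mps=25.0)))  # continue
print(decide(SensorFrame(front_distance_m=40.0, left_lane_clear=False, speed_mps=25.0)))  # slow
print(decide(SensorFrame(front_distance_m=10.0, left_lane_clear=True, speed_mps=25.0)))   # stop
```

A real system replaces these hand-written thresholds with learned models and runs the loop many times per second, but the shape is the same: combine inputs, evaluate the situation, and act.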