
Humanity’s Most Uncomfortable Question

Is consciousness an epiphenomenon of an archaic brain? Artificial Intelligence may soon have an answer.

[Illustration: Irving Ponders the Nature of Consciousness. Matte Stephens]

The term artificial intelligence was coined at a conference at Dartmouth College in 1956, and the concept of a “thinking machine” was expected to become a reality within 20 years. In fact, just about every estimate of AI’s progress has been wrong, and only recently has reality begun to match the hype. In 1997, chess champion Garry Kasparov lost a match to IBM’s Deep Blue supercomputer, and in 2011, IBM’s Watson beat past winners of the TV quiz show Jeopardy at their own game. Both examples are a long way from an actual thinking machine, but the pace of progress is accelerating.

Meanwhile, much confusion remains about what AI actually is and how to define it. Broadly, the various AI developments fall into three buckets.

Weak AI, or Artificial Narrow Intelligence (ANI), is where we are right now. A Weak AI system is programmed to do a specific task, either replacing or assisting a human at that task, such as playing chess or Jeopardy.

Similarly, the current generation of virtual assistants such as Alexa and Siri relies on voice recognition: your recorded question is transmitted to a cloud-based service (for Alexa, the Alexa Voice Service, or AVS) that sifts large amounts of data, detects patterns, and returns a preprogrammed answer. Weak AI housed in an android is also what makes Sophia seem so personable and empathic.
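To make that pattern concrete, here is a minimal sketch in Python of the Weak-AI idea described above: match a transcribed question against known patterns and return a preprogrammed answer. The intents and replies are invented for illustration only; nothing here uses the real Alexa or Siri services.

    # Toy Weak-AI (ANI) assistant: match the transcribed question against
    # known keyword patterns and return a preprogrammed answer.
    # All intents and replies below are made up for illustration only.
    CANNED_RESPONSES = {
        ("weather", "rain", "forecast"): "Expect light rain this afternoon.",
        ("time", "clock"): "It is 3:42 p.m.",
        ("chess", "move", "opening"): "Knight to f3.",
    }

    def answer(transcribed_question: str) -> str:
        """Pick the canned reply whose keywords best match the question."""
        words = set(transcribed_question.lower().split())
        best_reply, best_score = "Sorry, I don't know that one.", 0
        for keywords, reply in CANNED_RESPONSES.items():
            score = len(words & set(keywords))
            if score > best_score:
                best_reply, best_score = reply, score
        return best_reply

    print(answer("What is the weather forecast today?"))

However clever the matching becomes, such a system never steps outside the patterns it was given, which is what keeps it in the Weak AI bucket.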

Strong AI, or Artificial General Intelligence (AGI), is the point at which a machine’s intellectual capability is functionally equal to a human’s. Initially, this includes problem solving, creativity, and reasoning: in effect, a generation of Alexa and Siri that does the equivalent of thinking.

To move from where we are today (Weak AI) to the next level (Strong AI) is a huge and, thus far, unmet challenge. One way of measuring progress is to ask how many calculations per second (cps) a human brain can achieve: the best estimate we have is 10 quadrillion cps. One Chinese supercomputer, which uses 24 million watts of electric power and cost $390 million to build, can do 34 quadrillion cps, so that computer is already faster. But the real tipping point comes when a $1,000 computer can match the speed of a human brain. Given our increasing rate of progress, that should happen in or before 2025, and Strong AI will then likely become as ubiquitous as Weak AI is today.
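For readers who want to see the arithmetic, here is a rough sketch of that kind of projection. The brain and supercomputer figures are the ones cited above; the $1,000 starting point and the doubling time are assumptions of ours, not figures from the book, and the result shifts with them.

    # Back-of-the-envelope arithmetic behind the comparison above.
    # Brain and supercomputer figures come from the text; the $1,000
    # starting point and doubling time are illustrative assumptions.
    import math

    BRAIN_CPS = 10e15            # ~10 quadrillion calculations per second
    SUPERCOMPUTER_CPS = 34e15    # the Chinese supercomputer cited above
    print(f"Supercomputer vs. brain: {SUPERCOMPUTER_CPS / BRAIN_CPS:.1f}x faster")

    CPS_PER_1000_USD = 1e13      # assumed: what $1,000 of hardware buys today
    DOUBLING_YEARS = 1.5         # assumed: price-performance doubling time
    doublings = math.log2(BRAIN_CPS / CPS_PER_1000_USD)
    print(f"{doublings:.1f} doublings, roughly {doublings * DOUBLING_YEARS:.0f} "
          f"years, until $1,000 of hardware matches the brain")

The point is not the particular year but how directly any such forecast depends on the assumed starting point and doubling rate.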

The holy grail of Strong AI is giving the machine the human attributes of self-awareness, sentience, and consciousness, which raises the much-debated question of whether a machine can achieve consciousness within a computational theory of mind. Keep in mind that many AI researchers are “computationalists” who believe that the human brain is essentially a computer and that, therefore, whatever our brains can do, computers can do even better. But consciousness may prove to be exempt from the computationalist hypothesis. Scientists may never be able to get a machine to have vivid experiences with seemingly intrinsic qualities, known as qualia, such as the redness of a tomato or the spiciness of a taco. Yet perceiving qualia and achieving consciousness may not matter much for the next stage of AI.

Artificial Super Intelligence (ASI) is defined by Professor Nick Bostrom, of Oxford’s Future of Humanity Institute, as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented: It could be a digital computer, an ensemble of networked computers, cultured cortical tissue, or what-have-you. It also leaves open whether the superintelligence is conscious and has subjective experiences.”

The prospect of ASI terrifies people like Tesla’s Elon Musk because the machine has no limits on its ability to learn. It doesn’t wake up one morning and tell itself, “I’m done. I’m as smart as I want to be.” Instead, it simply keeps learning, and as it gathers more intelligence, learning still more becomes easier and faster, creating what has become known as an Intelligence Explosion. The late Professor Stephen Hawking described it this way:

It’s clearly possible for a something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

Our goal isn’t to scare you, but one scenario from Tim Urban’s “The AI Revolution” looks like this:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. Ninety minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

The challenge for policymakers, researchers, and, indeed, the rest of us, as we try to adjust to what may lie ahead, is that nobody can say for certain what will happen or when. While some philosophers and scientists argue that humanity itself is under threat, others suggest that such arguments are wildly overblown. For the humble citizen stuck in the middle of all this, it’s hard to discern ground truth. What does seem clear is that AI is the largest single technology development in the history of our world. Whether it develops to threaten humanity’s existence in 10 years or 40, the threat cannot be ignored, because a thinking and evolving machine can evolve faster than we can control it. And, given the likelihood of technological determinism (if we can build it, we will build it), building will typically happen before we understand the consequences.

Adapted from Artificial Intelligence: Confronting the Revolution, by James Adams and Richard Kletter.
