The Artificial Mind and the Posthuman Future

February 3, 2013

Beginning with the myth of the sculptor Pygmalion and his statue-bride Galatea, the story of the artist’s creation that comes to life is among the most commonly recurring in world literature. Where the Greeks perceived divine intervention as necessary to endow matter with will, wit, and intellect, we in the modern era see the problem as a mere technical hurdle, one to be solved through concerted effort and scientific inquiry: hence the pursuit of artificial intelligence, or AI.

After languishing during the late 1970s and early 1980s, the field of AI has produced some interesting successes in the last two decades. There’s the chess-playing computer Deep Fritz, which beat the world champion but couldn’t tell you the difference between a rook and the Pope, and Stanley, the car that drives itself but has no idea where it’s going. These are fun breakthroughs, worthy of encouragement, but they hardly suggest that it is possible to create a machine that can honestly think.

J. Storrs Hall, chief scientist of Nanorex, argues that the bigger picture for AI is brighter than the sum of its parts. “After a half a century, the tortoise of AI practice is beginning to catch up to the hare of expectations,” he writes in his latest book, Beyond AI.

Hall makes a convincing case that human-level AI is coming, but the majority of his wonderfully written book focuses on where AI comes from, in terms of both the field’s history and the technologies and theories that form the basis of AI research.

Hall traces the roots of today’s AI back to an MIT mathematician named Norbert Wiener. In the early 1940s, the U.S. Army challenged Wiener to produce a better antiaircraft gun. The artillery the Army was using at the time didn’t target well and was prone to weird oscillations. On a whim, Wiener consulted a colleague at Harvard Medical School named Arturo Rosenblueth and learned that people who suffered from a neurological illness called “purpose tremor” were given to spasms similar to the guns’ seizures. He also realized that a gunner’s aim improved with the amount of information he or she had: how fast the target plane was going, what sort of defensive maneuvers it was capable of, how to wield the machine accordingly, and so on. The ideal targeting system, Wiener concluded, was one that performed like a human brain: it would see the object, recognize it for what it was, consider what to do about it, and then instruct the limbs to react.

“It became clear to Wiener and Rosenblueth that there were some compelling parallels between the mechanisms of prediction, communication, and control in the mechanical gun-steering systems and the ones in the human body,” writes Hall. This realization gave rise to the field of cybernetics, which, in turn, spawned AI.

Increases in computer processing speed and collaboration among researchers have helped AI progress considerably, especially in the last 10 years, and the rate of innovation is accelerating. Many researchers optimistically predict that AI will cross the “human level” threshold before the middle of this century. Hall is among the most hopeful. In the March-April 2006 issue of THE FUTURIST, Ray Kurzweil forecast that “our computer intelligence will vastly exceed biological intelligence by the mid-2040s,” to which Hall gamely answered, “He’s too conservative.”

In Beyond AI, Hall finesses that prediction somewhat. “Answering a question like ‘when will AI arrive?’ with a numerical date makes about as much sense as answering the ultimate question of life, the universe, and everything with ‘42,’” he writes, meaning that the question, in its broadness, is hopelessly inadequate to address the issue it seeks to explore.

Indeed, the advent of the computer age in the twentieth century has given rise to an existential dilemma in the twenty-first. If we already use AI to land planes, play chess, and drive cars, then what does it mean to produce an intelligence that performs on a par with humanity? Haven’t we already done so? To address this question, Hall advances a framework of six stages for understanding how AI is currently developing and where it might go in the years ahead.

1. The Hypohuman AI. As indicated by the Greek prefix hypo, meaning “under,” a hypohuman AI would naturally be inferior to human intelligence and subject to human will, a development stage we have already reached. AI entities that perform calculations and execute commands, such as helping aerial drones take pictures, can fairly be called hypohuman AIs.

2. The Diahuman AI. During this stage, AI crosses over into human territory in terms of capability. The term dia, as in diagonal, means “across.” “It’s tempting to call this ‘human-equivalent,’ but the idea of equivalence is misleading. It’s already apparent that some AI abilities (chess-playing) are beyond the human scale while others (reading and writing) haven’t reached it yet,” says Hall. The diahuman AI would also have the ability to learn, but not noticeably faster than a person.

3. The Parahuman AI. A parahuman AI, from the term para, or “alongside,” is one that could pass for a person (not necessarily in appearance) and may even be part human, harkening back to AI’s roots in cybernetics. Parahuman may also come to refer to humans who use computer devices, such as implants, to improve biological performance. The parahuman stage could encompass either, but more likely both. Hall explains: “The upside of the parahuman AI is that it will enhance the interface between our native senses and abilities, adapted as they are for a hunting and gathering bipedal ape, and the increasingly formalized and mechanized world we are building. The parahuman PC [personal computer] should act like a lawyer, doctor, accountant, and secretary, all with deep knowledge and endless patience.”

4. The Allohuman AI. From the prefix allo, meaning “other,” an allohuman AI would be a different but comparable intelligence: a being functionally superior to the average person in many respects but inferior in others, with a crude but still humanlike awareness of the world around it. An example would be the twittering, nervous C-3PO of the popular Star Wars films, who is fluent in over six million forms of communication but can’t tell a joke in any one of them.

5. The Epihuman AI. The epihuman artificial intelligence would possess what Hall calls “weakly godlike” powers and the ability to outperform humans in virtually every way, but would not be an unfathomably powerful being.

“We can straightforwardly predict, from Moore’s Law, that 10 years after the advent of a (learning but not radically self-improving) human-level AI, the same software running on machinery of the same cost would do the same human-level tasks 1,000 times as fast as we,” writes Hall. An epihuman AI would be able to “read an average book in one second with full comprehension; take a college course, with all due homework and research, in ten minutes; [and] write a book, again with ample research, in two or three hours,” to which one might add win a Nobel Prize by noon and retire by 12:01.
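
(A quick back-of-the-envelope check on that figure; the arithmetic here is mine, not Hall’s. If Moore’s Law is read as a doubling of price-performance roughly every twelve months, then a decade brings ten doublings, and 2^10 = 1,024, or just about the 1,000-fold speedup Hall cites.)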

6. The Hyperhuman AI. During the sixth and final AI stage, humanity would see the birth of a sentient entity as intellectually productive and capable as the entire human race. For novice AI watchers, this final scenario is the most worrisome. “Where does an 800-pound gorilla sit? In the old joke, anywhere he wants to. Much the same thing will be true of hyperhuman AI,” says Hall, “except where it has to interact with other AIs. The really interesting question, then, will be: what will it want?”

In the end, the gorilla metaphor may be a more useful one for understanding AI than that of the beautiful statue springing to life. In much the same way that a wild animal raised in captivity will eventually revert to its natural instincts, so any highly sophisticated computer program will almost certainly develop its own interests apart from, and perhaps in direct conflict with, those of its creators.

Hall’s response to the threat of runaway AI is the same one that techno-enthusiasts have been repeating for years. Like most AI experts, he’s an optimist by necessity. “The things we value-those things will be better cared for by, more valued by, our moral superiors whom we have this opportunity to bring into being. Our machines will be better than we are-but having created them, we will be better, as well,” Hall writes.

In other words, trust the gorilla. What choice do you have?

Originally published in THE FUTURIST, September-October 2007.
