The AI Chasers

February 3, 2013

The advent of a human-level artificial intelligence, a machine capable of the richness of expression and nuance of thought that we associate with humanity, promises to generate tremendous wealth for the inventors and companies that develop it.

According to the Business Communications Company, the market for AI software and products reached $21 billion in 2007, an impressive figure that doesn’t touch on the wealth that a human-level artificial intelligence could generate across industries. At present, the world’s programmers have succeeded in automating the delivery of electricity to our homes, the trading of stocks on exchanges, and much of the flow of goods and services to stores and offices across the globe, but, after more than half a century of research, they have yet to reach the holy grail of computer science: an artificial general intelligence (AGI).

Is the tide turning? At the second annual Singularity Summit in San Francisco last September, I discovered that the thinkers and researchers at the forefront of the field are locked in an intellectual battle over how soon AGI might arrive and what it might mean for the rest of us.

The Not-So-Rapid Progress Of AI Research

The scientific study of artificial intelligence has many roots, from IBM’s development of the first number-crunching computers of the 1940s to the U.S. military’s work in war-game theory in the 1950s. The proud papas of computer science (Marvin Minsky, Charles Babbage, Alan Turing, and John von Neumann) were also the founding fathers of the study of artificial intelligence.

During the late 1960s and early 1970s, money for AI work was as easy as expectations were unrealistic, fueled by Hollywood images of cocktail-serving robots and a HAL 9000 (a non-homicidal one, presumably) for every home. In an ebullient moment in 1967, Marvin Minsky proclaimed, “Within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved,” by which he meant a humanistic AI. Public interest dried up when the robot army failed to materialize by the early 1980s, a period that researchers refer to as the “AI winter.” But research, though seemingly dormant, continued.

The field has experienced a revival of late. Primitive-level AI is no longer just a Hollywood staple. It’s directing traffic in Seattle through a program called SmartPhlow, guiding the actions of hedge-fund managers in New York, executing Internet searches in Stockholm, and routing factory orders in Beijing over integrated networks like Cisco’s. More and more, the world’s banks, governments, militaries, and businesses rely on a variety of extremely sophisticated computer programs, sometimes called “narrow AIs,” to run our ever-mechanized civilization. We look to AI to perform tasks we can easily do ourselves but haven’t the patience for any longer. There are 1.5 million robot vacuum cleaners already in use across the globe. Engineers from Stanford University have developed a fully autonomous self-driving car named Stanley, which they first showcased in 2005 at the Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge race. Stanley represents an extraordinary improvement over the self-driving machines that the Stanford team was showing off in 1979. The original self-driving robot needed six hours to travel one meter. Stanley drove 200 kilometers in the same time.

“The next big leap will be an autonomous vehicle that can navigate and operate in traffic, a far more complex challenge for a ‘robotic’ driver,” according to DARPA director Tony Tether.

In other words, robot taxis are coming to a city near you.

The decreasing price and increasing power of computer processing suggest that, in the decades ahead, narrow AIs like these will become more effective, numerous, and cheap. But these trends don’t necessarily herald the sort of radical intellectual breakthrough necessary to construct an artificial general intelligence.

Many of the technical (hardware) obstacles to creating an AGI have fallen away. The raw computing power may finally exist, and be cheap enough, to run an AGI program. But the core semantic and philosophical problems that science has faced for decades are as palpable as ever today. How exactly do you write a computer program that can think like a human?

The War between the “Neats” and the “Scruffies”

There are two paths to achieving an AGI, says Peter Voss, a software developer and founder of the firm Adaptive A.I. Inc. One way, he says, is to “continue developing narrow AI, and the systems will become generally competent. It will become obvious how to do that. When that will happen or how it will come about, whether through simbots or some DARPA challenge or something, I don’t know. It would be a combination of those kinds of things. The other approach is to specifically engineer a system that can learn and think. That’s the approach that [my firm] is taking. Absolutely I think that’s possible, and I think it’s closer than most people think: five to 10 years, tops.”

The two approaches outlined by Voss (tinkering with mundane programs to make them more capable and effective, or designing a single comprehensive AGI system) speak to the long-standing philosophical feud that lies at the heart of AI research: the war between the “neats” and the “scruffies.” J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine (Prometheus Books, 2007), reduces this dichotomy to a scientific approach vs. an engineering mind-set.

“The neats are after a single, elegant solution to the answer of human intelligence,” Hall says. “They’re trying to explain the human mind by turning it into a math problem. The scruffies just want to build something, write narrow AI codes, make little machines, little advancements, use whatever is available, and hammer away until something happens.”

The neat approach descends from computer science in its purest form, particularly the war-game studies of von Neumann and his colleagues in the 1930s and 1940s. The 1997 defeat of world chess champion Garry Kasparov by IBM’s Deep Blue computer is considered by many the seminal neat success. Up until that moment, the mainstream scientific community generally accepted the premise that AIs could be written to perform specific tasks reasonably well, but largely resisted the notion of superhuman computing ability. Deep Blue proved that an AI entity could outperform a human at a supposedly “human” task: perceiving a chess board (Deep Blue could evaluate 200 million board positions per second) and plotting a strategy (74 moves ahead, as opposed to 10, the human record).
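
To make “plotting a strategy” several moves ahead a little more concrete, here is a minimal sketch of the kind of game-tree search chess programs are built on. It is illustrative only, written in Python over a toy tree of position scores rather than actual chess, and it is not IBM’s code; Deep Blue ran a far more elaborate version of this search on custom hardware.

```python
# Illustrative only: depth-limited minimax search with alpha-beta pruning,
# the family of game-tree search that chess programs rely on. The toy "game"
# here is a fixed tree of payoffs, not chess.

def minimax(node, depth, maximizing, alpha=float("-inf"), beta=float("inf")):
    # Leaves are plain numbers (the static evaluation of a position);
    # interior nodes are lists of child positions.
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, depth - 1, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:   # the opponent would never allow this line: prune it
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, minimax(child, depth - 1, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# A three-ply toy tree: the searcher picks the branch whose worst case is best.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(minimax(tree, depth=3, maximizing=True))   # -> 5
```

Deep Blue’s advantage came from running this sort of look-ahead over hundreds of millions of real chess positions every second.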

But the success of Deep Blue was limited. While the machine demonstrated technical expertise at chess, it didn’t show any real comprehension of the game it was playing, or of itself. As Paris Review editor George Plimpton observed after the match, “The machine isn’t going to walk out of the hotel there and start doing extraordinary things. It can’t manage a baseball team, can’t tell you what to do with a bad marriage.”

The validity of this observation isn’t lost on today’s AI community.

“What we thought was easy turned out to be hard, and what we thought was hard turned out to be easy,” says Stephen Omohundro, founder of the firm Self-Aware Systems. “Back in the early Sixties, people thought that something like machine vision would be a summer project for a master’s student. Today’s machine vision systems are certainly better than they were, but no vision system today can reliably tell the difference between a dog and a cat, something that small children have no problem doing. Meanwhile, beating a world chess champion turned out to be a snap.”

Human Hardware

So why are computers better at chess and people better at distinguishing dogs from cats? The answer lies in the unique nature of the human brain. That three-pound lump of grey matter we’ve got in our skulls simply isn’t well-suited for solving complex, theoretical problems. Few of us can comprehend the dense algorithms that allow Google Maps, the New York Stock Exchange, or the local utility company to operate continuously.

Unlike a machine, which an engineer can design to address specific abstract problems, the human brain evolved in response to natural environments where we were called upon to forage, hunt, avoid physical danger, and cooperate with other members of our species. As a result, we know how to do a lot of little things very well: We can spot patterns in nature, track multiple moving objects and figure out what they are, devise strategies for catching prey based on rapidly changing conditions, and evade the occasional predator using only our wits. A fancy computer term for this is parallel processing, or working through many millions of seemingly unrelated little problems at once. Computers can parallel process, too, but they don’t do so with the fluidity or dexterity of humans. The challenge that today’s AI researchers face is how to even identify, much less emulate, all the little processes that a human brain performs both simultaneously and unconsciously.
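
To put a concrete face on the machine side of that comparison, here is a minimal Python sketch of parallel processing: several independent little jobs handed out to worker processes at once. The tasks themselves are arbitrary placeholders, not a model of anything the brain does.

```python
# A minimal illustration of machine-style parallel processing: several
# independent "little problems" worked on at the same time. The tasks are
# arbitrary number-crunching placeholders.
from concurrent.futures import ProcessPoolExecutor
import math

def little_problem(n):
    # Stand-in for one small, self-contained task.
    return sum(math.isqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [200_000, 400_000, 600_000, 800_000]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(little_problem, jobs))   # runs across workers in parallel
    print(results)
```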

Enter the scruffies.

The advent of the semiconductor transistor in the late 1940s, and of the integrated circuit a decade later, opened up a completely different area of research in computer science, wherein hardware and code could be combined almost spontaneously to achieve surprising results. This is the basis for scruffy research.

As a group, scruffies take a more experimental approach to AI and put a heavy emphasis on robotics. Rodney Brooks, former director of the MIT AI lab and co-founder of the iRobot Corporation (makers of the Roomba robot vacuum cleaner), is perhaps the most famous scruffy. He takes issue with Voss’s five-to-10-year time horizon for writing an AGI.

“It’s nice to think of AI as being a single technical hurdle,” Brooks says. “But I don’t believe that’s the case. There’s a whole raft of things we don’t understand yet. We don’t understand how to organize it; we don’t understand what its purpose is; we don’t understand how to connect it to perception; we don’t understand how to connect it to action. We’ve made lots of progress in AI. We’ve got lots of AI systems out there that affect our everyday lives all the time. But general AI? It’s early days-early, early days.”

Can Machines Learn?

Many researchers have discovered that creating a machine that can learn is an essential first step in developing a system that can think.

“Bayesian networks [see sidebar] are a good example of systems that have this ability to learn,” says Omohundro, “but the approach is rational and conceptually oriented. That’s the direction I’m going in. It’s a merger of the two schools. The kinds of systems I build are very carefully thought out and have a powerful rational basis to them, but much of the knowledge and structure comes from learning, their experience of the world, and their experience of their own operation.”

What do you teach a learning system to compel humanistic thought? According to Peter Norvig, director of research at Google, understanding human intelligence means first understanding what the brain does with words.

“I certainly believe language is critical to the way we think, the way we can form abstractions and think more carefully,” says Norvig. “The brain was meant for doing visual processing primarily: A large portion of the cortex is for that. It wasn’t meant for doing abstract reasoning. The fact that we can do abstract reasoning is an amazing trick. We’re able to do it because of language. We invent concepts and give them names, and that lets us do more with a concept because we can move it around on paper. Language drives all our thinking.”

Google is currently working on instantaneous language translation based on probabilistic modeling, translating articles in Chinese into English faster and with greater accuracy, says Norvig. “We tell the program that the one is a translation to the other. Then we refine the process through more data, more words, more articles.”
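
Norvig’s recipe (tell the program that one text is a translation of the other, then refine with more data) can be sketched in miniature. The toy example below is an illustration only, not Google’s system: it simply counts which words co-occur across a handful of aligned sentence pairs and turns the counts into rough translation probabilities.

```python
# A toy illustration of the statistical approach Norvig describes: given
# sentence pairs known to be translations of each other, count which words
# co-occur and turn the counts into rough translation probabilities.
# Production systems use far more sophisticated alignment models and
# billions of words of data.
from collections import defaultdict

pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
]

counts = defaultdict(lambda: defaultdict(int))
for src, tgt in pairs:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1          # every co-occurrence is weak evidence

def translation_prob(s, t):
    total = sum(counts[s].values())
    return counts[s][t] / total if total else 0.0

print(translation_prob("house", "maison"))   # 0.5 -- already the best guess for "house"
print(translation_prob("the", "maison"))     # 0.25 -- more data would push this lower still
```

Feeding the counts more sentence pairs, “more data, more words, more articles,” is what sharpens the probabilities.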

The vast amount of data, news reports, and language content that Google accesses is part of the reason the 10-year-old Internet firm has a bigger stake in AI than just about anybody. There’s plenty of money to be made, but more importantly, any program that receives language input from humans on a massive scale could, theoretically, evolve over time into a humanistic AI, or provide a working basis for one. Every time you go to your computer and open Google, Yahoo, Ask.com, or any other search engine to look up some fact or figure, you might be doing more than getting information; you may be teaching a type of burgeoning mind how to think.

Barney Pell, CEO of Powerset, predicts that interest in AI will increase as search engine technology advances. Pell’s firm is working on a natural language-based search engine that he hopes will compete with Google.

“Search engines today are built on a concept of keywords,” he says. “They don’t really understand the documents that you search or the user’s query. Instead, they take your query as a bag of words, and they try to match keywords to keywords. The result is that the user, the human, has to try to figure out what words would appear in the documents that [he or she] wants. Some people are very good at that game. They use very advanced syntax and features and they get a better search experience. Others feel like they’re missing something. The time is coming when people will be able to use their own natural built-in power to say what they want just in English, for example, and have computers rise to work with the meaning and the expression of the question and match that against the meaning of the documents, giving you a different search experience. We at Powerset expect to come out with a fairly large search index, where a system has read every single sentence on millions of Web pages and is letting users do a search with natural language, over the course of the next year.”
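
Pell’s “bag of words” description is easy to make concrete. The sketch below is an illustration only, not Powerset’s or any production engine’s code: it ranks documents purely by how many query keywords they share, which is exactly why the user ends up guessing what words the right document would contain.

```python
# A toy keyword-matching search engine of the kind Pell describes: the query
# is treated as a bag of words and documents are ranked by keyword overlap.
# Illustrative only; real engines add ranking signals, stemming, link analysis, etc.

docs = {
    "doc1": "why do bluebirds sing in the spring",
    "doc2": "bluebird nesting boxes for sale",
    "doc3": "songbird vocalization is a mating and territorial display",
}

def keyword_score(query, text):
    query_words = set(query.lower().split())
    doc_words = set(text.lower().split())
    return len(query_words & doc_words)      # shared keywords, nothing more

query = "why does the bluebird sing"
ranked = sorted(docs, key=lambda d: keyword_score(query, docs[d]), reverse=True)
print(ranked)   # doc3 scores zero: it answers the question
                # but shares almost no keywords with it.
```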

Pell forecasts that within the next five years, we’ll be interacting with search engines as fluidly as we do with carbon-based customer-service representatives. But our interaction won’t be limited to what questions a human might be able to answer off the top of his or her head. Instead, we’ll be able to ask any question at all. Want to know why the bluebird sings? Forget the keyword hunt; simply go to your search engine, ask your question, and get a straight reply.

“There are already people tracking the length of the average query, and it’s been steadily increasing from two words to three words, steadily approaching four words,” says Pell. “There’ll be a crossover point where queries expressed in regular English will exceed the proportion that use keywords. It’s a concrete metric we can track. I’m going to call that in five years from now. Once that point is reached, companies will start pouring more money into natural language technology, AI, conversational interface, and semantics. The pace will pick up, and it will take people by surprise.”

Pell sees conversational artificial intelligence, a precursor to AGI, becoming part of our daily lives away from the keyboard, as well. In the future, he says, we’ll think of AI as a household utility as common as running water, operating in the background of our daily lives. “We’ll definitely get to the point where you will expect to engage your household systems in conversation,” he says. “But we’re a long way from that. In the meantime, over the next decade, we’ll expect to use voice rather than type to interface with all our systems: voice in, voice and data out.”

Life in Second Life

Like Norvig and Pell, Ben Goertzel, a long-haired, jeans-clad AI superstar and author of From Complexity to Creativity (Plenum, 1997), also sees the birth of AGI as intimately bound up in the Internet. But Goertzel believes that online games offer a more promising avenue of research than search engines alone.

“My prediction is that AI in virtual worlds may well serve as the catalyst that refocuses the AI research community on the grand challenge of creating AGI at the human level and beyond,” he writes in a recent essay for KurzweilAI.net. Goertzel’s software firm, Novamente, is experimenting with artificially intelligent pets for the popular massively multiplayer online world Second Life. He says the pets can “carry out spontaneous behaviors while seeking to achieve their own goals, and can also specifically be trained by human beings to carry out novel tricks and other behaviors, which were not programmed into them, but rather must be learned by the AI on the fly.” Goertzel and company hope to launch a commercial version later in 2008. “These simpler virtual animals are an important first step,” says Goertzel, “but I think the more major leap will be taken when linguistic interaction is introduced into the mix, something that, technologically, is not at all far off. Take a simpler virtual animal and add a language engine, integrated in the appropriate way, and you’re on your way.”

Goertzel’s, Pell’s, and Norvig’s research suggests that a real thinking machine is just as likely to emerge in front of our eyes on our home computers as it is to come out of DARPA. If that happens, we can all take a little morsel of the credit.

Growth in the use and importance of these Internet-based AI systems is virtually guaranteed, Kurzweil writes in The Singularity Is Near. Information exchange is based on the trading of data. Robots can communicate data more efficiently than babbling humans. “As humans, we do not have the means to exchange the vast patterns of inter-neuronal connections and neurotransmitter concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication,” says Kurzweil.

Unlike people, AI entities can communicate completely and immediately via binary code and electric current. More communication means faster command execution, and that means greater productivity.

As we continue to transfer our knowledge to the Web, posting more blogs, technical reports, news articles, academic writings, and the like, and as we continue to develop programs and AI systems to help us categorize, store, retrieve, and analyze data, those interlinked systems are accumulating more knowledge about human civilization. If Kurzweil, Hall, and other AI watchers are correct, these systems will eventually learn to behave and process information in a humanistic way. We may be hastening a day when any labor-intensive task can be automated or outsourced to an artificially intelligent entity, a day when such entities might be able to communicate, perform, govern, and even create art more effectively, persuasively, or beautifully than human beings. Kurzweil may have already invented a system to do precisely that.

According to its official patent (#6,647,395), the Kurzweil “poetic” computer program can actually read a poem, analyze what the poem is about, and then use that information to write coherent lyrical prose based on what the program perceives to be human language patterns. As Kurzweil told reporter Teresa Riordan of the New York Times, “The real power of human thinking is based on recognizing patterns. The better computers get at pattern recognition, the more humanlike they will become.” The program is available from Kurzweil’s Web site for $19.

Whether born of the Internet or the military, in one decade or 10, AGI is coming. If human-level AI exists within the realm of possibility, we will eventually create it. We’re doing so, incrementally, already. But what does that mean for the future?

Chickening Out of the Brave New World

“If popular culture has taught us anything, it is that someday mankind must face and destroy the growing robot menace. . . . How could so many Hollywood scripts be wrong?” writes robotics engineer Daniel Wilson in his satirical book, How to Survive a Robot Uprising. In it, Wilson captures our half-serious, half-ironic robot phobia with great aplomb. Hollywood has spent the last quarter century turning AI’s worst-case scenario, the robot insurrection, into an absurd cliché. Between the successful Terminator and Matrix franchises and countless Saturday morning cartoon show villains, it’s simply impossible to take the threat of what researchers call “runaway AI” very seriously. Not surprisingly, many AI watchers dismiss the scenario as well.

“I don’t think we’re going to have runaway AI in any sort of intentional form,” says Brooks. “There may well be accidents along the way where systems fail in horrible ways because of a virus or bug. But I don’t believe that the malicious AI scenario makes sense. There may be malicious intent from people using AI systems as vehicles. But I don’t think malicious intent from the AI itself is something that I’m going to lose sleep over in my lifetime. Five hundred years from now? Maybe.”

Others, like Omohundro, take a more cautious view. “The worst case,” he says, “would be an AI that takes off on its own momentum, on some very narrow task, and, in the process, squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity all could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, diseases, and the fact that people have to work at jobs that aren’t necessarily fulfilling, all of those could be taken care of by machine. This could usher in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.”

There’s no way to know whether the worst-case scenario is realistic until our new Borg overlord IMs us with a list of demands. Dwelling on this scenario is probably unproductive. As venture capitalist and PayPal co-founder Peter Thiel says, “AI is so far out that it’s the only thing that makes sense, from a venture capital perspective, to get involved in. The Singularity will either be very successful and the greatest thing to happen to markets ever, or it would be a disaster, destroy the world, and there would be nothing left to invest in. If you’re betting that the world is going to end, even if you’re right, you’re not going to make a lot of money.”

A more interesting, complex, and frightening question is, How might the AI Era change human culture and behavior?

In the techno-utopia of Kurzweil and others, humans interact effortlessly with machines and no piece of information is ever out of reach for longer than the fraction of a second required to digitally process it. As a result, many of the skills and much of the knowledge we’ve worked hard to build up over centuries are as irrelevant to daily life as the ability to forage for food or hunt with a bow. The only foreseeable way that the assortment of abilities, aptitudes, and talents that we call “expertise” might endure in the context of ever-expanding AI is if society makes a conscious decision to perpetuate them. Millions of people would have to voluntarily choose to do their own data research, write their own reports, read their own books, make their own stock trades, drive their own cars, and the like, even though other, more-immediate methods for accomplishing similar errands are readily available.

This is not an encouraging prospect. The notion that people would voluntarily choose an antique technology over the immediacy and convenience of a machine that can do the thinking and acting for them flouts our most basic understanding of human nature.

Rodney Brooks waves the scenario away.

“When I was a boy in elementary school,” he says, “there was a big fuss about using ballpoint pens, even fountain pens. We had to know how to use a nib and ink because, they said, ‘if we lost that skill later in life, we would not be able to get along.’ People keep saying, ‘they’re losing that skill and this.’ But they’re gaining other skills and they’re adapting to modern life. I just don’t buy it. People can become fantastic at using Google and getting information. Maybe a different set of people were fantastic at using other skills, but it’s a new set of survival skills, and people that are better at it will prosper.”

I’m less certain.

Every new technology forces the society that created it to make a trade-off. A skill or activity that had been important becomes unimportant. The artisan, the welder, stonemason, cobbler, or singer of epic poems becomes a relic. Knowledge, through disuse, is lost.

In his landmark novel The Time Machine, the Victorian writer H.G. Wells portrays a future culture similar to that of the robotically run utopia of Kurzweil and Brooks. But in the Wells scenario, the privileged classes, those with unlimited access to labor-saving devices and services, have no need to expend effort to care for themselves in any way. As a result, they’ve devolved into a race of mute, effete creatures, the Eloi: physically dependent on mechanical processes they can’t comprehend, unaware of any past or future, and doomed, by and large, to a miserable and violent death. AI probably won’t turn us into Eloi, at least not overnight. But, like any technology, it has the power to either liberate or limit depending on the choices, talents, and wisdom of those who use it.

Will faster and better AI systems receive any sort of serious governmental scrutiny? If they generate the sort of wealth that people like Thiel, Kurzweil, and others predict, the probable answer is No. As a species that has prospered by virtue of our inventiveness, modern humanity is perennially eager to incorporate new technologies into our daily lives and then let government or the free market address the effects of our shortsightedness after the fact.

This messy, ill-considered process brought us the automobile and, reciprocally, the safety belt; Scotchgard and the mandatory smoke detector; asbestos and the asbestos class-action lawsuit. It’s the story of our stumbling, haphazard method of inventing things and throwing them out into the world, a method that we, blindly and blissfully, call progress. It’s also the likely story of how artificial intelligence will evolve in the future.

 

[Sidebar]

The PackBot, a military robot developed by Rodney Brooks’s iRobot Corporation, is “rugged and yet light enough to be deployed by one person. A video-game style controller makes this robot easy to learn and use . . . [and] keeps warfighters and first responders safe.”

BAYESIAN NETWORKS

Bayesian nets use a probabilistic model to assign degrees of belief to uncertain variables and to revise those beliefs as evidence comes in. To pose an extremely simple example, if the janitor comes on Wednesday and Friday, and the janitor is here today, the chances of today being Wednesday are very good. We can award the “it’s Wednesday” option a 50% likelihood. The system is refined when you add more relevant data: If the janitor comes in the morning on Wednesday, and the afternoon on Friday, and the janitor is here, and it’s 10 a.m., then the already good chances of it being Wednesday double (assuming a universe where janitors are always where they’re scheduled to be).
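
For readers who want the arithmetic spelled out, the janitor riddle can be written as a literal Bayesian update. The sketch below makes the example’s hidden assumptions explicit (a five-day work week, a uniform prior over days, a janitor who is always where the schedule says) and is not tied to any particular Bayesian-network software.

```python
# The janitor riddle written out as a small Bayesian update.
# Assumptions made explicit: a five-day work week, a uniform prior over days,
# and a janitor who is always where the schedule says.

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
prior = {d: 1 / len(days) for d in days}

def update(belief, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood, then normalize."""
    unnorm = {d: belief[d] * likelihood(d) for d in belief}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

# Evidence 1: the janitor is here (comes only on Wednesday and Friday).
here = update(prior, lambda d: 1.0 if d in ("Wed", "Fri") else 0.0)
print(here["Wed"])            # 0.5 -- the "50% likelihood"

# Evidence 2: it's also 10 a.m., and only the Wednesday visit is a morning one.
here_morning = update(here, lambda d: 1.0 if d == "Wed" else 0.0)
print(here_morning["Wed"])    # 1.0 -- the "already good chances... double"
```

Run as written, the two printed values are 0.5 and 1.0, matching the arithmetic above.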

What seems like a silly children’s riddle may also be the key to accomplishing something remarkable-teaching a system of code, transistors, and electricity to meaningfully differentiate between Wednesday and Friday. -PMT

 

[Sidebar]

FOLLOWING THE BRAIN MAP

The only thing that robotics engineers seem to like talking about more than computers is neuroscience, particularly functional magnetic resonance imaging, or fMRI. In the past decade, fMRI, which takes live pictures of the blood flow being diverted throughout the brain during thought processing, has given the world a unique window into the origins of thought.

For researchers considering how to design a physical system that can think, referring to the quintessential thinking machine, the human brain, is a no-brainer. In his bestselling book The Singularity Is Near, inventor Ray Kurzweil contends that an artificial general intelligence (AGI) will necessarily be patterned on the biological processing of a brain and that full 3-D scans will allow us to reverse-engineer a human brain sometime in the 2020s. Reverse engineering, he contends, is a key strategy for creating an AGI.

Other researchers, like Steve Omohundro, contend that, while AI watchers have a great deal to learn from neuroscience, following the brain map may lead to a dead end.

“I don’t prefer the brain scan idea as a route to AI,” he says. “I don’t think we want to build machines that are copies of human brains. The direction I’m pursuing, potentially, could actually produce a much more powerful system based on theorem proving. But theorem proving is very hard. No one has been able to do it.” -PMT

Originally published in THE FUTURIST, March-April 2008.

Download the original PDF
