"The Singularity: Humanity's Last Invention?"

ROBERT SIEGEL, host:

From NPR News, this is ALL THINGS CONSIDERED. I'm Robert Siegel.

How do you think life, as we know it, will end? Nuclear war? Climate change? How about an out-of-control computer?

(Soundbite of movie, "2001: A Space Odyssey")

Mr. DOUGLAS RAIN (Actor): (as HAL 9000) I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.

SIEGEL: That, of course, is HAL 9000 from Stanley Kubrick's science fiction masterpiece "2001: A Space Odyssey."

Well, in 2011, some people think we're getting closer to inventing an artificial intelligence that could figure out how to make itself smarter. If so, they say, it might be the last thing humans ever invent.

NPR's Martin Kaste has the story.

MARTIN KASTE: There's an apartment in downtown Berkeley where they're trying to save the world.

(Soundbite of knocking)

KASTE: Hello.

It's four apartments, actually, which have been rented by something called the Singularity Institute for Artificial Intelligence.

Mr. KEEFE ROEDERSHEIMER (Software Engineer, Singularity Institute for Artificial Intelligence): Hi, how is it going?

KASTE: Good. Thank you.

Mr. ROEDERSHEIMER: Can I offer you guys some tea?

KASTE: Keefe Roedersheimer is one of the institute's research fellows. Over cups of green tea, he explains that he's a software engineer who's done work for NASA, and that his idea of a good time is teaching a computer how to play poker like a human.

But right now, at the institute, he's trying to predict the rate of advancement of artificial intelligence, or A.I.

Mr. ROEDERSHEIMER: So it's about knowing when this could happen.

KASTE: By this, he's talking about the invention of a computer that's not only smart but also capable of improving itself.

Mr. ROEDERSHEIMER: Is able to look at its own source code and say, ah, if I change this, I'm going to get smarter. And then by getting smarter, it sees new insights into how to get smarter. And then by having those insights into how to get smarter, it modifies its source code and gets smarter and gets some insights. And that creates an extraordinarily intelligent thing.
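
What Roedersheimer describes is a feedback loop: every improvement makes the next improvement easier to find, so the gains compound rather than merely add up. As a purely illustrative sketch - the Agent class, its numbers and its method names below are invented for this explanation, not anything the institute has built - the compounding looks roughly like this in Python:

    # Toy model of the self-improvement loop described above. "Intelligence"
    # is just a number here, and finding an improvement is a hypothetical
    # stand-in for inspecting and rewriting one's own source code.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        intelligence: float  # stand-in for how capable the current source code is

        def find_improvement(self) -> float:
            # A smarter agent spots a bigger improvement in its own source code.
            return 0.1 * self.intelligence

        def rewrite_self(self, gain: float) -> None:
            # Applying the change makes the next round of inspection more productive.
            self.intelligence += gain

    agent = Agent(intelligence=1.0)
    for generation in range(10):
        agent.rewrite_self(agent.find_improvement())
        print(f"generation {generation}: intelligence = {agent.intelligence:.2f}")
    # Each pass builds on the last, so the growth is geometric rather than
    # linear - the "extraordinarily intelligent thing" in the quote above.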

KASTE: They call this the A.I. singularity. Because the intelligence could grow so fast, human minds might not be able to keep up. And therein lies the danger.

You've already seen this movie.

(Soundbite of movie, "Terminator 2: Judgment Day")

Mr. ARNOLD SCHWARZENEGGER (Actor): (as The Terminator) Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Ms. LINDA HAMILTON (Actress): (as Sarah Connor) Skynet fights back.

Mr. SCHWARZENEGGER: (as The Terminator) Yes.

KASTE: They kind of hate it at the institute when you quote the "Terminator," but Roedersheimer says, at least, those movies gave people a sense of what could happen.

Mr. ROEDERSHEIMER: That's an A.I. that could get out of control. But if you really think about it, it's much worse than that.

KASTE: Much worse than "Terminator"?

Mr. ROEDERSHEIMER: Much, much worse.

KASTE: How could it possibly - that's a moonscape with people hiding under burnt-out buildings and being shot by lasers. I mean, what could be worse than that?

Mr. ROEDERSHEIMER: All the people are dead.

KASTE: In other words, forget the heroic human resistance. There'd be no time to organize one. Somebody presses enter, and we're done.

The singularity idea has floated around the edges of computer science since the 1960s, but these days, it's the subject of Silicon Valley philanthropy.

At a fund-raising party in San Francisco, the co-founder of PayPal, Peter Thiel, explains why he supports the Singularity Institute.

Mr. PETER THIEL (Co-Founder, PayPal): People are not worried about what supersmart computers will do to change the world, because we don't see those every day. And so I suspect that there are a lot of these issues that are being underestimated.

KASTE: Also at the party is Eliezer Yudkowsky, the 31-year-old who co-founded the institute. He's here to mingle with potential new donors. As far as he's concerned, preparing for the singularity takes precedence over every other charitable cause.

Mr. ELIEZER YUDKOWSKY (Research Fellow and Director, Singularity Institute for Artificial Intelligence): If you want to maximize your expected utility, you try to save the world and the future of intergalactic civilization instead of donating your money to the society for curing rare diseases and cute puppies.

KASTE: Yudkowsky doesn't have formal training in computer science, but his writings have a following among some who do. He says he's not predicting that the future super A.I. will necessarily hate humans. It's more likely, he says, that it'll be indifferent to us - but that's not much better.

Mr. YUDKOWSKY: While it may not hate you, you're made of atoms that it can use for something else. So it's probably not a good thing to build that particular kind of A.I.

KASTE: What he and the institute are trying to do, he says, is start the process of figuring out how to build what he calls friendly A.I. before somebody inevitably builds the unfriendly variety.

But that day still seems a long way off when you look at the current state of A.I.

Good morning. Hello?

Unidentified Female: Are you looking for Eric?

KASTE: A computerized receptionist guards the office of Microsoft distinguished scientist Eric Horvitz.

Unidentified Female: Eric is working on something now. I think he won't mind too much, though, if you interrupt him. Would you like to go in?

KASTE: Horvitz is past president of the Association for the Advancement of Artificial Intelligence. He's working on systems that can greet visitors, do basic medical diagnoses and even read human body language.

Mr. ERIC HORVITZ (Distinguished Scientist, Microsoft): One whole direction we're going in is to bring together machine vision, machine learning, conversational abilities to explore what we call integrative A.I. And this is one path to brighter intelligences someday.

KASTE: But Horvitz doubts that one of these virtual receptionists could ever lead to something that takes over the world. He says that's like expecting a kite to evolve into a 747 on its own.

So does that mean he thinks the singularity is ridiculous?

Mr. HORVITZ: Well, no. I think there's been a mix of views, and I have to say that I have mixed feelings myself.

KASTE: In part because of ideas like the singularity, Horvitz and other A.I. scientists have been doing more to look at some of the ethical issues that might arise over the next few years with narrow A.I. systems.

They've also been asking themselves some more futuristic questions. For instance, how would you go about designing an emergency off switch for a computer that can redesign itself?

Mr. HORVITZ: I do think that the stakes are high enough where even if there was a low, small chance of some of these kinds of scenarios, that it's worth investing time and effort to be proactive.

KASTE: Still, many see the Singularity Institute and like-minded organizations as fringe. One computer scientist let slip the word "cultish"; others mock the singularity as "the rapture of the nerds."

At the institute, they shrug this off. As far as they're concerned, it's just a matter of being rational about the future - relentlessly rational.

Jasen Murray, for instance, says he has no illusions about the institute's ability to succeed at its mission.

Mr. JASEN MURRAY (Program Manager, Singularity Institute for Artificial Intelligence): We have between 30 and 60 years to figure out this - to solve this ridiculously hard problem that we probably have a low chance of solving correctly and - ah, this is just really bad.

KASTE: But they're willing to try. The institute is looking to move out of its apartments in Berkeley and buy a big old Victorian house. That way, its researchers can have a more permanent home for whatever time humanity has left.

Martin Kaste, NPR News.