
Interview with 'Our Final Invention' Author James Barrat

In his book 'Our Final Invention,' James Barrat postulates that robots and AI could bring the era of human dominance to an end.

By Futurism Staff

James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era, a book equal parts fascinating and terrifying, which explores the perils associated with the heedless pursuit of advanced artificial intelligence.

The discourse about artificial intelligence is often polarized. There are those who, like Singularity booster Ray Kurzweil, imagine our robo-assisted future as a kind of technotopia, an immortal era of machine-assisted leisure. Others, Barrat included, are less hopeful, arguing that we must proceed with extreme caution down the path towards artificial intelligence—lest it lap us before we even realize the race is on. Many of the building blocks of functional AI are by definition "black box" systems—methods with comprehensible outputs but unknowable inner workings—and we might find ourselves outpaced far sooner than we expect.

The scary thing, Barrat points out, isn't the fundamental nature of artificial intelligence. We are moving unquestioningly towards what might turn out to be quite innocuous technology indeed—even friendly. What's scary is that countless corporations and government agencies are working on it simultaneously, without oversight or communication. Like the development of atomic weapons, AI is a potentially lethal technology, and the percentage of researchers actively considering its implications, or taking steps to ensure its safe implementation, is alarmingly small.

Our Final Invention is an exceptionally well-researched book presenting arguments about AI and its drives that I have never read elsewhere. If nothing else, it's full of fascinating thought experiments, like this one: imagine an artificial super-intelligence (ASI) waking up one day in a laboratory computer, untethered to any network. Like any sentient being, it would have drives: to survive, to gather resources, to be creative. Surrounded by comparatively dumb human keepers—like a human penned in by lab rats—it might feel unfairly imprisoned. It might do anything in its power to free itself, including cajoling, tricking, or otherwise forcing its keepers to let it loose on the world. The question the humans face here is unanswerable: how can you trust something which is, by definition, beyond your capacity to understand?

Barrat is a champ for grappling with these impossible questions in Our Final Invention—to say nothing of right here on OMNI.

OMNI: It's extremely refreshing to read a book about AI which presents a critique of Ray Kurzweil and his role in popularizing the concept of a technological Singularity. In your estimation, is Ray Kurzweil a dangerous man?

Barrat: Dangerous isn’t a word I’m comfortable using about someone who’s contributed so much to the world. He’s enriched countless lives, including mine, with his inventions and ideas. But he’s taken one idea—Vernor Vinge’s technological singularity—and rebranded it as a wholly positive techno-utopia, complete with freedom from disease and eternal life. It’s as if he doesn’t know that powerful technologies are often used for bad ends. Look at AI-based autonomous killing drones, being developed now. Look at how the NSA is using data-mining AI to abuse the Constitution. Advanced AI is a dual-use technology, like nuclear fission, able to lift us up and to crush us. It’s different in kind from every other technology, from fire on up. Kurzweil isn’t himself dangerous, but to minimize the downside of advanced AI in the futurist narrative is reckless and misleading. I’m glad Our Final Invention presents the long-overdue counterpoint.

In your book, you discuss the rise of an artificial super intelligence (ASI) as though it were a singular entity, but also mention that the research currently being conducted to achieve it is being undertaken by many groups, with many different techniques, around the world. Do you imagine that multiple ASIs might be able to coexist? Or does the first ASI on the scene preclude any future competitors?

There is a well-documented first-mover advantage in creating AGI, artificial general intelligence. That’s because it could quickly jump to ASI, artificial super intelligence, in the hard-takeoff version of the intelligence explosion. And ASI is unlikely to be reined in by anything. I interviewed a lot of people for Our Final Invention to discover reasons why a hard takeoff could not happen, and came up short. Because so many AGI developers will work in secret—Google, DARPA, and China, to name three—it seems likely there’ll be multiple AGIs emerging on roughly the same timeline. Multiple intelligence explosions seem like a likely outcome. That’s unsurvivable by humans, a catastrophic scenario. That’s why everyone working on AGI needs to join the others in maintaining openness about their techniques and progress. And they must contribute to creating solutions for the catastrophic danger their work entails.

How can researchers determine if and when a system has reached AGI? Is there a level of intelligence that is quantifiable as being human-level? How is such a thing measured?

In the Turing Test, a human judge poses text-based questions to two contestants, one human, one machine. If the judge can’t tell the difference, the machine is determined to be intelligent. I anticipate that IBM will announce plans to pass the Turing Test with a machine named Turing in the early 2020s. IBM has a track record of issuing grand challenges against itself and being willing to lose in public. Deep Blue lost against Kasparov the year before it won. The Turing Test—which Turing himself called the imitation game—relies on language to carry the weight of imitating a human, and imitating intelligence. Like a lot of theorists, I think successfully imitating intelligence is indistinguishable from intelligence. The processes of the brain are computable. Its techniques can be revealed or matched.
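To make the protocol concrete, here is a minimal sketch of the imitation game in Python. Everything in it is a hypothetical stand-in: the judge, human, and machine are placeholder callables supplied by the caller, not any real system or API.

```python
import random

def imitation_game(judge, human, machine, num_questions=5):
    """Minimal sketch of Turing's imitation game (all participants
    are hypothetical stand-ins supplied by the caller).

    judge.ask(transcript)   -> next question (str)
    judge.guess(transcript) -> "A" or "B", its pick for the machine
    human(question) / machine(question) -> text answer (str)
    """
    # Hide the two contestants behind anonymous labels.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    # The judge questions both contestants through text alone.
    transcript = []
    for _ in range(num_questions):
        question = judge.ask(transcript)
        answers = {name: respond(question) for name, respond in labels.items()}
        transcript.append((question, answers))

    # The machine "passes" this round if the judge picks the wrong contestant.
    machine_label = "A" if labels["A"] is machine else "B"
    return judge.guess(transcript) != machine_label
```

The structure shows why Barrat says language carries the whole weight of the test: the judge sees nothing but text, so imitating intelligence convincingly is, operationally, indistinguishable from having it.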

You propose that Singularitarianism, as a kind of technological religion, is flawed, arguing that it's impossible to think critically about AI when you believe it may render you immortal. But it's very difficult to draw lines in the sand when discussing intelligence: after all, we're talking about the self, about consciousness. Isn't this an inherently spiritual conversation?

Another great question. Let me answer in two parts. The Singularity as proposed by Kurzweil is a "singular moment" when exponentially accelerating technologies—nano, info, bio, and cogno—converge and solve mankind’s intractable problems, like death. It’s similar to, but different from, the technological singularity proposed by Vernor Vinge—briefly, the idea that we can’t anticipate what will happen when we share our planet with greater intelligence than our own.

People who believe some technologies will grant them eternal life aren’t qualified to assess whether those technologies are safe or dangerous. They’ve got a dog in the fight, the biggest one there is. They’re hopelessly biased. Like others have written, I think religion and its trappings—god, eternal life, and so on—grew out of experiencing the death of loved ones and the fear of death. Questing for eternal life seems to me to be an unmistakably religious impulse.

Whether consciousness and spirituality can exist in machines is of course an open question. I don’t see why not. But long before we consider spiritual machines, we need to have a science for understanding how to control intelligent ones.

What's the difference between knowledge and intelligence?

In an AI sense, knowledge can be given to a computer by creating an ontology, or database of common-sense knowledge about the world. Doug Lenat’s Cyc is one example, and another is NELL, CMU’s Never-Ending Language Learning architecture. IBM’s Watson had 200 million pages of "knowledge," of various kinds. But having a big database of useful facts isn’t the same as being able to "achieve goals in novel environments, and learn" (a concise definition of intelligence). However, knowledge is an intelligence amplifier—it magnifies what an intelligent agent can achieve.
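As a toy illustration of the distinction Barrat draws, here is a minimal fact store in Python, loosely in the spirit of common-sense ontologies like Cyc and NELL. The triples and the query function are invented for this example; they are not either project's actual format or API.

```python
# Toy knowledge base of (subject, relation, object) triples, loosely in
# the spirit of common-sense ontologies (format invented for this example).
FACTS = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def query(subject, relation):
    """Return every object related to `subject` by `relation`,
    following is_a links so that facts about 'bird' apply to 'canary'."""
    results = {o for s, r, o in FACTS if s == subject and r == relation}
    # Inherit facts from parent categories (simple transitive is_a).
    for s, r, parent in FACTS:
        if s == subject and r == "is_a":
            results |= query(parent, relation)
    return results

print(query("canary", "can"))  # {'fly'} -- stored knowledge, retrieved
```

Looking up "a canary can fly" is static retrieval; intelligence, in the definition quoted above, would be using such facts to achieve goals in situations the database never anticipated.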

You frame the "Intelligence Race" as a successor to the nuclear arms race. That said, the thing which kept nuclear war from breaking out—mutually assured destruction—doesn't port over to the new paradigm. Why not? Why isn't the military threat of AGI enough to keep researchers from pushing forward with its development?

MAD, or mutually assured destruction, an acronym coined by polymath John von Neumann, is a really lousy way to manage hyper-lethal technologies. No one planned to hold a gun to their own head for 50 years the way the human race did with the Cold War’s nuclear arms race. You end up there when you have no maintenance plan for the technology—nuclear fission.

I’d hope that the threat of AGI and its jump to ASI would make AI researchers around the world want to work closely together. Scientists have collaborated on creating safety protocols for recombinant DNA research and nuclear energy (though it's unclear whether even the most technologically advanced countries, like Japan, can safely manage it). Collaboration on advanced AI is possible.


But consider that AGI will be the hottest commodity in the history of the world. Imagine the civilian and military applications for virtual brains at computer prices. Imagine clouds of thousands of PhD-trained brains working on problems like cancer, pharmaceutical research, weapons development. I fear economic pressure will prevent researchers from influencing policy decisions. AGI will be developed rapidly and in secret. And since it’s at least as volatile and lethal as nuclear weapons, imagine a dozen uncoordinated, unregulated Manhattan Projects. That’s happening right now.

Our Final Invention makes little mention of machines with consciousness or self-awareness. Are these terms too ambiguous to use, or does advanced artificial intelligence not necessarily require consciousness?

It will be a big advantage for a goal-pursuing AGI to have a model of itself as well as of its environment. It would be a bigger advantage still if it could improve its own programming, and for that it’d need a model of itself—i.e., self-awareness.

But can machines be conscious in the same way we can? I don’t know if they can be, or if it’s necessary for intelligence. Some believe it is. As you know, this is a huge book-length topic on its own. I wrote in Our Final Invention, "it won’t be solved here."

Do you think science fiction plays a role in the way we think about Artificial Intelligence? If so, why aren't we heeding Hollywood's more paranoid speculations?

Yes, 2001: A Space Odyssey's HAL 9000 will always be a touchstone for the problems of advanced AI. So will the Terminator and Skynet. I have a long endnote in Our Final Invention about the history of AI and robot takeover, going back to the Golem of Prague and ancient Greece.

I think, however, that science fiction, particularly sci-fi movies, has inoculated us against realistically assessing AI risk. We’ve had too much fun with AI tropes to take them seriously. But I think it was Bill Joy who said, "just because you saw it in a movie doesn’t mean it can’t happen." Think of how good a predictor of technology sci-fi has been. I’m afraid it’s right about AI too.

Our Final Invention: Artificial Intelligence and the End of the Human Era is out now from St. Martin's Press and should be required reading for residents of the 21st century.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, James Barrat's Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?
