I’m not worried about conscious, superintelligent machines doing away with humans anytime soon. As far as I can see into the future, I’m more concerned about the economic and ethical ramifications of Weak AI and the proliferation of automation. That will be enough of a challenge. If there is to be a people-killing “plague,” it will likely come from environmental devastation of our own making. That’s the “machine” we’ve unloosed.

On the topic of the Singularity, the excellent Edge.org asked a raft of thinkers in various disciplines to ponder this question: “What do you think about machines that think?” Excerpts follow from responses by philosopher Daniel C. Dennett, journalist William Poundstone and founding Wired editor Kevin Kelly.

___________________________

From Dennett:

The Singularity—an Urban Legend?

The Singularity—the fateful moment when AI surpasses its creators in intelligence and takes over the world—is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility (“Well, in principle I guess it’s possible!”) coupled with a deliciously shudder-inducing punch line (“We’d be ruled by robots!”). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts—Elon Musk, Stephen Hawking, and David Chalmers, among others—and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn’t it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?

I think, on the contrary, that these alarm calls distract us from a more pressing problem, an impending disaster that won’t need any help from Moore’s Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can’t think, prematurely putting civilization on auto-pilot. The process is insidious because each step of it makes good local sense, is an offer you can’t refuse. You’d be a fool today to do large arithmetical calculations with pencil and paper when a hand calculator is much faster and almost perfectly reliable (don’t forget about round-off error), and why memorize train timetables when they are instantly available on your smart phone? Leave the map-reading and navigation to your GPS system; it isn’t conscious; it can’t think in any meaningful sense, but it’s much better than you are at keeping track of where you are and where you want to go.•

___________________________

From Poundstone:

Can Submarines Swim?

My favorite Edsger Dijkstra aphorism is this one: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Yet we keep playing the imitation game: asking how closely machine intelligence can duplicate our own intelligence, as if that is the real point. Of course, once you imagine machines with human-like feelings and free will, it’s possible to conceive of misbehaving machine intelligence—the AI as Frankenstein idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I have concluded it’s not.

Here’s the case for overblown. Machine intelligence can go in so many directions. It is a failure of imagination to focus on human-like directions. Most of the early futurist conceptions of machine intelligence were wildly off base because computers have been most successful at doing what humans can’t do well. Machines are incredibly good at sorting lists. Maybe that sounds boring, but think of how much efficient sorting has changed the world.

In answer to some of the questions brought up here, it is far from clear that there will ever be a practical reason for future machines to have emotions and inner dialog; to pass for human under extended interrogation; to desire, and be able to make use of, legal and civil rights. They’re machines, and they can be anything we design them to be.

But that’s the point. Some people will want anthropomorphic machine intelligence.•

___________________________

From Kelly:

Call Them Artificial Aliens

The most important thing about making machines that can think is that they will think different.

Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose” because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.

The kind of thinking done by the emerging AIs in 2014 is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don’t do it in a human-like fashion. Facebook has the ability to ramp up an AI that can start with a photo of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this ability very un-human. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.

In a pervasively connected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences, and entirely new ways of thinking. We don’t know what the full taxonomy of intelligence is right now.•
