Douglas Hofstadter


When Norman Mailer was assigned to write about the Apollo 11 mission, he covered it as a prizefight that had been decided in the staredown, knowing that regardless of the outcome of this particular flight, the Eagle had already landed, technology had eclipsed humans in many ways, and our place in the world was to be recalibrated. Since then, we’ve constantly redefined why we exist.

In the 2003 essay “Just Who Will Be We, in 2493?” Douglas Hofstadter argued we needn’t necessarily fear silicon-based creatures crowding us at the top. Perhaps, he thought, we should be welcoming, like natives embracing immigrants. An excerpt:

If we shift our attention from the flashy but inflexible kinds of game-playing programs like Deep Blue to the less glamorous but more humanlike programs that model analogy-making, learning, memory, and so forth, being developed by cognitive scientists around the world, we might ask, “Will this kind of program ever approach a human level of intelligence?” I frankly do not know. Certainly it’s not just around the corner. But I see no reason why, in principle, humanlike thought, consciousness, and emotionality could not be the outcome of very complex processes taking place in a chemical substrate different from the one that happens, for historical reasons, to underlie our species.

The question then arises – a very hypothetical one, to be sure, but an interesting one to ponder: When these “creatures” (why not use that term?) come into existence, will they be threats to our own species? My answer is, it all depends. What it depends on, for me, comes down to one word: benevolence. If robot/computers someday roam along with us across the surface of our planet, and if they compose music and write poetry and come up with droll jokes – and if they leave us pretty much alone, or even help us achieve our goals – then why should we feel threatened? Obviously, if they start trying to push us out of our houses or to enslave us, that’s another matter and we should feel threatened and should fight back.

But just suppose that we somehow managed to produce a friendly breed of silicon-based robots that shared much of our language and culture, although with differences, of course. There would naturally be a kind of rivalry between our different types, perhaps like that between different nations or races or sexes. But when the chips were down, when push came to shove, with whom would we feel allegiance? What, indeed, would the word “we” actually mean?

There is an old joke about the Lone Ranger and his sidekick Tonto one day finding themselves surrounded by a shrieking and whooping band of Indians circling in on them with tomahawks held high. The Lone Ranger turns to his faithful pal and says, “Looks like we’re done for, Tonto … ” To which Tonto replies, “What do you mean, we, white man?”

Let me suggest a curious scenario. Suppose we and our artificial progeny had coexisted for a while on our common globe, when one day some weird strain of microbes arose out of the blue, attacking carbon-based life with a viciousness that made today’s Ebola virus and the old days’ Black Plague seem like long-lost friends. After but a few months, the entire human race is utterly wiped out, yet our silicon cousins are untouched. After shedding metaphorical tears over our disappearance, they then go on doing their thing – composing haunting songs (influenced by Mozart, the Beatles, and Rachmaninoff), writing searching novels (in English and other human languages), making hilarious jokes (maybe even ethnic and sexual ones), and so on. If we today could look into some crystal ball and see that bizarre future, would we not thank our lucky stars that we had somehow managed, by hook or by crook, to propagate ourselves into the indefinite future by means of a switchover in chemical substrate? Would we not feel, looking into that crystal ball, that “we” were still somehow alive, still somehow there? Or – would those silicon-chip creatures bred of our own fancy still be unworthy of being labeled “we” by us?•


Tasks that both humans and automated machines can do will be left completely to the latter soon enough. We just can’t compete. But I don’t think that means these robots are truly Artificial Intelligence. I agree with Douglas Hofstadter about that. Scientists Miles Brundage and Joanna Bryson completely disagree with this line of thinking, arguing that IBM’s Watson, the second most interesting Jeopardy! champ, is indeed true AI. The opening of their article at Slate on the topic:

“Artificial intelligence is here now. This doesn’t mean that Cylons disguised as humans have infiltrated our societies, or that the processors behind one of the search engines have become sentient and are now making their own plans for world domination. But denying the presence of AI in our society not only takes away from the achievements of science and commerce, but also runs the risk of complacency in a world where more and more of our actions and intentions are being analyzed and influenced by intelligent machines. Not everyone agrees with this way of looking at the issue, though.

Douglas Hofstadter, cognitive scientist and Pulitzer Prize-winning author of Gödel, Escher, Bach, recently claimed that IBM’s Jeopardy! champion AI system Watson is not real artificial intelligence. Watson, he says, is ‘just a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading.’ This is wrong in at least two ways fundamental to what it means to be intelligent. First, although Watson includes many forms of text search, it is first and foremost a system capable of responding appropriately in real-time to new inputs. It competed against humans to ring the buzzer first, and Watson couldn’t ring the buzzer until it was confident it had constructed the right sentence. And, in fact, the humans quite often beat Watson to the buzzer even when Watson was on the right track. Watson works by choosing candidate responses, then devoting its processors to several of them at the same time, exploring archived material for further evidence of the quality of the answer. Candidates can be discarded and new ones selected. IBM is currently applying this general question-answering approach to real-world domains like health care and retail.

This is very much how primate brains (like ours) work.”
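The candidate-and-evidence loop Brundage and Bryson describe – propose candidate answers, gather supporting evidence for several at once, discard weak ones, and only “buzz in” when confident – can be sketched in miniature. This is a purely illustrative toy, not IBM’s DeepQA implementation; the scoring rule, the tiny archive, and the confidence threshold are all invented for the example.

```python
# Toy sketch of the candidate-and-evidence approach described above.
# All names, data, and scoring rules are hypothetical illustrations.

def generate_candidates(clue, archive):
    """Propose candidates: any archived title whose entry shares a word with the clue."""
    clue_words = set(clue.lower().split())
    return [title for title, text in archive.items()
            if clue_words & set(text.lower().split())]

def evidence_score(candidate, clue, archive):
    """Score a candidate by the fraction of clue words its archived entry covers."""
    clue_words = set(clue.lower().split())
    entry_words = set(archive[candidate].lower().split())
    return len(clue_words & entry_words) / len(clue_words)

def answer(clue, archive, confidence_threshold=0.3):
    """'Buzz in' with the best-supported candidate only if confident enough;
    otherwise discard everything and stay silent."""
    candidates = generate_candidates(clue, archive)
    scored = sorted(((evidence_score(c, clue, archive), c) for c in candidates),
                    reverse=True)
    if scored and scored[0][0] >= confidence_threshold:
        return scored[0][1]
    return None  # not confident enough to ring the buzzer

archive = {
    "Mozart": "austrian composer of operas and symphonies",
    "Rachmaninoff": "russian composer and pianist",
}
print(answer("this russian composer was also a pianist", archive))
```

The point of the sketch is structural: nothing here “understands” the clue, yet ranking candidates by accumulated evidence still produces appropriate responses – which is exactly the gap between the two sides of the debate.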


I agree with Douglas Hofstadter that today’s AI isn’t true AI because it can’t really think, but the machines we have (and are soon to have) possess an amazing utility. Erik Brynjolfsson and Andrew McAfee, authors of The Second Machine Age, believe, as most do, that the near-term Computer Age will be rocky, but they’re more sanguine about long-term prospects. They see the Google Glass as half full. An excerpt from their new Atlantic piece:

“Today, people with connected smartphones or tablets anywhere in the world have access to many (if not most) of the same communication resources and information that we do while sitting in our offices at MIT. They can search the Web and browse Wikipedia. They can follow online courses, some of them taught by the best in the academic world. They can share their insights on blogs, Facebook, Twitter, and many other services, most of which are free. They can even conduct sophisticated data analyses using cloud resources such as Amazon Web Services and R, an open source application for statistics. In short, they can be full contributors in the work of innovation and knowledge creation, taking advantage of what Autodesk CEO Carl Bass calls ‘infinite computing.’

Until quite recently rapid communication, information acquisition, and knowledge sharing, especially over long distances, were essentially limited to the planet’s elite. Now they’re much more democratic and egalitarian, and getting more so all the time. The journalist A. J. Liebling famously remarked that, ‘Freedom of the press is limited to those who own one.’ It is no exaggeration to say that billions of people will soon have a printing press, reference library, school, and computer all at their fingertips.

We believe that this development will boost human progress. We can’t predict exactly what new insights, products, and solutions will arrive in the coming years, but we are fully confident that they’ll be impressive. The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world. It will make a mockery of all that came before.”


Douglas Hofstadter, cognitive scientist and author of Gödel, Escher, Bach, explains why Watson and Siri aren’t true AI and why the field lost its way decades ago, in a Q&A conducted by William Herkewitz at Popular Mechanics, which has a terribly designed website. The opening:


“You’ve said in the past that IBM’s Jeopardy!-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Douglas Hofstadter:

Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level—things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking—that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.


Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

Douglas Hofstadter:

I can’t really predict that. But what I can say is that I’ve monitored Google Translate—which uses a similar approach—for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.
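Hofstadter’s complaint that statistical translation is “about words triggering other words” can be made concrete with a deliberately naive sketch: substitute each word independently, with no model of what the sentence means. The tiny lexicon and the example sentence are invented for illustration; real systems like Google Translate are far more sophisticated, but the idiom failure mode is the same in kind.

```python
# Toy word-by-word "translation" in the sense criticized above: each word
# triggers a replacement, with no representation of the underlying idea.
# The lexicon is a hypothetical fragment invented for this example.

french_to_english = {
    "il": "it",
    "pleut": "rains",
    "des": "some",
    "cordes": "ropes",
}

def word_by_word(sentence):
    """Replace each word independently; unknown words pass through unchanged."""
    return " ".join(french_to_english.get(w, w) for w in sentence.lower().split())

# "Il pleut des cordes" idiomatically means roughly "It's raining cats and
# dogs," but word-level substitution yields nonsense:
print(word_by_word("Il pleut des cordes"))  # it rains some ropes
```

Translating the idiom correctly requires recovering the idea (heavy rain) and re-expressing it, which is precisely the step Hofstadter argues word-triggering systems skip.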


So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

Douglas Hofstadter:

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically.”


From “The Man Who Would Teach Machines to Think,” James Somers’ new Atlantic article about Douglas Hofstadter’s ongoing work in the field of AI, which is meant to go many meters past Siri or Watson:

“For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think.

Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.”
