Joseph Weizenbaum


In a NYRB blog post, James Gleick tries to identify the invaders among us, the social bots that cajole and troll on Twitter. Who are they? Just as importantly: Who are we if we’re not quite sure if we’re communicating with something not human, or if we know we are and yet still choose to interact? An excerpt:

A well-traveled branch of futuristic fiction explores worlds in which artificial creatures—the robots—live among us, sometimes even indistinguishable from us. This has been going on for almost a century now. Stepford wives. Androids dreaming of electric sheep. (Next, Ex Machina?)

Well, here they come. It’s understood now that, besides what we call the “real world,” we inhabit a variety of virtual worlds. Take Twitter. Or the Twitterverse. Twittersphere. You may think it’s a stretch to call this a “world,” but in many ways it has become a toy universe, populated by millions, most of whom resemble humans and may even, in their day jobs, be humans. But increasing numbers of Twitterers don’t even pretend to be human. Or worse, do pretend, when they are actually bots. “Bot” is of course short for robot. And bots are very, very tiny, skeletal, incapable robots—usually little more than a few crude lines of computer code. The scary thing is how easily we can be fooled.

Because the Twitterverse is made of text, rather than rocks and trees and bones and blood, it’s suddenly quite easy to make bots. Now there are millions, by Twitter’s own estimates—most of them short-lived and invisible nuisances. …

Most of these bots have all the complexity of a wind-up toy. Yet they have the potential to influence the stock market and distort political discourse. The surprising thing—disturbing, if your human ego is easily bruised—is how few bells and gears have to be added to make a chatbot sound convincing. How much computational complexity is powering our own chattering mouths? The grandmother of all chatbots is the famous Eliza, described by Joseph Weizenbaum at MIT in a 1966 paper (yes, children, Eliza is fifty years old). His clever stroke was to give his program the conversational method of a psychotherapist: passive, listening, feeding back key words and phrases, egging on her poor subjects. “Tell me more.” “Why do you feel that way?” “What makes you think [X]?” “I am sorry to hear you are depressed.” Oddly, Weizenbaum was a skeptic about “artificial intelligence,” trying to push back against more optimistic colleagues. His point was that Eliza knew nothing, understood nothing. Still, the conversations could run on at impressive length. Eliza’s interlocutors felt her empathy radiating forth. It makes you wonder how often real shrinks get away with leaving their brains on autopilot.

Today Eliza has many progeny on Twitter, working away in several languages.


The real shift in our time isn’t only that we’ve stopped worrying about surveillance, exhibitionism and a lack of privacy, but that we’ve embraced these things–demanded them, even. There must have been something lacking in our lives, something gone unfulfilled. But is this intimacy with technology and the sense of connection and friendship and relationship that attends it–often merely a likeness of love–an evolutionary correction or merely a desperate swipe in the wrong direction?

The opening of Brian Christian’s New Yorker piece about Spike Jonze’s Her, a film about love in the time of simulacra, in which a near-future man is wowed by a “woman” who seems to him like more than just another pretty interface:

“In 1966, Joseph Weizenbaum, a professor of computer science at M.I.T., wrote a computer program called Eliza, which was designed to engage in casual conversation with anybody who sat down to type with it. Eliza worked by latching on to keywords in the user’s dialogue and then, in a kind of automated Mad Libs, slotted them into open-ended responses, in the manner of a so-called non-directive therapist. (Weizenbaum wrote that Eliza’s script, which he called Doctor, was a parody of the method of the psychologist Carl Rogers.) ‘I’m depressed,’ a user might type. ‘I’m sorry to hear you are depressed,’ Eliza would respond.

Eliza was a milestone in computer understanding of natural language. Yet Weizenbaum was more concerned with how users seemed to form an emotional relationship with the program, which consisted of nothing more than a few hundred lines of code. ‘I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it,’ he wrote. ‘Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.’ He continued, ‘What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.’

The idea that people might be unable to distinguish a conversation with a person from a conversation with a machine is rooted in the earliest days of artificial-intelligence research.”
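The keyword-and-template method both excerpts describe—latch on to a keyword, reflect the user's words back inside a canned response—can be sketched in a few lines. This is an illustrative toy in the spirit of Weizenbaum's DOCTOR script, not his original rules; the patterns and replies here are invented for the example.

```python
import re

# Illustrative keyword -> response-template rules (not Weizenbaum's
# original script). "{0}" is filled with the text captured after the
# keyword, as in the "automated Mad Libs" Christian describes.
RULES = [
    (re.compile(r"\bi'?m (depressed|sad|unhappy)\b", re.I),
     "I am sorry to hear you are {0}."),
    (re.compile(r"\bi (?:think|feel) (.+)", re.I),
     "Why do you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I),
     "Is that the real reason?"),
]
DEFAULT = "Tell me more."  # the non-directive fallback

def reflect(text):
    # Swap first- and second-person words so echoed fragments read naturally.
    swaps = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}
    return " ".join(swaps.get(w.lower(), w) for w in text.split())

def respond(line):
    # Try each rule in order; on a match, slot the reflected capture
    # into the template. Otherwise, egg the subject on.
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            groups = [reflect(g.rstrip(".!?")) for g in m.groups()]
            return template.format(*groups)
    return DEFAULT
```

Typing “I'm depressed” yields “I am sorry to hear you are depressed.”—the exact exchange quoted above—while anything unmatched falls through to “Tell me more.” The program stores nothing and understands nothing; the entire trick is pattern matching plus pronoun swapping.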
