Clive Thompson


Watson has a way with words and Siri sounds sexy, but Cyc is almost silent. Why so silent, Cyc?

Cycorp’s ambitious project to create the first true AI has been ongoing for 31 years, much of the time in seclusion. A 2014 Business Insider piece by Dylan Love marked the three-decade anniversary of the odd endeavor, summing up the not-so-modest goal this way: to “codify general human knowledge and common sense.” You know, that thing. Every robot and computer could then be fed the system to gain human-level understanding.

The path the company and its CEO Doug Lenat have chosen in pursuit of this goal is to painstakingly teach Cyc every grain of knowledge until the Sahara has been formed. Perhaps, however, it’s all a mirage. Because the work has been conducted largely in quarantine, there’s been little outside review of the “patient.” But even if this artificial-brain operation is a zero rather than a HAL 9000, a dream unfulfilled, it still says something fascinating about human beings.

An excerpt from “The Know-It-All Machine,” Clive Thompson’s really fun 2001 Lingua Franca cover story on the subject: 

SINCE THIS is 2001, [Doug] Lenat has spent the year fielding jokes about HAL 9000, the fiendishly intelligent computer in Arthur C. Clarke’s 2001: A Space Odyssey. On one occasion, when television reporters came to film Cyc, they expected to see a tall, looming structure. But because Cyc doesn’t look like much—it’s just a database of facts and a collection of supporting software that can fit on a laptop—they were more interested in the company’s air conditioner. “It’s big and has all these blinking lights,” Lenat says with a laugh. “Afterwards, we even put a sign on it saying, CYC 2001, BETTER THAN HAL 9000.”

But for all Lenat’s joking, HAL is essentially his starting point for describing the challenges facing the creation of commonsense AI. He points to the moment in the film 2001 when HAL is turned on—and its first statement is “Good morning, Dr. Chandra, this is HAL. I’m ready for my first lesson.”

The problem, Lenat explains, is that for a computer to formulate sentences, it can’t be starting to learn. It needs to already possess a huge corpus of basic, everyday knowledge. It needs to know what a morning is; that a morning might be good or bad; that doctors are typically greeted by title and surname; even that we greet anyone at all. “There is just tons of implied knowledge in those two sentences,” he says.

This is the obstacle to knowledge acquisition: Intelligence isn’t just about how well you can reason; it’s also related to what you already know. In fact, the two are interdependent. “The more you know, the more and faster you can learn,” Lenat argued in his 1989 book, Building Large Knowledge-Based Systems, a sort of midterm report on Cyc. Yet the dismal inverse is also true: “If you don’t know very much to begin with, then you can’t learn much right away, and what you do learn you probably won’t learn quickly.”
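Lenat's interdependence claim can be sketched with a toy forward-chaining knowledge base. The rules and facts below are invented for illustration (Cyc's actual representation, CycL, is far richer): each fact already known lets more rules fire, so a larger starting corpus cascades into more derived knowledge, while a sparse one stalls almost immediately.

```python
# Toy forward-chaining demo: the more seed facts you start with,
# the more new facts inference can derive. All rules are invented
# here to echo the "Good morning, Dr. Chandra" example.

def infer(facts, rules):
    """Repeatedly apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"is_morning"}, "people_may_greet"),
    ({"people_may_greet", "addressee_is_doctor"}, "use_title_and_surname"),
    ({"use_title_and_surname"}, "say_good_morning_doctor"),
]

# A sparse knowledge base derives almost nothing...
sparse = infer({"is_morning"}, rules)
# ...while a slightly richer one cascades to the full greeting.
rich = infer({"is_morning", "addressee_is_doctor"}, rules)

print(len(sparse))  # 2
print(len(rich))    # 5
```

The design point matches Lenat's dismal inverse: the sparse base cannot benefit from most of the rules it has, no matter how many passes the inference loop makes.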

This fundamental constraint has been one of the most frustrating hindrances in the history of AI. In the 1950s and 1960s, AI experts doing work on neural networks hoped to build self-organizing programs that would start almost from scratch and eventually grow to learn generalized knowledge. But by the 1970s, most researchers had concluded that learning was a hopelessly difficult problem, and were beginning to give up on the dream of a truly human, HAL-like program. “A lot of people got very discouraged,” admits John McCarthy, a pioneer in early AI. “Many of them just gave up.”

Undeterred, Lenat spent eight years of Ph.D. work—and his first few years as a professor at Stanford in the late 1970s and early 1980s—trying to craft programs that would autonomously “discover” new mathematical concepts, among other things. Meanwhile, most of his colleagues turned their attention to creating limited, task-specific systems that were programmed to “know” everything that was relevant to, say, monitoring and regulating elevator movement. But even the best of these expert systems are prone to what AI theorists call “brittleness”—they fail if they encounter unexpected information. In one famous example, an expert system for handling car loans issued a loan to an eighteen-year-old who claimed that he’d had twenty years of job experience. The software hadn’t been specifically programmed to check for this type of discrepancy and didn’t have the common sense to notice it on its own. “People kept banging their heads against this same brick wall of not having this common sense,” Lenat says.
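The brittleness Lenat describes is easy to see in miniature. In this hypothetical sketch (the rules and thresholds are invented, not taken from any real loan system), every rule the system was explicitly given passes, so the contradictory application sails through; nothing encodes the common-sense fact that job experience cannot exceed a person's working lifetime.

```python
# Hypothetical sketch of a brittle expert system's loan screening.
# Rules and thresholds are invented for illustration only.

def approve_loan(applicant):
    """Apply only the rules the system was explicitly given."""
    rules = [
        lambda a: a["age"] >= 18,            # must be an adult
        lambda a: a["income"] > 20000,       # minimum income
        lambda a: a["years_employed"] >= 2,  # stable job history
    ]
    return all(rule(applicant) for rule in rules)

# The famous discrepancy: an eighteen-year-old claiming
# twenty years of job experience.
applicant = {"age": 18, "income": 45000, "years_employed": 20}

# Every explicit rule passes, so the loan is approved.
print(approve_loan(applicant))  # True

# The commonsense check the system lacked (working_age invented):
def plausible(a, working_age=16):
    return a["years_employed"] <= a["age"] - working_age

print(plausible(applicant))  # False
```

The failure mode is exactly "unexpected information": the system behaves correctly on every input its authors anticipated and has no way to notice an input that contradicts itself.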

By 1983, however, Lenat had become convinced that commonsense AI was possible—but only if someone were willing to bite the bullet and codify all common knowledge by brute force: sitting down and writing it out, fact by fact by fact. After conferring with MIT’s AI maven Marvin Minsky and Apple Computer’s high-tech thinker Alan Kay, Lenat estimated the project would take tens of millions of dollars and twenty years to complete.

“All my life, basically,” he admits. He’d be middle-aged by the time he could even figure out if he was going to fail. He estimated he had only between a 10 and 20 percent chance of success. “It was just barely doable,” he says.

But that slim chance was enough to capture the imagination of Admiral Bobby Inman, a former director of the National Security Agency and head of the Microelectronics and Computer Technology Corporation (MCC), an early high-tech consortium. (Inman became a national figure in 1994 when he withdrew as Bill Clinton’s appointee for secretary of defense, alleging a media conspiracy against him.) Inman invited Lenat to work at MCC and develop commonsense AI for the private sector. For Lenat, who had just divorced and whose tenure decision at Stanford had been postponed for a year, the offer was very appealing. He moved immediately to MCC in Austin, Texas, and Cyc was born.•


A couple of things I learned from “How to Tell When a Robot Has Written You a Letter,” a Medium piece by the reliably excellent Clive Thompson: 1) There are still companies that employ people to handwrite letters for you, and 2) subtle differences make it possible to detect whether a missive was written by a machine rather than a human (though I must admit I would still be fooled even after reading Thompson’s post). An excerpt:

“So now robots are trying to write like us. But they’re not perfect yet! It turns out there are some intriguing quirks of human psychology and letter-formation that the machines can’t yet mimic. Learn those tricks, and you can spot the robots.

I first heard of these human-machine handwriting differences in a conversation last week with Brian Curliss and Daniel Jurek, the cofounders of the startup Maillift. If you need to send out 200 personalized letters to sales leads but haven’t got the time to handwrite them yourself — or if your handwriting is, like mine, grotesque — then Maillift will generate them for you, using teams of genuinely carbon-based people. (What sort of person enjoys handwriting letters for others? ‘Teachers,’ Curliss replies. Apparently teachers have spectacular handwriting, take enormous pride in the craft, and want to make some extra coin in their evenings and weekends.)

Curliss and Jurek also own a handwriting robot, so they’ve studied thousands of human-written letters and compared them to ones produced by machines. They’ve identified three crucial distinctions.”


Clive Thompson of Wired is one of those blessed journalists who’s as much a joy to read for his lucid prose as for his good ideas. In a new piece, he interviews multifaceted Canadian academic Vaclav Smil, a prolific author and a favorite of Bill Gates. An excerpt about manufacturing in America, which has been outsourced to a great degree in recent decades and in the next few will be increasingly lost to automation:

Clive Thompson:

Let’s talk about manufacturing. You say a country that stops doing mass manufacturing falls apart. Why?

Vaclav Smil:

In every society, manufacturing builds the lower middle class. If you give up manufacturing, you end up with haves and have-nots and you get social polarization. The whole lower middle class sinks.

Clive Thompson:

You also say that manufacturing is crucial to innovation.

Vaclav Smil:

Most innovation is not done by research institutes and national laboratories. It comes from manufacturing—from companies that want to extend their product reach, improve their costs, increase their returns. What’s very important is in-house research. Innovation usually arises from somebody taking a product already in production and making it better: better glass, better aluminum, a better chip. Innovation always starts with a product.

Look at LCD screens. Most of the advances are coming from big industrial conglomerates in Korea like Samsung or LG. The only good thing in the US is Gorilla Glass, because it’s Corning, and Corning spends $700 million a year on research.

Clive Thompson:

American companies do still innovate, though. They just outsource the manufacturing. What’s wrong with that?

Vaclav Smil:

Look at the crown jewel of Boeing now, the 787 Dreamliner. The plane had so many problems—it was like three years late. And why? Because large parts of it were subcontracted around the world. The 787 is not a plane made in the USA; it’s a plane assembled in the USA. They subcontracted composite materials to Italians and batteries to the Japanese, and the batteries started to burn in-flight. The quality control is not there.


There are fewer postcards and handwritten notes today, but I don’t think anyone would dispute that more people in the world are writing more in the Internet Age than at any moment in history. What we’re writing is largely bullshit, sure, but not all of it is. It’s really the full flowering of democracy, like it or not. From Walter Isaacson’s New York Times review of Clive Thompson’s glass-half-full tech book, Smarter Than You Think:

“Thompson also celebrates the fact that digital tools and networks are allowing us to share ideas with others as never before. It’s easy (and not altogether incorrect) to denigrate much of the blathering that occurs each day in blogs and tweets. But that misses a more significant phenomenon: the type of people who 50 years ago were likely to be sitting immobile in front of television sets all evening are now expressing their ideas, tailoring them for public consumption and getting feedback. This change is a cause for derision among intellectual sophisticates partly because they (we) have not noticed what a social transformation it represents. ‘Before the Internet came along, most people rarely wrote anything at all for pleasure or intellectual satisfaction after graduating from high school or college,’ Thompson notes. ‘This is something that’s particularly hard to grasp for professionals whose jobs require incessant writing, like academics, journalists, lawyers or marketers. For them, the act of writing and hashing out your ideas seems commonplace. But until the late 1990s, this simply wasn’t true of the average nonliterary person.'”


Erika Anderson of Guernica interviewed Clive Thompson about his theory that early arcade games featured a type of information sharing that’s being used to greater good in our more interconnected world. The opening:

Guernica:

How would you describe the evolution of video games?

Clive Thompson:

When games started out, they were very, very simple affairs, and that was partly just technical—you couldn’t do very much. They had like 4K of memory. And so the games started off really not needing instructions at all. The first Pong game had one instruction. It was, ‘Avoid missing ball for high score.’ So it was literally just that: don’t fail to hit the ball. I remember when I read it, it was actually a confusing construction: avoid missing ball for high score. It’s weirdly phrased, as if it were being translated from Swedish or something, you know? But they didn’t know what they were doing.

But what started happening very early on was that if you were in the arcades as I was—I’m 44 in October, so I was right at that age when these games were coming out—the games were really quite hard in a way, and because they were taking a quarter from you, their goal was to have you stop playing quickly because they need more money. They ramped up in difficulty very quickly, like the next wave is harder, and the third wave is unbelievably harder. And so you had to learn how to play them by trial and error with yourself but you only had so much money. And so what you started doing was you started observing other people and you started talking to all the other people. What you saw when you went to a game was one person playing and a semi-circle of people around them and they were all talking about what was going on, to try to figure out how to play the game. And they would learn all sorts of interesting strategy.

••••••••••

In 1972, Rod Serling teaches Steve Allen how to play the home version of Pong (forward to the 15:40 mark):


In a new article in Wired, Clive Thompson interviews Bill Buxton, a principal researcher at Microsoft, about the “long nose” theory, which holds that innovations that seemingly come out of nowhere are actually incubated for a long time. At the piece’s conclusion, Thompson predicts which technology is ready to dominate in the next decade. An excerpt:

“Using a ‘long nose’ analysis, I have a prediction of my own. I bet electric vehicles are going to become huge—specifically, electric bicycles. Battery technology has been improving for decades, and the planet is urbanizing rapidly. The nose is already poking out: Electric bikes are incredibly popular in China and becoming common in the US among takeout/delivery people, who haul them inside their shops each night to plug them in. (Pennies per charge, and no complicated rewiring of the grid necessary.) I predict a design firm will introduce the iPhone of electric bikes and whoa: It’ll seem revolutionary!”

••••••••••

Prodeco Technologies introduces the next generation of electric bikes:
