Steven Levy

Penny Lane, or some place like it, used to be in our ears and in our eyes. Not so much in the twenty-first century. Now your head is supposed to be inside your phone, while sensors, cameras and computers aim to unobtrusively extract information from you.

These robots do not resemble us at all, so there’s no uncanny valley—you’re not meant to detect any dips. As cars become driverless and the Internet of Things proliferates, there will be no opting out, no covering up. As Leonard Cohen groaned in 1992, just three years after Tim Berners-Lee unwittingly gifted us with a Trojan Horse, which we gleefully wheeled inside the gates: “There’ll be the breaking of the ancient western code / Your private life will suddenly explode.” 

Three excerpts follow.


The opening of the Economist article “What Machines Can Tell From Your Face”:

The human face is a remarkable piece of work. The astonishing variety of facial features helps people recognise each other and is crucial to the formation of complex societies. So is the face’s ability to send emotional signals, whether through an involuntary blush or the artifice of a false smile. People spend much of their waking lives, in the office and the courtroom as well as the bar and the bedroom, reading faces, for signs of attraction, hostility, trust and deceit. They also spend plenty of time trying to dissimulate.

Technology is rapidly catching up with the human ability to read faces. In America facial recognition is used by churches to track worshippers’ attendance; in Britain, by retailers to spot past shoplifters. This year Welsh police used it to arrest a suspect outside a football game. In China it verifies the identities of ride-hailing drivers, permits tourists to enter attractions and lets people pay for things with a smile. Apple’s new iPhone is expected to use it to unlock the homescreen (see article).

Set against human skills, such applications might seem incremental. Some breakthroughs, such as flight or the internet, obviously transform human abilities; facial recognition seems merely to encode them. Although faces are peculiar to individuals, they are also public, so technology does not, at first sight, intrude on something that is private. And yet the ability to record, store and analyse images of faces cheaply, quickly and on a vast scale promises one day to bring about fundamental changes to notions of privacy, fairness and trust.

The final frontier

Start with privacy.•


From “The Next Challenge for Facial Recognition Is Identifying People Whose Faces Are Covered,” a James Vincent Verge piece:

The challenge of recognizing people when their faces are covered is one that plenty of teams are working on — and making quick progress.

Facebook, for example, has trained neural networks that can recognize people based on characteristics like hair, body shape, and posture. Facial recognition systems that work on portions of the face have also been developed (although, again, these are not ready for commercial use). And there are other, more exotic methods to identify people. AI-powered gait analysis, for example, can recognize individuals with a high degree of accuracy, and even works with low-resolution footage — the sort you might get from a CCTV camera.

One system for identifying masked individuals developed at the University of Basel in Switzerland recreates a 3D model of the target’s face based on what it can see. Bernhard Egger, one of the scientists behind the work, told The Verge that he expected “lots of development” in this area in the near future, but thought that there would always be ways to fool the machine. “Maybe machines will outperform humans on very specific tasks with partial occlusions,” said Egger. “But, I believe, it will still be possible to not be recognized if you want to avoid this.”

Wearing a rigid mask that covers the whole face, for example, would give current facial recognition systems nothing to go on. And other researchers have developed patterned glasses that are specially designed to trick and confuse AI facial recognition systems. Getting clear pictures is also difficult. Egger points out that we’re used to facial recognition performing quickly and accurately, but that’s in situations where the subject is compliant — scanning their face with a phone, for example, or at a border checkpoint.

Privacy advocates, though, say even if these systems have flaws, they’re still likely to be embraced by law enforcement.•


From “How Apple Is Putting Voices in Users’ Heads—Literally,” a Steven Levy Wired story about Apple technology that could be a boon for the hearing impaired—and, potentially, a bane for all of us:

Merging medical technology like Apple’s is a clear benefit to those needing hearing help. But I’m intrigued by some observations that Dr. Biever, the audiologist who’s worked with hearing loss patients for two decades, shared with me. She says that with this system, patients have the ability to control their sound environment in a way that those with good hearing do not—so much so that she is sometimes envious. How cool would it be to listen to a song without anyone in the room hearing it? “When I’m in the noisiest of rooms and take a call on my iPhone, I can’t hold my phone to ear and do a call,” she says. “But my recipient can do this.”

This paradox reminds me of the approach I’m seeing in the early commercial efforts to develop a brain-machine interface: an initial focus on those with cognitive challenges with a long-term goal of supercharging everyone’s brain. We’re already sort of cyborgs, working in a partnership of dependency with those palm-size slabs of glass and silicon that we carry in our pockets and purses. The next few decades may well see them integrated subcutaneously.

I’m not suggesting that we all might undergo surgery to make use of the tools that Apple has developed. But I do see a future where our senses are augmented less invasively. Pulling out a smartphone to fine-tune one’s aural environment (or even sending vibes to a brain-controlled successor to the iPhone) might one day be as common as tweaking bass and treble on a stereo system.•


“We need to put everything online,” Bill Joy tells Steven Levy in an excellent Backchannel interview, and I’m afraid that’s what we’re going to do. It’s an ominous statement in a mostly optimistic piece about the inventor’s advances in batteries, which could be a boon in creating clean energy.

Of course, Joy doesn’t mean his sentiment to be unnerving. He looks at sensors, cameras and computers achieving ubiquity as a means to help with logistics of urban life. But they’re also fascistic in the wrong hands—and eventually that’s where they’ll land. These tools can help the trains run on time, and they can also enable a Mussolini.

Progress and regress have always existed in the same moment, but these movements have become amplified as cheap, widely available tools have become far more powerful in our time. So we have widespread governmental and corporate surveillance of citizens, while individuals and militias are armed with weapons more powerful than anything the local police possess. This seems to be where we’re headed in America: Everyone is armed in one way or another in a very dangerous game.

When Joy is questioned about the downsides of AI, he acknowledges, “I don’t know how to slow the thing down.” No one really seems to.

An excerpt:

Steven Levy:

In the 1990s you were promoting a technology called Jini that anticipated mobile tech and the Internet of Things. Does the current progress reflect what you were thinking all those years ago?

Bill Joy:

Exactly. I have some slides from 25 years ago where I said, “Everyone’s going to be carrying around mobile devices.” I said, “They’re all going to be interconnected. And there are 50 million cars and trucks a year, and those are going to be computerized.” Those are the big things on the internet, right?

Steven Levy:

What’s next?

Bill Joy:

We’re heading toward the kind of environment that David Gelernter talked about in his book, Mirror Worlds, when he said, “The city becomes a simulation of itself.” It’s not so interesting just to identify what’s out there statically. What you want to do is have some notion of how that affects things in the time domain. We need to put everything online, with all the sensors and other things providing information, so we can move from static granular models to real simulations. It’s one thing to look at a traffic map that shows where the traffic is green and red. But that’s actually backward-looking. A simulation would tell me where it’s going to be green and where it’s going to be red.

This is where AI fits in. If I’m looking at the world I have to have a model of what’s out there, whether it’s trained in a neural net or something else. Sure, I can image-recognize a child and a ball on this sidewalk. The important thing is to recognize that, in a given time domain, they may run into the street, right? We’re starting to get the computing power to do a great demo of this. Whether it all hangs together is a whole other thing.

Steven Levy:

Which one of the big companies will tie it together?

Bill Joy:

Google seems to be in the lead, because they’ve been hiring these kind of people for so long. And if there’s a difficult problem, Larry [Page, Google’s CEO] wants to solve it. Microsoft has also hired a lot of people, as well as Facebook and even Amazon. In these early days, this requires an enormous amount of computing power. Having a really, really big computer is kind of like a time warp, in that you can do things that aren’t economical now but will be economically [feasible] maybe a decade from now. Those large companies have the resources to give someone like Demis [Hassabis, head of Google’s DeepMind AI division] $100 million, or even $500 million a year, for computer time, to allow him to do things that maybe will be done by your cell phone 10 years later.

Steven Levy:

Where do you weigh in on the controversy about whether AI is a threat to humanity?

Bill Joy:

Funny, I wrote about that a long time ago.

Steven Levy:

Yes, in your essay “Why the Future Doesn’t Need Us.” But where are you now on that?

Bill Joy:

I think at this point the really dangerous nanotech is genetic, because it’s compatible with our biology and therefore it can be contagious. With CRISPR-Cas9 and variants thereof, we have a tool that’s almost shockingly powerful. But there are clearly ethical risks and danger in AI. I’m at a distance from this, working on the clean-tech stuff. I don’t know how to slow the thing down, so I decided to spend my time trying to create the things we need as opposed to preventing [what threatens us]. I’m not fundamentally a politician. I’m better at inventing stuff than lobbying.•


Sebastian Thrun is a brilliant guy who was positioned at the starting line of the recent boom in driverless technology, but he’s no stranger to irrational exuberance. A couple years ago, the Udacity founder earnestly announced that “if I could double the world’s GDP, it would be very gratifying to me.” Yes, that would be nice.

The computer scientist and entrepreneur is now employed as CEO of Larry Page’s Kitty Hawk, engaged in trying to perfect the flying car, a vehicle of retrofuture dreams that seems exceedingly unnecessary. Wouldn’t it be far better for society if he and others like him were engaged in innovation aimed at more practical public transportation solutions for the masses? The thing about childhood dreams is that most of them are childish.

Steven Levy held roughly the same view last month when he sat down to interview Thrun for Backchannel (now housed at Wired). The opening:

Steven Levy:

Why do we need flying cars?

Sebastian Thrun: 

It is a childhood dream. Flying is just such a magical thing to do. Making personalized flight available to everybody really opens up a set of new experiences. But in the long term there’s a practicality to the idea of a flying vehicle that takes off vertically like a helicopter, is very quiet, and can serve short range transportation. The ground is getting more and more congested. In the US, road usage increases by about three percent every year. But we don’t build any roads. And countries like China that very recently witnessed an explosion of automotive ownership are suffering tremendously from unbelievable traffic jams. While the ground infrastructure of roads is one-dimensional, the sky is three-dimensional, and it is much, much larger.

Steven Levy:

But if you build flying cars, won’t the air be just as congested?

Sebastian Thrun: 

The nice thing about the air is there is more of it. You could have virtual highways in the sky and stack them vertically. So you never have a traffic intersection or similar.

Steven Levy:

But highways have lanes. You can’t have dotted lines in the sky.

Sebastian Thrun: 

Yes, you can, it turns out. Thanks to the US government we have the Global Positioning System that gives us precision location information. We can paint virtual highways into the sky. We are actually doing this today. When you look at the way planes fly, they use equipment that effectively constructs highways in the sky.

Steven Levy:

Still, the number of planes is tiny compared to cars, which you want to put in the air. Plus, everybody is buying drones. If you folks get your way, the sky is going to be completely full.

Sebastian Thrun: 

Every idea put to the extreme sounds odd.•


Overall I enjoyed Garry Kasparov’s Deep Thinking. Have philosophical disagreements with it, for sure, and there is some revisionism regarding his personal history, but the author’s take on his career developing parallel to the rise of the machines and his Waterloo versus IBM is fascinating. It’s clear that if there had been a different World Chess Champion during Kasparov’s reign, one who lacked his significant understanding of the meaning of computers and his maverick mindset, the game would have been impoverished for it. I’ll try to make time this weekend to write a long review.

The 20-year retrospective on Deep Blue’s 1997 victory would be incomplete without reflection by Steven Levy, who penned the famous Newsweek cover story “The Brain’s Last Stand” as a preface to the titanic match in which humanity sank. (It turns out Levy himself composed that perfectly provocative cover line that no EIC could refuse.)

The writer focuses in part on the psychological games that Deep Blue was programmed to play, an essential point to remember as computers are integrated into every aspect of life–when nearly every object becomes “smart.” Levy points out that no such manipulations were required for DeepMind to conquer Go, but those machinations might be revisited when states and corporations desire to nudge our behaviors.

An excerpt:

The turning point of the match came in Game Two. Kasparov had won the first game and was feeling pretty good. In the second, the match was close and hard fought. But on the 36th move, the computer did something that shook Kasparov to his bones. In a situation where virtually every top-level chess program would have attacked Kasparov’s exposed queen, Deep Blue made a much subtler and ultimately more effective move that shattered Kasparov’s image of what a computer was capable of doing. It seemed to Kasparov — and frankly, to a lot of observers as well — that Deep Blue had suddenly stopped playing like a computer (by resisting the catnip of the queen attack) and instead adopted a strategy that only the wisest human master might attempt. By underplaying Deep Blue’s capabilities to Kasparov, IBM had tricked the human into underestimating it. A few days later, he described it this way: “Suddenly [Deep Blue] played like a god for one moment.” From that moment Kasparov had no idea what — or who — he was playing against. In what he described as “a fatalistic depression,” he played on, and wound up resigning the game.

After Game Two, Kasparov was not only agitated by his loss but also suspicious at how the computer had made a move that was so…un-computer like. “It made me question everything,” he now writes. Getting the printouts that explained what the computer did — and proving that there was no human intervention — became an obsession for him. Before Game Five, in fact, he implied that he would not show up to play unless IBM submitted printouts, at least to a neutral party who could check that everything was kosher. IBM gave a small piece to a third party, but never shared the complete file.

Kasparov was not the same player after Game Two.•


It’s only possible to guess from afar, but I’d wager that the diminution of neo-Nazi trolls and bots on Twitter in the post-election period isn’t largely due to tweaks made by Jack Dorsey & co., but has rather come about because those deployed to disrupt the election are no longer receiving checks from the Kremlin and god knows where else.

While the tweetstorm shitstorm has abated to a degree, we won’t know until the next election season if that rough beast is just waiting to be reborn. And things must be different next time because the tool was used like a weapon in 2016, bludgeoning democracy and decency.

The company isn’t in an easy position when it comes to the Tweeter-in-Chief, who uses the platform to slander, something that might not be tolerated from those with lesser titles.

When Dorsey asserts in a smart Backchannel Q&A conducted by Steven Levy that Trump’s current tweets are “consistent with his tweets back in 2011-2012,” I have to assume he’s referring to form and not content, since the pathological liar now regularly contradicts his earlier criticisms of President Obama. People have fun retweeting his old comments to point out the hypocrisy, which I suppose is useful, or at least entertaining, but we may be entertaining ourselves to death.

One thing that’s not consistent with 2011-2012 is that Trump is now President and his 140-character discharges can lead to international incidents, even wars. 

An excerpt in which Dorsey has clearly bought into the “forgotten Americans” narrative, which isn’t exactly Silicon Valley’s biggest problem:


Steven Levy:

Lately, a lot of people have been alleging that social media, including Twitter, has degraded the quality of public discourse. What do you think?

Jack Dorsey:

You can have conversation that’s distracting and you can have conversation that is focusing. I don’t think it’s a matter of the tool — it’s how people use the tool. Could we encourage better usage of Twitter through changing the product? Absolutely. We are always going to be looking for opportunities to make it easier, but also to show what matters faster. We moved from a completely time-ordered, reverse-chronological timeline to actually bubbling up what you should be seeing and what matters according to our understanding of what you’re interested in—and potentially showing the other side of what you’re interested in, as well. One of the values Twitter espouses is that it can show every side of a debate. I get the New York Times and I follow Fox, too, because I just want to challenge what I’m seeing. And that’s awesome. Whether you choose to dive into it or not is really up to you. We’re not going to force that on people.


Steven Levy:

But do you think Silicon Valley has worsened the divide?

Jack Dorsey:

It’s not just technology companies that are out of touch with a big part of the country and the world. I think it’s all of us. I think this city is out of touch with Missouri, where I’m from, and other people in areas like that. It is our responsibility to help bridge some of those gaps, because we’re building tools that people are using on a daily basis to connect with each other and to see the world. If we’re only fulfilling their bias, then we’re doing the wrong thing. We feel that burden and we want to help fix it. And the only way we can do that is by talking with people. So we go out of our way to listen and to have real conversations, not just seeing what people are saying on Twitter but actually bringing people in and interviewing them and talking about what they like and what they don’t like and what they are experiencing. The question I ask of anyone I meet who uses Twitter is: How do you use it, and why?•



There’s something inherently human about people trying to make machines speak. It may seem a paradox, but history has proven it so.

Just one example: Nearly 170 years before Siri, German inventor Joseph Faber demonstrated his Talking Machine, most commonly known as “Euphonia,” which was able to speak sentences in a human, if monotone, voice. The marvel became a staple of Barnum’s shows, billed as the “Scientific Sensation of the Age,” though it proved a fleeting success.

With Siri, machine conversation was here to stay. Amazon’s Echo, the Alexa-enabled next big thing after Apple’s invention, can answer all manner of questions, but the company wants long conversations, not quick exchanges, the better to keep you engaged with its products for longer spans. Such dialogues may make us less lonely, though we’ll become a little more artificial as gadgets become increasingly “real.”

In an interesting Backchannel interview, Steven Levy questions Rohit Prasad, Amazon’s VP of Alexa. One passage about human-machine conversation:

Rohit Prasad:

Are you aware of the Alexa Prize competition?

Steven Levy:

This is the $2.5 million challenge to computer science students that you announced in September?

Rohit Prasad:

Yes. In academia it’s hard to do research in conversation areas because they don’t have a system like Alexa to work with. So we are making it easy to build new conversational capabilities with a modified version of the Alexa skills kit. This grand challenge is to create a social bot that can carry on a meaningful, coherent, and engaging conversation for 20 minutes.

Steven Levy:

Would that be a Turing-level kind of conversation, do you think?

Rohit Prasad:

No, the Turing test comes down to human gullibility — can you fool an outsider into thinking it’s a human? If you think about certain tasks, Alexa is already better than a human. It’s super hard for a human to play a particular song out of millions of catalog entries within a second, right? If you ask Alexa to compute factorial of 60, that’s hard for a human. So we definitely did not want it to be like a Turing test. It’s more about coherence and engagement.

Steven Levy:

What are people going to be talking about in these 20-minute conversations with Alexa?

Rohit Prasad:

We are giving topics. Like, “Can you talk on the trending topics in today’s newspaper?” We expect the social bot to be able to chat with you on topics like scientific inventions, or the financial crisis.•


Perhaps I’m too much a product of the West, but I think the downfall of autocratic societies, especially protectionist ones, is contained in their DNA, China included. The system seems antithetical to the human spirit and opposed to nurturing a creative class. That could be the reason why China has thus far not produced any great products.

That said, it’s impossible to overlook how far and fast China’s economy has grown, all while dominating its own massive market. In a smart Backchannel article, Steven Levy analyzes, in the wake of Uber’s capitulation, the impervious nature of the nation’s tech sector for American companies. An excerpt:

China is the world’s biggest internet market, and it’s destined to become the leading economy of this century. American technology companies are desperate to compete there, with dreams of reaching the same dominant market share in China that they have elsewhere in the world. But instead of commercial triumph, there has been a series of ignominious retreats, even for some of the most glorious pillars of American tech: Amazon, eBay, Google, and so on. Meanwhile, Facebook hasn’t even gotten far enough in the market to make a retreat. It keeps edging closer, even to the point where its CEO has learned to speak Mandarin—but can’t figure out how to enter the country while still following China’s strict rules of censorship and control of data.

Uber was the latest gladiator, and seemingly one that had a chance at victory. It was going head to head with its Chinese rival Didi with a war chest full of cash and a world domination mentality. As late as this past June, Uber was predicting it would pass its rival within a year. Now Uber is simply the most recent American internet giant who decided China was not worth the fight. And it probably won’t be the last.

China is hard. The reasons differ according to the sector and the company, but the combination of culture, nationalism, and especially a government that likes to tilt the playing field has prevented American giants who excel overseas from dominating in China. This is not to say that Chinese government regulation drove Uber’s deal with Didi, which was clobbering Uber in the ride-sharing market; in fact, Uber felt it was treated fairly by a government interested in transportation innovation. According to reports on the ground, Didi used its local knowledge to act more nimbly in satisfying Chinese customers. But my guess is that if the American ride-sharing company had been more successful, China would have put a Mao-sized thumb on the scales.•



Steve Case won’t be around to read his obituary, which is probably a good thing.

It would no doubt pain him that the lead will be the disastrous America Online-Time Warner merger, an attempt at synergy that wound up a lose-lose of historic proportions. Case, then the AOL CEO, bet on old media at a time when he needed to walk even more boldly into the future with the Internet. It was one step backwards, and he lost his leg.

AOL has long been done as a major player in any sector, but Case continues apace, with entrepreneurial endeavors and charitable work. Steven Levy just interviewed him about his book, The Third Wave: An Entrepreneur’s Vision of the Future, an attempt to predict what comes after Web 1.0 and 2.0. The journalist ventures into an apt topic in this insane political season: If technology has gifted us with more information than ever, why does the public seem less informed?

An excerpt:

Steven Levy:

In the book you include a very prescient statement you made after graduating college in the early 1980s about how technology would affect our lives. We have been transformed by all sorts of gadgets and networks that augment our powers. But judging from the current election process, it doesn’t seem to have made people smarter. You could even make a case for the opposite, saying people are dumber — anti-science, and more susceptible to mob thinking than they used to be.

Steve Case:

That’s fair. One of the things we felt passionate about 30 years ago was leveling the playing field so that everybody can have a voice. Back then when there were three television networks, unless you were rich and owned a printing press, you didn’t really have the opportunity to have your voice heard. Having millions of voices heard is awesome, but it gets noisy and some people are saying things that are inaccurate and not constructive and worse. There is absolutely this dynamic, of people living in a filtered bubble, hearing voices that reinforce their views and not really being exposed to the views of other people. That drives this hyper partisanship. I’m very concerned about it. We need to figure how to rebuild a center. Compromise should become a good word, not a bad word.

Steven Levy:

Has technology made it harder to find compromise?

Steve Case:

It has. In high school I wouldn’t have said this, but also sometimes to reach compromise you have to have a quiet discussion and cut a deal. When you have to have those negotiations, essentially in public, and talking points and sound bites on two-minute cable TV, things get noisier and it gets less constructive. With the current election, it is noisy and a little uncomfortable. The political process is getting disrupted.•



With his AI enterprise, Viv, Dag Kittlaus is not trying to create Frankenstein but Igor, a digital assistant that goes far beyond Siri (which he co-created), one that attaches a pleasing voice to a “giant brain in the sky.” It’s not easy, however, for technologists to discuss such heady enterprises without offering phrases, like the one in the headline, embedded with unintended meanings.

From Marco della Cava’s USA Today piece about technology talk at SXSW, which featured Steven Levy questioning Kittlaus:

When the moderator, tech author Steven Levy, asked Kittlaus if in fact supercomputers might not take over for entrepreneurs, using their digital brains to create things faster than humans, [Dag] Kittlaus nodded.

“Yes, it will happen,” he said. “It’s just a matter of when.”

Kittlaus, it can be argued, is hastening the arrival of that day. Later this year, he will unveil Viv, an open source and cloud-based personal assistant that will allow humans “to talk to the Internet” and have the Internet talk back.

“The more you ask of Viv, the more it will get to know you,” he said. “Siri was chapter one, and now it’s almost like a new Internet age is coming. Viv will be a giant brain in the sky.”

Kittlaus said Viv would differ from Siri, Microsoft’s Cortana and Amazon’s Echo by being able to make mental leaps.

For example, asking Viv “What’s the weather near the Super Bowl” would cause it to “write its own program to find the answer, one that first determines where the Super Bowl is, and then what the weather will be in that city,” he said.•


In addition to yesterday’s trove of posts about the late, great Marvin Minsky, I want to refer you to a Backchannel remembrance of the AI pioneer by Steven Levy, the writer who had the good fortune to arrive on the scene at just the right moment in the personal-computer boom and the great talent to capture it. The journalist recalls Minsky’s wit and conversation almost as much as his contributions to tech. Just a long talk with the cognitive scientist was a perception-altering experience, even if his brilliance was intimidating. The opening:

There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening.

But maybe Marvin could imagine it. His imagination respected no borders.

Minsky died Sunday night, at 88. His body had been slowing down, but that mind had kept churning. He was more than a pioneering computer scientist — he was a guiding light for what intellect itself could do. He was also our Yoda. The entire computer community, which includes all of us, of course, is going to miss him. 

I first met him in 1982; I had written a story for Rolling Stone about young computer hackers, and it was optioned by Jane Fonda’s production company. I traveled to Boston with Fonda’s producer, Bruce Gilbert, and Susan Lyne, who had engineered my assignment to begin with. It was my first trip to MIT; my story had been about Stanford hackers.

I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement.•


In a Backchannel piece, Steven Levy shares most things he learned during an inside look at Google’s autonomous-car mission command at the decommissioned Castle Air Force Base in Atwater, California. Most of the (non-)drivers hired to put miles on the vehicles are recent Liberal Arts grads who test the prototypes on streets in Mountain View and Austin. Some are even employed as human props, known as “professional pedestrians.” “We just have to learn to trust,” one tells Levy. It seems the tight-lipped company’s testing of the cars may have gone beyond what people realize.

An excerpt:

Google’s ultimate goal, of course, is to make a transition from testing to systems where no safety drivers are needed — just passengers. For some time, Google has been convinced that the semiautonomous systems that others champion (which include various features like collision prevention, self-parking, and lane control on highways) are actually more dangerous than the so-called Level Four degree of control, where the car needs no human intervention. (Each of the other levels reflects a degree of driver involvement.) The company is convinced that with cars that almost but don’t drive themselves, humans will be lulled into devoting attention elsewhere and unable to take quick control in an emergency. (Google came to that conclusion when it allowed some employees to commute with the cars, using autodrive only on premapped freeways. One Googler, perhaps forgetting that the company was capturing the whole ride on video, pretty much crawled into the backseat for a phone charger while the car sped along at 65 miles per hour.)

Google also believes that cars should be able to move around even with no humans in them, and it has been hoping for an official go-ahead to begin a shuttle service between the dozens of buildings it occupies in Mountain View, where slow-moving, no-steering-wheel prototypes would putter along by themselves to pick up Googlers. It was bitterly disappointed when the California DMV ruled it was not yet time for driverless cars to travel the streets, even in those limited conditions. The DMV didn’t even propose a set of requirements that Google could satisfy to make this happen. Meanwhile, Elon Musk, CEO of Tesla, is barreling ahead, introducing a driverless feature in his Tesla cars called Summon. He predicts that by 2018, Tesla owners will be able to summon their cars from the opposite coast, though it’s a mystery how the cars would recharge themselves every 200 or so miles.

But maybe Musk is not the first. When I discussed this with [program director Chris] Urmson, he postulated that in most states — California not among them — it was not illegal to operate driverless cars on public streets. I asked him whether Google had sent out cars with no one in them to pick up people in Austin. He would not answer.•

Tags: ,


It’s difficult to imagine anything as intractable as a Big Auto corporation with thousands of employees and shareholders, and they don’t get more venerable than Ford, birthplace of the Model T, brainchild of the namesake plutocrat who was sometimes a populist but just as often an employer of strike-breaking Pinkertons. It was Henry Ford, after all, who sold America its first set of wheels.

That legendary car maker is interested in reinventing itself as a “smart mobility” company, as Steven Levy learned while poking around the premises. In a smart Backchannel piece, Levy writes that old Hank’s great-grandson, William (Bill) Clay Ford, Jr., doesn’t fear Apple or Tesla, believing his own outfit can achieve bleeding-edge Digital Age greatness, that Detroit’s most famous name can compete with Silicon Valley and its EVs and ride-sharing and autonomous vehicles.

The opening:

Is the Ford Motor Company…pivoting?

Startups do it all the time, occasionally with seismic consequences. Android was originally conceived as an operating system for cameras. Slack began as a video game. Airbnb really was all about air mattresses. But none of these companies was a 113-year-old pillar of the economy, with 197,000 employees, billions of dollars spent on branding, and countless tons of metal emblazoned with the company logo rumbling along the world’s roadways. The mind reels at the notion that Ford — Ford! — would change directions like an angel-funded six-person SOMA venture switching gears after a failed app.

Yet that’s what the Ford Motor Company seems to be doing. Or at least that’s what I sensed when I attended a Ford media day in Dearborn, Michigan, last month. (It was a palate cleanser for 2016 events — CES, followed by this week’s giant Detroit Auto Show.) The point of the day was to emphasize Ford’s evolving strategy. Making cars will remain a big part of Ford, but the company is committed to an additional but vital business model, a high-tech effort based on “smart mobility.” This approach not only doesn’t focus on selling vehicles, but even embraces some instances where potential car owners might forgo a Ford, or any other vehicle, in their driveway. Part of the vision would even point people to public transit. Sounds like a sea change to me.

To confirm whether this is indeed an epochal moment, I tap the perfect source: William Clay Ford, Jr. He’s executive chairman (and a former CEO) of the company founded by his great-grandfather in 1903, and he’s altogether one of the most intriguing figures in the auto industry; his weaves between anachronism and futurism qualify him for a cognitive DUI. Bill, I ask (Can I call you Bill?), is Ford attempting the biggest pivot of all time?

Tags: , ,


The next phase of Artificial Intelligence may be top-heavy initially but not for long. As with the Internet, it will be unloosed into the world, into the hands of individuals, and that makes for both wonderful and awful possibilities. It’s interesting that Elon Musk, who fears superintelligence may be an existential risk for the species, favors an arrangement in which as many as possible will possess key AI information. He feels there’s safety in numbers. Perhaps. Some of the interested parties will have bad intentions, of course, bad intentions and powerful tools. 

An excerpt from Steven Levy’s Backchannel interview with Musk and other leaders of OpenAI:

Elon Musk:

As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501c3, a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.

And then philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.

Steven Levy:

Human will?

Elon Musk:

As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email and you’ve got the social media and with apps on your phone — they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.•

Tags: ,

For all his many flaws, Timothy Leary did prove to be prescient in numerous ways. One was his abandonment of LSD for a new and more powerful drug: computer software. The good doctor believed psychedelics, while a useful first step out of conformity, would become a crutch. He wanted to “plug into” a better machine. Computers (and space travel), he believed, would offer far wider horizons and deeper questions about reality. It is funny that Leary wanted us to stop being “beloved robots” only to devote his attention to actual ones. But he was right: Even yesterday’s bleeding-edge drugs were blunt instruments. 

On a related topic, Steven Levy has written “Inside Deep Dreams: How Google Made Its Computers Go Crazy,” a beauty of a Backchannel piece about computer scientist Alexander Mordvintsev, who fell into a terrible dream and awoke to the possibilities of artificial neural networks. He educated himself in the way of NNs and then rerouted them in a novel way, from passive to active, from literal to metaphorical. Of his initial experimentations, Levy writes that the “image looked like the work of a mad person. Or someone on LSD.”

The opening:

In the very early hours of May 18, 2015, Alexander Mordvintsev was wrenched from sleep. A nightmare, he later described it to me, in the first interview he has granted on the experience. Or, at least, a dream, a deeply disturbing dream, where an intruder had crossed the threshold of the Zurich apartment that he, his pregnant wife, and his 3-year-old son had been occupying for the past few months. They had moved to Switzerland from St. Petersburg that last November, when the Russian computer scientist got a job at Google’s engineering center there.

Now it was darkest night and Mordvintsev, jarred awake by his savage slumber, leapt from the bed to check the door. It was closed; all was quiet. But his mind was afire. Okay, it’s 2 a.m., but I can’t sleep, he told himself. So time to write a few lines of code.

It would be a decision that would eventually unleash a torrent of fantastic images, torn from an alien perspective, that intrigued and twisted the minds of those who viewed them. A decision that would reveal the power of artificial neural nets, our potential future overlords in an increasingly one-sided relationship with machine intelligence. And a decision that would change Mordvintsev’s own life.•


Tags: , ,


It bothers me to no end that Ron Howard’s Cinderella Man depicts boxer Max Baer as a semi-psychotic villain for the sake of narrative convenience. It’s cinematic license taken to an ugly extreme.

In general, the Hollywood biopic is a troubling compromise that will satisfy no one completely–or at least it shouldn’t. The best-case scenario is that you come away with some sort of an impressionistic truth but realize that, no, Richard Nixon never made a drunken, late-night phone call to David Frost.

Perhaps each film should be labeled with a Surgeon General-ish warning: “Believing the events of this film are true can be injurious to history.” That agreement has always been tacit, but I can’t tell you how many people over the years have cited the “facts” in Oliver Stone’s overwrought bullshit JFK. There’s really no easy answer.

Steven Levy, who reported on Steve Jobs and knew him, was troubled by his portrayal in the new Aaron Sorkin-Danny Boyle film. In a Backchannel Q&A, he interviewed the former about writing a screenplay on an actual historical figure. An excerpt:

Steven Levy:

Let’s take a specific example of history and fabrication. In the first act, you have Steve’s obsession with the 1983 Time Magazine story about him. You’re right to zero in on that — he was complaining about that when I interviewed him for Rolling Stone before the Macintosh launch, and he was complaining about it 20 years later.

Aaron Sorkin:

That’s right.

Steven Levy:

But you took it a step farther. In your screenplay, someone at Apple ordered boxes of the magazine and was going to place one on every seat in the shareholders’ meeting until someone figured out it would make Steve crazy. In real life, that didn’t happen.

Aaron Sorkin:

Right. That’s exactly the kind of thing I don’t mind making up. Here is what’s true, here is the important truth. As a matter of happy coincidence, Walter Isaacson, who was at Time Magazine in 1983 when all this happened, was able to tell me that Steve was never in the conversation for Man of the Year. Steve had always blamed Dan Kottke for spilling the beans in that article about Steve having to take a paternity test and that whole situation with Lisa and believed that was the reason why he didn’t get the cover. But, as Walter pointed out, it had nothing to do with Kottke — if you look at the cover, it’s a sculpture of a man at a desk with a computer, and that sculpture would have had to have been commissioned months and months in advance. In fact, the sculptor himself is a well known guy whose name I forget.

So that information is something that I want to use. I want to use it to introduce the paternity issue, I want to use it because it’s going to pay off in the third act both when Joanna [Hoffman] is giving a demonstration of his reality distortion field… And the final payoff is that Lisa, who now has Internet access at school, has read it — has read about her father denying that he’s her father.

So I never worried that what the audience was going to go away with was there were cartons and cartons of Time Magazine backstage at this event. It didn’t seem to be important that the audience gets that right or wrong, that it was a fact of history. It has no negative effect on anyone’s life. You can’t say, who was the idiot who put those cartons of Time Magazine backstage? But that [represented] something truer and I felt this was an interesting way to dramatize it.•

Tags: , ,

Carly Fiorina’s disastrous stretch as Hewlett-Packard’s CEO can be summed up thusly: a huge debacle, a golden parachute, a long period of inactivity. Now she’s trying to ride failure and inertia to the White House, and in this strange era of anti-politics, she’s actually one of the Republican frontrunners, seemingly rewarded for her lack of government experience.

Fiorina namechecked Steve Jobs at the most recent GOP debate, trying to get a posthumous rub from our era’s most-celebrated businessperson. Steven Levy, having written the book on the iPod (quite literally), recalls in a Backchannel article how the Apple chief defeated Fiorina in a landslide in the two companies’ dealings. Let’s put it this way: Steve Jobs would have been a terrible President, and the person he clearly outmaneuvered maybe shouldn’t get the gig, either.

From Levy:

Ms. Fiorina’s trainwreck stint at HP has been well documented. But I want to address one tiny but telling aspect of her misbegotten reign: an episode that involved her good friend Steve Jobs. It is the story of the HP iPod.

The iPod, of course, was Apple’s creation, a groundbreaking digital music player that let you have “a music library in your pocket.” Introduced in 2001, it gained steam over the next few years and by the end of 2003, the device was a genuine phenomenon. So it was news that in January 2004, Steve Jobs and Carly Fiorina made a deal where HP could slap its name on Apple’s wildly successful product. Nonetheless, HP still managed to botch things. It could not have been otherwise, really, because Steve Jobs totally outsmarted the woman who now claims she can run the United States of America.

I can talk about this with some authority. Not only have I written a book about the iPod, but I interviewed Fiorina face to face when she introduced the HP iPod at the 2004 Consumer Electronics Show, and then got Steve Jobs’s side of the story.•


Tags: , ,

There is a fascinating premise underpinning Steven Levy’s Backchannel interview with Jerry Kaplan, the provocatively titled “Can You Rape a Robot?”: AI won’t need to become conscious for us to treat it as such, for the new machines to require a very evolved sense of morality. Kaplan, the author of Humans Need Not Apply, believes that autonomous machines will be granted agency if they can only mirror our behaviors. Simulacrum on an advanced level will be enough. The author thinks AI can vastly improve the world, but only if we’re careful to make morality part of the programming.

An exchange:

Steven Levy:

Well by the end of your book, you’re pretty much saying we will have robot overlords — call them “mechanical minders.”

Jerry Kaplan:

It is plausible that certain things can [happen]… the consequences are very real. Allowing robots to own assets has severe consequences and I stand by that and I will back it up. Do I have the thing about your daughter marrying a robot in there?

Steven Levy:


Jerry Kaplan:

That’s a different book. [Kaplan has a sequel ready.] I’m out in the far future here, but it’s plausible that people will have a different attitude about these things because it’s very difficult to not have an emotional reaction to these things. As they become more a part of our lives people may very well start to inappropriately imbue them with certain points of view.•

Tags: ,

One of the really wonderful online publications to pop up recently is Steven Levy’s Backchannel, which is full of interesting ideas about our technological world, how we got here and where we’re headed.

Case in point: After 1.7 million miles of road tests, Chris Urmson, director of Google’s self-driving program, has written a piece about the search giant’s foray into completely remaking transportation and reorganizing cities, reducing traffic and pollution. 

Of course, it’s not Urmson’s job to worry about the societal upheaval that will arise should autonomous vehicles be perfected. That will be up to you and me. While robocars will likely save lives, they will kill so many jobs. If new industries don’t emerge to replace these disappeared positions, how do we proceed? It’s not about holding back progress but dealing with disruption in an intelligent and equitable manner. 

Anyhow, Urmson reports fewer than a dozen fender benders thus far for Google’s driverless cars, with all being caused by human drivers. His analysis unsurprisingly favors a driverless future, but it would be pretty widely reported if he was characterizing the safety record inaccurately. An excerpt:

If you spend enough time on the road, accidents will happen whether you’re in a car or a self-driving car. Over the 6 years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident.

Rear-end crashes are the most frequent accidents in America, and often there’s little the driver in front can do to avoid getting hit; we’ve been hit from behind seven times, mainly at traffic lights but also on the freeway. We’ve also been side-swiped a couple of times and hit by a car rolling through a stop sign. And as you might expect, we see more accidents per mile driven on city streets than on freeways; we were hit 8 times in many fewer miles of city driving. All the crazy experiences we’ve had on the road have been really valuable for our project. We have a detailed review process and try to learn something from each incident, even if it hasn’t been our fault.

Not only are we developing a good understanding of minor accident rates on suburban streets, we’ve also identified patterns of driver behavior (lane-drifting, red-light running) that are leading indicators of significant collisions. Those behaviors don’t ever show up in official statistics, but they create dangerous situations for everyone around them.

Lots of people aren’t paying attention to the road.

Tags: ,

Albert Einstein deservedly wrested the “greatest Jew since Jesus” title from the far inferior Leon Trotsky, but having such a beautiful mind came with costs, and it’s been well-documented that the scientist’s brain had a bumpy life of its own after his passing in 1955. Steven Levy, the best tech reporter of the personal-computing era, located the great man’s gray matter, in 1978, for New Jersey Monthly. He recalls the strange reconnaissance mission in the Backchannel piece “Yes, I Found Einstein’s Brain“:

The reporters who by then had heard of the news and begun gathering in Princeton did not have access to the body. According to his wishes, Einstein’s body was incinerated. The cremation took place at 4:30 that day in Trenton. Nathan disposed of the ashes in the Delaware River.

But not all of the body was cremated. According to an article in the New York Times that ran on April 20, the brain was saved for study. The headline was “KEY CLUE SOUGHT IN EINSTEIN BRAIN.” That article was the last piece of actual news regarding Einstein’s brain that would appear for over 20 years.

The next piece of news would come from me.

“I want you to find Einstein’s brain.”

My editor was giving me the weirdest assignment in my young career. It was the late spring of 1978. I was working for a regional magazine called New Jersey Monthly, based in Princeton, New Jersey. It was my first real job. I was 27 years old and had been a journalist for three years.

The editor, a recent hire named Michael Aron, had come to New Jersey with a white whale of a story idea, one that he once had begun himself but gotten nowhere on. Years earlier, he had put together a package at Harper’s magazine on brain science. He had read Ronald Clark’s magisterial biography of Albert Einstein, and had been fascinated by one phrase at the end.

“He had insisted his brain be used for research…”

What had happened to the brain? Aron wondered. He had seen that April 20 New York Times article. But that seemed to be the last mention of the brain. He looked at all sorts of indexes of publications and journals for any hint of a study and couldn’t find a thing. He wrote to Ronald Clark; the biographer didn’t know. Clark referred Aron to Nathan, the executor of the estate. Nathan’s prompt response was a single terse paragraph. He confirmed that the brain had been removed during the autopsy, and the person performing the procedure had been a pathologist named Thomas Harvey. “As far as I know,” Nathan wrote, “he is no longer with the hospital.” And that was it. Aron had hit a dead end.

But Aron never gave up on the idea, and when he got to New Jersey — where Einstein had lived and died, right there in Princeton — he immediately assigned me the story. He scheduled it for our August cover story. It was late spring. I had about a month.

Tags: ,

Dan Pfeiffer, the outgoing White House communications advisor who planted President Obama between two ferns among other off-center platforms, spoke with Steven Levy at Backchannel about POTUS PR in a time of social media and selfie sticks and the future of such non-traditional communications. He sees a long-tail tomorrow. An excerpt:

Steven Levy:

How do you picture White House communications in the future—what’s your vision of the environment in 2020?

Dan Pfeiffer:

A bigger part of the job for White House government officials will be online engagement. If you’re doing climate change policy in the White House, instead of getting X number of hours a week to meet with the environmental groups, you will be spending time on Twitter, Facebook or whatever the next social platforms are, engaging people who are interested in that topic. You will not be reaching the quantity of people that you would reach by having a big broadcast television interview but the quality of the outreach will be better because you’ll be getting very engaged people who can take action on behalf of the thing you care about.

And I think that—and this one is tricky—a White House will have to have many more resources dedicated to producing content. We have a lot of people around here who write written words—speeches, talking points, press releases—and you will need people who are creating visual, graphical and video images to communicate the same message. It’s tricky because you don’t want to be in a world where it is propaganda. You’re going to have to vet this and give it scrutiny, but there is an insatiable appetite for content out there. Your traditional news outlets don’t have the resources to produce the amount of content that the Internet requires on a 24/7 basis.

There’s this funny thing where it’s like, if we put out a press release, it is accepted as a proper form of Presidential communication. But if we put out a video, that’s somehow propaganda. The mentality is going to have to shift [to acknowledge that] a video is just a more shareable, more enjoyable way of communicating the same information as the press release. Everyone is going to have to adjust to that.•

Tags: , ,

The always great Steven Levy filed a story at Backchannel about Twitter usage measured by neuroscience, which revealed greater stimulation of emotion and memory in test subjects than they displayed during more general web use. Not surprising, since personal engagement is more intense in tweets than on newsfeeds, even personalized ones. Clearly such info can be, for better or worse, used by the company for neuromarketing purposes. An excerpt:

[Twitter senior director of market research Jeffrey] Graham’s team arranged a study at Twitter’s UK headquarters. One hundred and fourteen people participated, in groups of around twenty. Videos of the sessions show people putting on the helmets, which look like a cross between a Snoopy-style Red Baron helmet and a polka dot shower cap destined for Katy Perry’s cranium in a music video.

Then, during 45-minute sessions, they alternated between normal Web-surfing activities and using Twitter — reading their timelines, tweeting, and other birdy stuff.

Graham had hoped that the brain profiles of people using Twitter would show the difference between his employer and more static Web use. “When I go on Twitter, oftentimes I really get sucked into it,” he says. “I get this strong anticipation to see what engagement is going to be.” But he admits that he had no idea what the data would actually show.

The results were more than he’d dreamed of. The study first tried to measure a neural signature that tends to correlate with information relating to you—a “sense of personal relevance.” It did this by comparing how participants’ brains activated when either passively scrolling and browsing on Twitter, actively tweeting and retweeting, or engaging in normal online activity. The brain data suggested that passive Twitter use increased a sense of personal relevance by 27 percent. Active use boosted that number to 51 percent. The representative from NeuroInsight told Twitter that in all the testing the research company has done, there’s been only one result as high: when people opened personal mail. (The physical kind.)

The most dramatic results reflected emotional intensity.•


In a Backchannel interview largely about strategies for combating global poverty, Steven Levy asks Bill Gates about the existential threat of superintelligent AI. The Microsoft founder sides more with Musk than Page. The exchange:

Steven Levy:

Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates:

I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.•

Tags: ,

There’s a line near the end of 1973’s Westworld, after things have gone haywire, that speaks to concerns about Deep Learning. A technician, who’s asked why the AI has run amok and how order can be restored, answers: “They’ve been designed by other computers…we don’t know exactly how they work.”

At Google, search has never been the point. It’s been an AI company from the start, Roomba-ing information to implement in a myriad of automated ways. Deep Learning is clearly a large part of that ultimate search. On that topic, Steven Levy conducted a Backchannel interview with Demis Hassabis, the company’s Vice President of Engineering for AI projects, who is a brilliant computer-game designer. For now, it’s all just games. An excerpt:

Steven Levy:

I imagine that the more we learn about the brain, the better we can create a machine approach to intelligence.

Demis Hassabis:

Yes. The exciting thing about these learning algorithms is they are kind of meta level. We’re imbuing it with the ability to learn for itself from experience, just like a human would do, and therefore it can do other stuff that maybe we don’t know how to program. It’s exciting to see that when it comes up with a new strategy in an Atari game that the programmers didn’t know about. Of course you need amazing programmers and researchers, like the ones we have here, to actually build the brain-like architecture that can do the learning.

Steven Levy:

In other words, we need massive human intelligence to build these systems but then we’ll —

Demis Hassabis:

… build the systems to master the more pedestrian or narrow tasks like playing chess. We won’t program a Go program. We’ll have a program that can play chess and Go and Crosses and Drafts and any of these board games, rather than reprogramming every time. That’s going to save an incredible amount of time. Also, we’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain. As humans, if I show you some new board game or some new task or new card game, you don’t start from zero. If you know how to play bridge and whist and whatever, I could invent a new card game for you, and you wouldn’t be starting from scratch—you would be bringing to bear this idea of suits and the knowledge that a higher card beats a lower card. This is all transferable information no matter what the card game is.•

Tags: ,

The computer revolution was not the residue of phreaked phones but of gronked trains. From a Medium post by Steven Levy, the preeminent tech journalist of the personal-computing age, an excerpt about MIT’s 1950s-era Tech Model Railroad Club:

“Peter Samson had been a member of the Tech Model Railroad Club since his first week at MIT in the fall of 1958. The first event that entering MIT freshmen attended was a traditional welcoming lecture, the same one that had been given for as long as anyone at MIT could remember. LOOK AT THE PERSON TO YOUR LEFT . . . LOOK AT THE PERSON TO YOUR RIGHT . . . ONE OF YOU THREE WILL NOT GRADUATE FROM THE INSTITUTE. The intended effect of the speech was to create that horrid feeling in the back of the collective freshman throat that signaled unprecedented dread. All their lives, these freshmen had been almost exempt from academic pressure. The exemption had been earned by virtue of brilliance. Now each of them had a person to the right and a person to the left who was just as smart. Maybe even smarter.

There were enough obstacles to learning already—why bother with stupid things like brown-nosing teachers and striving for grades? To students like Peter Samson, the quest meant more than the degree.

Sometime after the lecture came Freshman Midway. All the campus organizations—special-interest groups, fraternities, and such— set up booths in a large gymnasium to try to recruit new members. The group that snagged Peter was the Tech Model Railroad Club. Its members, bright-eyed and crew-cutted upperclassmen who spoke with the spasmodic cadences of people who want words out of the way in a hurry, boasted a spectacular display of HO gauge trains they had in a permanent clubroom in Building 20. Peter Samson had long been fascinated by trains, especially subways. So he went along on the walking tour to the building, a shingle-clad temporary structure built during World War II. The hallways were cavernous, and even though the clubroom was on the second floor it had the dank, dimly lit feel of a basement.

The clubroom was dominated by the huge train layout. It just about filled the room, and if you stood in the little control area called ‘the notch’ you could see a little town, a little industrial area, a tiny working trolley line, a papier-mâché mountain, and of course a lot of trains and tracks. The trains were meticulously crafted to resemble their full-scale counterparts, and they chugged along the twists and turns of track with picture-book perfection. And then Peter Samson looked underneath the chest-high boards which held the layout. It took his breath away. Underneath this layout was a more massive matrix of wires and relays and crossbar switches than Peter Samson had ever dreamed existed. There were neat regimental lines of switches, and achingly regular rows of dull bronze relays, and a long, rambling tangle of red, blue, and yellow wires—twisting and twirling like a rainbow-colored explosion of Einstein’s hair. It was an incredibly complicated system, and Peter Samson vowed to find out how it worked.

There were two factions of TMRC. Some members loved the idea of spending their time building and painting replicas of certain trains with historical and emotional value, or creating realistic scenery for the layout. This was the knife-and-paintbrush contingent, and it subscribed to railroad magazines and booked the club for trips on aging train lines. The other faction centered on the Signals and Power Subcommittee of the club, and it cared far more about what went on under the layout. This was The System, which worked something like a collaboration between Rube Goldberg and Wernher von Braun, and it was constantly being improved, revamped, perfected, and sometimes ‘gronked’—in club jargon, screwed up. S&P people were obsessed with the way The System worked, its increasing complexities, how any change you made would affect other parts, and how you could put those relationships between the parts to optimal use.”


In a Medium post, Steven Levy interviews Andrew Conrad of Google X, who explains the company’s attempt at an early-detection health system: miniaturized computer particles that “live” in your bloodstream and are read by an external wearable, something akin to a Star Trek tricorder. An excerpt:

Andrew Conrad:

“The first thing you realize is the triggers of diseases usually start way before they’re clinically apparent. They are usually subtle and rare. Most of the time people are not sick. That means monitoring would have to be done continuously. You have to measure all the time because if you only measure once a year when people visit the doctor—or in men’s cases, once a decade—you’re going to miss huge swaths of the possibility of detecting disease early. So we have to make a continuous monitoring and measuring device. Since it’s continuous, it has to be something people wear, right? Can you imagine if you had to carry a sixty-pound thing around with a radar dish on your head and poke yourself with needles every hour? People just wouldn’t do it.

So the radical solution was to move away from the episodic, ‘Wait ‘til you feel a big lump in your chest before you go into the doctor’ approach, and do a continuous measurement of key biological markers through non-invasive devices. And we would do that by miniaturizing electronics. We can make a little computer chip which has three hundred and sixty thousand transistors on it, yet it’s the size of a piece of glitter. One of the other ways is to functionalize nanoparticles. Nanoparticles are the smallest engineered particles, the smallest engineered machines or things that you can make. Nature does its business on the molecular level or the cellular level. But for two thousand years we’ve looked at medicine at the organ or the organism level. That’s not the right way to do it. Imagine that you’re trying to describe the Parisian culture by flying over Paris in an airplane. You can describe the way the city looks and there’s a big tower and a river down the middle. But it’s really, really hard to opine or understand the culture from doing that. It’s the same thing when we look at systems—you can see that there’s a complex system, but unless you’re down at the level where the transactions occur, it’s very hard for you to imagine how it works.”

