Erik Brynjolfsson


As someone consumed by robotics, automation, and the potential for technological unemployment and its societal and political implications, I read as many books as possible on the topic, and I feel certain that The Second Machine Age, the 2014 title coauthored by Andrew McAfee and Erik Brynjolfsson, is the best of the lot. If you’re just beginning to think about these issues, start right there.

In his Financial Times blog, McAfee, who believes this time is different and that the Second Machine Age won’t resemble the Industrial Age, has published a post about an NPR debate on the subject with MIT economist David Autor, who disagrees. An excerpt: 

Over the next 20-40 years, which was the timeframe I was looking at, I predicted that vehicles would be driving themselves; mines, factories, and farms would be largely automated; and that we’d have an extraordinarily abundant economy that didn’t have anything like the same bottomless thirst for labour that the Industrial Era did.

As expected, I found David’s comments in response to this line of argument illuminating. He said: “If we’d had this conversation 100 years ago I would not have predicted the software industry, the internet, or all the travel or all the experience goods … so I feel it would be rather arrogant of me to say I’ve looked at the future and people won’t come up with stuff … that the ideas are all used up.”

This is exactly right. We are going to see innovation, entrepreneurship, and creativity that I can’t even begin to imagine (if I could, I’d be an entrepreneur or venture capitalist myself). But all the new industries and companies that spring up in the coming years will only use people to do the work if they’re better at it than machines are. And the number of areas where that is the case is shrinking — I believe rapidly.•


In a Big Think video, Andrew McAfee explains how automation is coming for your collar, white or blue, limo driver and lawyer alike. He leaves off by talking about new industries being created as old ones are destroyed, but from his writing in The Second Machine Age, the book he co-authored with Erik Brynjolfsson, it’s clear he fears the shortfall between old and new may be significant and that society could be in for a bumpy transition.


Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, a deep analysis of the economic and political ramifications of Weak AI in the 21st century, was one of the five best books I read in 2014, a really rich year for titles of all kinds. I pretty much agree with the authors’ summation that there’s a plenitude waiting at the other end of the fast-approaching proliferation of automation, though the intervening decades will pose a serious societal challenge. In a post at his Financial Times blog, McAfee reconsiders, if only somewhat, his reluctance to join in the Hawking-Bostrom-Musk AI anxiety. An excerpt:

The group came together largely to discuss AI safety — the challenges and problems that might arise if digital systems ever become superintelligent. I wasn’t that concerned about AI safety coming into the conference, for reasons that I have written about previously. So did I change my mind?

Maybe a little bit. The argument that we should be concerned about any potentially existential risks to humanity, even if they’re pretty far in the future and we don’t know exactly how they’ll manifest themselves, is a fairly persuasive one. However, I still feel that we’re multiple “Watson and Crick moments” away from anything we need to worry about, so I haven’t signed the open letter on research priorities that came out in the wake of the conference — at least not yet. But who knows how quickly things might change?

At the gathering, in answer to this question I kept hearing variations of “quicker than we thought.” In robotic endeavours as diverse as playing classic Atari video games, competing against the top human players in the Asian board game Go, creating self-driving cars, parsing and understanding human speech, folding towels and matching socks, the people building AI to do these things said that they were surprised at the pace of their own progress. Their achievements outstripped not only their initial timelines, they said, but also their revised ones.

Why is this? My short answer is that computers keep getting more powerful, the available data keeps getting broader (and data is the lifeblood of science), and the geeks keep getting smarter about how to do their work. This is one of those times when a field of human inquiry finds itself in a virtuous cycle and takes off.•


Andrew McAfee, co-author with Erik Brynjolfsson of The Second Machine Age, believes that Weak AI will destabilize employment for decades, but he doesn’t think species-threatening Artificial Intelligence is just around the bend. From his most recent Financial Times blog post:

“AI does appear to be taking off: after decades of achingly slow progress, computers have in the past few years demonstrated superhuman ability, from recognising street signs in pictures and diagnosing cancer to discerning human emotions and playing video games. So how far off is the demon?

In all probability, a long, long way away; so long, in fact, that the current alarmism is at best needless and at worst counterproductive. To see why this is, an analogy to biology is helpful.

It was clear for a long time that important characteristics of living things (everything from the colour of pea plant flowers to the speed of racehorses) were passed down from parents to their offspring, and that selective breeding could shape these characteristics. Biologists hypothesised that units labelled ‘genes’ were the agents of this inheritance, but no one knew what genes looked like or how they operated. This mystery was solved in 1953 when James Watson and Francis Crick published their paper describing the double helix structure of the DNA molecule. This discovery shifted biology, giving scientists almost infinitely greater clarity about which questions to ask and which lines of inquiry to pursue.

The field of AI is at least one ‘Watson and Crick moment’ away from being able to create a full artificial mind (in other words, an entity that does everything our brain does). As the neuroscientist Gary Marcus explains: ‘We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.’ We also do not have any clear idea how a human child is able to know so much about the world — that is a cat, that is a chair — after being exposed to so few examples. We do not know exactly what common sense is, and it is fiendishly hard to reduce to a set of rules or logical statements. The list goes on and on, to the point that it feels like we are many Watson and Crick moments away from anything we need to worry about.”


The unemployment rate is falling in America, but wages aren’t rising in most sectors, which is counterintuitive. Two explanatory notes about U.S. employment in the aftermath of the 2008 economic collapse, one from Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age, and the other from Derek Thompson’s Atlantic article “The Rise of Invisible Unemployment.”

____________________________

From Brynjolfsson and McAfee:

“A few years ago we had a very candid discussion with one CEO, and he explained that he knew for over a decade that advances in information technology had rendered many routine information-processing jobs superfluous. At the same time, when profits and revenues are on the rise, it can be hard to eliminate jobs. When the recession came, business as usual obviously was not sustainable, which made it easier to implement a round of streamlining and layoffs. As the recession ended and profits and demand returned, the jobs doing routine work were not restored. Like so many other companies in recent years, his organization found it could use technology to scale up without the workers.”

____________________________

From Thompson:

“3. The rise of invisible work is too large to ignore.

By ‘invisible work,’ I mean work done by American companies that isn’t done by American workers. Globalization and technology are allowing corporations to expand productivity, which shows up in earnings reports and stock prices and other metrics that analysts typically associate with a healthy economy. But globalization and technology don’t always show up in US wage growth because they often represent alternatives to US-based jobs. Corporations have used the recession and the recovery to increase profits by expanding abroad, hiring abroad, and controlling labor costs at home. It’s a brilliant strategy to please investors. But it’s an awful way to contribute to domestic wage growth.”


Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age, a first-rate look at the technological revolution’s complicated short- and mid-term implications for economics, is one of the best books I’ve read in 2014. The authors make a compelling case that the Industrial Revolution bent time more substantially than anything humans had previously done, and that we’re living through a similarly dramatic departure right now, one that may prove more profound than the first, for both good and bad reasons. In a post at his new Financial Times blog, McAfee takes on Peter Thiel’s contention that monopolies are an overall win for society. An excerpt:

“His provocation in Zero to One is that tech monopolies are generally good news since they spend heavily to keep innovating (and sometimes do cool things unrelated to their main businesses such as building driverless cars) and these innovations benefit all of us. If they stop investing and innovating, or if they miss something big, they quickly become irrelevant.

For example, Microsoft’s dominance of the PC industry was once so worrying the US government went after it in an antitrust battle that lasted two decades. Microsoft still controls more than 75 per cent of the market for desktop operating systems today, but nobody is now worried about the company’s ability to stifle tech innovation. Thiel paraphrases Leo Tolstoy’s most famous sentence: ‘All happy companies are different: each one earns a monopoly by solving a unique problem. All failed companies are the same: they failed to escape competition.’

I like Thiel’s attempt to calm the worries about today’s tech giants. Big does not always mean bad and, in the high-tech industries, big today certainly does not guarantee big tomorrow. But I’m not as blithe about monopolies as Thiel. The US cable company Comcast qualifies as a tech monopoly (it’s my only choice for a fast internet service provider) and I struggle mightily to perceive any benefit to consumers and society from its power. And there are other legitimate concerns about monopsonists (monopoly buyers), media ownership concentration and so on.

I once heard the Yale law professor Stephen Carter lay down a general rule: we should be vigilant about all great concentrations of power. We won’t need to take action against all of them but nor should we assume that they’ll always operate to our benefit.”


In his 1970 Apollo 11 account, Of a Fire on the Moon, Norman Mailer realized that his rocket wasn’t the biggest after all, that the mission was a passing of the torch, that technology, an expression of the human mind, had diminished its creators. “Space travel proposed a future world of brains attached to wires,” Mailer wrote, his ego having suffered a TKO. And just as the Space Race ended, the greater race began, the one between carbon and silicon, and it’s really just a matter of time before the pace grows too brisk for humans.

Supercomputers will ultimately be a threat to us, but we’re certainly doomed without them, so we have to navigate the future as best we can, even if it’s one not entirely within our control. Gary Marcus addresses this and other issues in his latest New Yorker blog piece, “Why We Should Think About the Threat of Artificial Intelligence.” An excerpt:

“It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.

“For some people, that future is a wonderful thing. [Ray] Kurzweil has written about a rapturous singularity in which we merge with machines and upload our souls for immortality; Peter Diamandis has argued that advances in A.I. will be one key to ushering in a new era of ‘abundance,’ with enough food, water, and consumer gadgets for all. Skeptics like Erik Brynjolfsson and I have worried about the consequences of A.I. and robotics for employment. But even if you put aside the sort of worries about what super-advanced A.I. might do to the labor market, there’s another concern, too: that powerful A.I. might threaten us more directly, by battling us for resources.

Most people see that sort of fear as silly science-fiction drivel—the stuff of The Terminator and The Matrix. To the extent that we plan for our medium-term future, we worry about asteroids, the decline of fossil fuels, and global warming, not robots. But a dark new book by James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, lays out a strong case for why we should be at least a little worried.

Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro’s words, ‘if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,’ in order to obtain more resources for whatever goals it might have.”
