Andrew McAfee

You are currently browsing articles tagged Andrew McAfee.


In an excellent Five Books interview, writer Calum Chace suggests a quintet of titles on the topic of Artificial Intelligence, four of which I’ve read. In recommending The Singularity Is Near, he defends the author Ray Kurzweil against charges of techno-quackery, though the futurist’s predictions have grown more desperate and fantastic as he’s aged. It’s not that what he predicts can’t ever be done, but his timelines seem to me way too aggressive.

Nick Bostrom’s Superintelligence, another choice, is a very academic work, though an important one. Interesting that Bostrom thinks advanced AI is a greater existential threat to humans than even climate change. (I hope I’ve understood the philosopher correctly in that interpretation.) The next book is Martin Ford’s Rise of the Robots, which I enjoyed, but I prefer Chace’s fourth choice, Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, which covers the same terrain of technological unemployment with, I think, greater rigor and insight. The final suggestion is one I haven’t read, Greg Egan’s sci-fi novel Permutation City, which concerns intelligence uploading and wealth inequality.

An excerpt about Kurzweil:

Question:

Let’s talk more about some of these themes as we go through the books you’ve chosen. The first one on your list is The Singularity is Near, by Ray Kurzweil. He thinks things are moving along pretty quickly, and that a superintelligence might be here soon. 

Calum Chace:

He does. He’s fantastically optimistic. He thinks that in 2029 we will have AGI. And he’s thought that for a long time, he’s been saying it for years. He then thinks we’ll have an intelligence explosion and achieve uploading by 2045. I’ve never been entirely clear what he thinks will happen in the 16 years in between. He probably does have quite detailed ideas, but I don’t think he’s put them to paper. Kurzweil is important because he, more than anybody else, has made people think about these things. He has amazing ideas in his books—like many of the ideas in everybody’s books they’re not completely original to him—but he has been clearly and loudly propounding the idea that we will have AGI soon and that it will create something like utopia. I came across him in 1999 when I read his book, Are We Spiritual Machines? The book I’m suggesting here is The Singularity is Near, published in 2005. The reason why I point people to it is that it’s very rigorous. A lot of people think Kurzweil is a snake-oil salesman or somebody selling a religious dream. I don’t agree. I don’t agree with everything he says and he is very controversial. But his book is very rigorous in setting out a lot of the objections to his ideas and then tackling them. He’s brave, in a way, in tackling everything head-on, he has answers for everything. 

Question:

Can you tell me a bit more about what ‘the singularity’ is and why it’s near?

Calum Chace:

The singularity is borrowed from the world of physics and math where it means an event at which the normal rules break down. The classic example is a black hole. There’s a bit of radiation leakage but basically, if you cross it, you can’t get back out and the laws of physics break down. Applied to human affairs, the singularity is the idea that we will achieve some technological breakthrough. The usual one is AGI. The machine becomes as smart as humans and continues to improve and quickly becomes hundreds, thousands, millions of times smarter than the smartest human. That’s the intelligence explosion. When you have an entity of that level of genius around, things that were previously impossible become possible. We get to an event horizon beyond which the normal rules no longer apply.

I’ve also started using it to refer to a prior event, which is the ‘economic singularity.’ There’s been a lot of talk, in the last few months, about the possibility of technological unemployment. Again, it’s something we don’t know for sure will happen, and we certainly don’t know when. But it may be that AIs—and to some extent their peripherals, robots—will become better at doing any job than a human. Better, and cheaper. When that happens, many or perhaps most of us can no longer work, through no fault of our own. We will need a new type of economy.  It’s really very early days in terms of working out what that means and how to get there. That’s another event that’s like a singularity — in that it’s really hard to see how things will operate at the other side.•



Technology won’t make us poorer, at least not in the aggregate. Distribution could be a thorny problem, but there are worse things than having to come up with political solutions to close a yawning wealth gap in a time of plenty.

In his latest Financial Times column, Andrew McAfee focuses on a different issue that concerns him about technological unemployment: the devil and idle (non-robotic) hands. He cites the alarming Case-Deaton findings of a spike in the mortality rate of white, middle-aged Americans, believing the collapse of industrial jobs and the collapse of communities are a matter of causation, with the former prompting the latter.

It’s difficult to know for sure if that’s so, but it’s possible McAfee is on to something. Already it seems we’ve become too much a nation anesthetized by prescription painkillers, fantasy football and smartphones, not willing to take a good look in the mirror, unless it’s the black mirror. Sure, there’s nothing new in feeling lost, but you only get to find yourself through a life lived with purpose. My best guess is that people in a less work-centric world will eventually find new kinds of purpose, but the transition may be a bumpy one.

An excerpt:

So what happens when the industrial-era jobs that underpinned the middle class start to go away? Voltaire offered a prescient caution when he observed that: “Work saves us from three great evils: boredom, vice, and need.”

Of the three of these, I’m the least worried about need. Trade and technological progress, after all, make a society wealthier in aggregate. The problem that they bring is not one of scarcity — of not enough to go around. Instead, they bring up thorny questions of allocation.

But rather than spending time on that issue here (if you’re interested, Erik Brynjolfsson and I dedicate a lot of our book The Second Machine Age to it), I want to focus on Voltaire’s other two evils, boredom and vice. How bad are they? How worried should we be about them?

I sometimes hear the argument that we shouldn’t be that worried at all. If we don’t need people’s labour, this logic goes, why should we care what they do with their time? Why should traditional notions of boredom and vice matter? If people want to drink, take drugs, engage in casual sex or play video games all day, where’s the harm? These are not the most conventionally respectable or productive activities, but why should we let convention continue to hold sway?

 



If we’re talking “ever,” then of course diagnostics and other key medical functions will be revolutionized by computer hardware, which continues to grow smaller, faster and cheaper. But will it be later rather than sooner? Measurement and maintenance of biological functions are much more challenging tasks to perform than simply using a smartphone to order an Uber or a burger.

In an FT blog post, Andrew McAfee addresses concerns that the health sector is being left behind in the Digital Age. An excerpt:

All the gear packed into most modern phones — compasses, accelerometers, gyroscopes, thermometers, WiFi and cellular antennas, and so on — have been getting better, smaller and cheaper at a rate that makes Moore’s Law look pokey by comparison. It’s easy to conclude that this is just what happens when the protean properties of silicon are brought to bear on an industry.

At a private event I attended recently, however, the CEO of a global pharmaceutical company said that sensors for healthcare were not improving quickly enough, and that we don’t yet have the gear we need for next-generation diagnosis and monitoring. The current turmoil at the US blood diagnostics start-up Theranos seems to support this view. A damning exposé in the Wall Street Journal revealed (and Theranos has confirmed) that the company uses industry-standard machines instead of its own equipment for most of the blood work it performs, and that its proprietary methods have so far only been approved by the US Food and Drug Administration for a single test.

So will the sensor revolution skip healthcare? Will our bodies not ever be brought into the “internet of things”? I think the answer to these questions is “no.” We’ll get great gear in this area, but it might take a while.•


As has been said before, the problem with technology is one of distribution, not scarcity. Not a small challenge, of course.

We’ll need to work our way through what will likely be a wealthier if lopsided aggregate, but we all stand to gain in a more vital way: environmentally. The new tools, through choice and some good fortune, are almost all designed to make the world greener, something we desperately need if we’re to snake our way through the Anthropocene.

In Andrew McAfee’s latest post at his excellent Financial Times blog, he pivots off of Jesse Ausubel’s “The Return of Nature,” an essay which says that technological progress and information becoming the coin of the realm have led to a “dematerialization process” in America that is far kinder ecologically. Remember during the 1990s when everyone was freaking out about how runaway crime would doom society even as the problem had quietly (and mysteriously) begun a marked decline? Ausubel argues that a parallel situation is currently occurring with precious resources.

Two excerpts follow: 1) Ausubel asserts that a growing U.S. population hasn’t led to a spike in resource use, and 2) McAfee writes that the dematerialization process may explain some of the peculiarities of the economy.

______________________________

From Ausubel:

In addition to peak farmland and peak timber, America may also be experiencing peak use of many other resources. Back in the 1970s, it was thought that America’s growing appetite might exhaust Earth’s crust of just about every metal and mineral. But a surprising thing happened: even as our population kept growing, the intensity of use of the resources began to fall. For each new dollar in the economy, we used less copper and steel than we had used before — not just the relative but also the absolute use of nine basic commodities, flat or falling for about 20 years. By about 1990, Americans even began to use less plastic. America has started to dematerialize. 

The reversal in use of some of the materials so surprised me that Iddo Wernick, Paul Waggoner, and I undertook a detailed study of the use of 100 commodities in the United States from 1900 to 2010. One hundred commodities span just about everything from arsenic and asbestos to water and zinc. The soaring use of many resources up to about 1970 makes it easy to understand why Americans started Earth Day that year. Of the 100 commodities, we found that 36 have peaked in absolute use, including the villainous arsenic and asbestos. Another 53 commodities have peaked relative to the size of the economy, though not yet absolutely. Most of them now seem poised to fall. They include not only cropland and nitrogen, but even electricity and water. Only 11 of the 100 commodities are still growing in both relative and absolute use in America. These include chickens, the winning form of meat. Several others are elemental vitamins, like the gallium and indium used to dope or alloy other bulk materials and make them smarter. …

Much dematerialization does not surprise us, when a single pocket-size smartphone replaces an alarm clock, flashlight, and various media players, along with all the CDs and DVDs.

But even Californians economizing on water in the midst of a drought may be surprised at what has happened to water withdrawals in America since 1970. Expert projections made in the 1970s showed rising water use to the year 2000, but what actually happened was a leveling off. While America added 80 million people — the population of Turkey — American water use stayed flat.•
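Ausubel’s absolute-versus-relative distinction is easy to make concrete. Here is a minimal Python sketch, with invented numbers rather than his data: call a commodity’s use “peaked absolutely” if raw use tops out before the end of the series, and “peaked relatively” if use per dollar of GDP has.

```python
# Toy illustration of Ausubel's peak categories (invented numbers, not his data).

def has_peaked(series, tolerance=0.02):
    """True if the series' maximum occurs meaningfully before its final value."""
    return series[-1] < max(series) * (1 - tolerance)

# Hypothetical decade-by-decade figures for one commodity, 1970-2010.
use = [10.0, 12.0, 13.5, 14.5, 15.0]           # absolute use, arbitrary units
gdp = [1.0, 1.6, 2.4, 3.5, 4.6]                # real GDP, arbitrary units
intensity = [u / g for u, g in zip(use, gdp)]  # use per unit of GDP

# This commodity behaves like Ausubel's middle group of 53: still growing in
# absolute terms, but long past its peak relative to the size of the economy.
print("peaked absolutely:", has_peaked(use))        # False
print("peaked relatively:", has_peaked(intensity))  # True
```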

______________________________

From McAfee:

Software, sensors, data, autonomous machines and all the other digital tools of the second machine age allow us to use a lot fewer atoms throughout the economy. Precision agriculture enables great crop yields with much less water and fertiliser. Cutting-edge design software envisions buildings that are lighter and more energy efficient than any before. Robot-heavy warehouses can pack goods very tightly, and so be smaller. Autonomous cars, when (not if) they come, will mean fewer vehicles in total and fewer parking garages in cities. Drones will replace delivery trucks. And so on.

The pervasiveness of this process, which Mr Ausubel labels “dematerialisation,” might well be part of the reason that business investment has been so sluggish even in the US, where profits and overall growth have been relatively robust. Why build a new factory, after all, if a few new computer-controlled machine tools and some scheduling software will allow you to boost output enough from existing ones? And why build a new data centre to run that software when you can just put it all in the cloud?•

 


I’m just old enough to have been dragged as a small child to the final Automat in Manhattan during its last, sad days, before it was sacrificed at the feet of Ray Kroc, as if it were just one more cow.

In its heyday, despite the lack of servers and cashiers, the Automat was an amazingly social experience and an especially democratic one, with people of all classes and kinds rubbing elbows over cheap turkey sandwiches and cheaper coffee. You could sit there forever. Al Pacino has spoken many times of how he misses the welter of people, the strands of surprising conversations. You don’t get that at McDonald’s or Starbucks. It’s just a different vibe (and in the latter case, price point).

The Automat was the past…and, perhaps, prelude. Well, to some degree. I’ve blogged before about Eatsa (here and here), the so-called “digital Automat” which recently opened in San Francisco. It’s disappeared all workers from the front of the restaurant and probably, in time, from back-room preparation. It certainly doesn’t bode well for fast-casual employees, but what of the social dynamic? 

In his latest insightful Financial Times blog post about how life is changing in the Second Machine Age, Andrew McAfee thinks about the meaning of Eatsa in our “interesting and uncertain” era. He’s not concerned about wait staff being disappeared from his conversations but acknowledges the restaurant, whose model will likely spread, is not a good sign for Labor. An excerpt:

So is this just an updated version of the old automats, with iPads replacing coin slots, or is it something more? There are indications that Eatsa’s founders want it to be the start of something legitimately new: a close to 100 per cent automated restaurant. Food preparation there is already highly optimised and standardised, and it’s probably not a coincidence that the location’s first general manager had a background in robotics.

But the fact that the restaurant’s “front of house” (ie the dining area and customer interactions) are already virtually 100 per cent automated is more interesting to me than the question of whether the “back of house” (the kitchen) ever will be. Interesting because as front of the house automation spreads, it’s going to put to the test one of the most widely held notions about work in the second machine age: that there will always be lots of service jobs because we desire a lot of human interaction.

I agree with the second half of that statement, but I’m not so sure about the first. We are a deeply social species, and even an introvert like me enjoys spending time with friends and loved ones in the physical world. I’ve also learnt to value business lunches and dinners (even though I’d rather be off by myself reading or writing) because they’re an important part of how work advances.

But in the great majority of cases, when I’m out I don’t value the interactions with the waiting staff and other service workers. They’re not unpleasant or terribly burdensome, but they do get in the way of what I want from the restaurant experience: to eat well, and to talk to my tablemates.•

 


There are many reasons, some more valid than others, that people are wary of so-called free Internet services like Facebook and Google, those companies of great utility which make money not through direct fees but by collecting our information and encouraging us to create content we’re not paid for.

Foremost, there are fears about surveillance, which I think are very valid. Hacks have already demonstrated how porous the world is now and what’s to come. More worrisome, beyond the work of rogue agents it’s clear the companies themselves cooperated in myriad ways with the NSA in handing over intel, some of which may have been necessary and most of which is troubling. Larry Page has said that we should trust the “good companies” with our information, but we shouldn’t trust any of them. Of course, there’s almost no alternative but to allow them into our lives and play by their rules.

Of course, the government isn’t alone in desiring to learn more about us. Advertisers certainly want to and long have, but there’s never before been this level of accessibility, this collective brain to be picked. These companies aren’t just looking to peddle information but also procure it. Their very existence depends on coming up with better and subtler ways of quantifying us.

I think another reason these businesses give pause isn’t because of something they actually do but what they remind us of: our anxieties about the near-term future of Labor. By getting us to “work” for nothing and create content, they tell us that even information positions have been reduced, discounted. The dwindling of good-paying jobs, the Gig Economy and the fall of the middle class all seem to be encapsulated in this new arrangement. When a non-profit like Wikipedia does it, it can be destabilizing but doesn’t seem sinister. The same can’t be said for Silicon Valley giants.

In his latest Financial Times blog post, Andrew McAfee makes an argument in favor of these zero-cost services, which no doubt offer value, though I believe he gives short shrift to privacy concerns.

An excerpt:

Web ads are much more precisely targeted at me because Google and Facebook have a lot of information about me. This thrills advertisers, and it’s also OK with me; once in a while I actually see something interesting. Yes, we are “the product” in ad-supported businesses. Only the smallest children are unaware of this.

The hypothetical version of the we’re-being-scammed argument is that the giant tech companies are doing or planning something opaque and sinister with all that data that we’re giving them. As law professor Tim Wu wrote recently about Facebook: “[T]he data is also an asset. The two-hundred-and-seventy-billion-dollar valuation of Facebook, which made a profit of three billion dollars last year, is based on some faith that piling up all of that data has value in and of itself… One reason Mark Zuckerberg is so rich is that the stock market assumes that, at some point, he’ll figure out a new way to extract profit from all the data he’s accumulated about us.”

It’s true that all the information about me and my social network that these companies have could be used to help insurers and credit-card companies pick customers and price discriminate among them. But they already do that, and do it within the confines of a lot of regulation and consumer protection. I’m just not sure how much “worse” it would get if Google, Facebook and others started piping them our data.•

 


The new Foreign Affairs issue on automation, which I’ve excerpted several times (here and here and here), would not have been complete without a piece by Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age, an excellent book that asks all the right questions about the rapid growth of robotics and tries to answer them as well.

In “Will Humans Go the Way of Horses?” the duo wisely points out that regardless of machine progress, we aren’t likely to go the way of our equine brothers. There’s some chance superintelligence might obliterate humans one day in the very long run, but interpersonal skills, common sense, political will and revolution are some of the tools the authors believe may slow or even mitigate the lower-case calamities on the horizon, keeping us from the stable or glue factory, even if we’re no longer the heart of production. An excerpt:

It’s possible, however, to imagine a “robot dividend” that created more widespread ownership of robots and similar technologies, or at least a portion of the financial benefits they generated. The state of Alaska provides a possible template: courtesy of the Alaska Permanent Fund, which was established in 1976, the great majority of the state’s residents receive a nontrivial amount of capital income every year. A portion of the state’s oil revenues is deposited into the fund, and each October, a dividend from it is given to each eligible resident. In 2014, this dividend was $1,884.

Even if human labor becomes far less necessary overall, people, unlike horses, can choose to prevent themselves from becoming economically irrelevant.


It’s important to note that the amendment to the Alaska state constitution establishing the Permanent Fund passed democratically, by a margin of two to one. That Alaskans chose to give themselves a bonus highlights another critical difference between humans and horses: in many countries today, humans can vote. In other words, people can influence economic outcomes, such as wages and incomes, through the democratic process. This can happen directly, through votes on amendments and referendums, or indirectly, through legislation passed by elected representatives. It is voters, not markets, who are picking the minimum wage, determining the legality of sharing-economy companies such as Uber and Airbnb, and settling many other economic issues.

In the future, it’s not unreasonable to expect people to vote for policies that will help them avoid the economic fate of the horse.•
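The Permanent Fund’s mechanics are simple enough to sketch as a “robot dividend” template. A toy Python model follows; every input below is an assumption invented for illustration, with only the roughly $1,884 payout figure coming from the excerpt above.

```python
# Toy model of an Alaska-style resource dividend ("robot dividend" template).
# All inputs are assumptions for illustration; only the ~$1,884 target figure
# comes from the excerpt above.

def annual_dividend(fund_value, payout_rate, eligible_residents):
    """Per-resident dividend: a fixed slice of the fund, split evenly."""
    return fund_value * payout_rate / eligible_residents

fund_value = 51_000_000_000   # fund principal in dollars (assumed)
payout_rate = 0.024           # fraction distributed each October (assumed)
residents = 650_000           # eligible Alaskans (assumed)

print(f"${annual_dividend(fund_value, payout_rate, residents):,.0f}")
# -> $1,883, roughly the 2014 dividend cited above
```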


Excerpts from a pair of recent Harvard Business Review articles which analyze the increasing insinuation of robots into the workplace. The opening of Walter Frick’s “When Your Boss Wears Metal Pants” examines the emotional connection we quickly make with robots who can feign social cues. In “The Great Decoupling,” Amy Bernstein and Anand Raman discuss technological unemployment, among other topics, with Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age.

___________________________

From Frick:

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society.•

___________________________

From Bernstein and Raman:

Harvard Business Review:

As the Second Machine Age progresses, will there be any jobs for human beings?

Andrew McAfee:

Yes, because humans are still far superior in three skill areas. One is high-end creativity that generates things like great new business ideas, scientific breakthroughs, novels that grip you, and so on. Technology will only amplify the abilities of people who are good at these things.

The second category is emotion, interpersonal relations, caring, nurturing, coaching, motivating, leading, and so on. Through millions of years of evolution, we’ve gotten good at deciphering other people’s body language…

Erik Brynjolfsson:

…and signals, and finishing people’s sentences. Machines are way behind there.

The third is dexterity, mobility. It’s unbelievably hard to get a robot to walk across a crowded restaurant, bus a table, take the dishes back into the kitchen, put them in the sink without breaking them, and do it all without terrifying the restaurant’s patrons. Sensing and manipulation are hard for robots.

None of those is sacrosanct, though; machines are beginning to make inroads into each of them.

Andrew McAfee:

We’ll continue to see the middle class hollowed out and will see growth at the low and high ends. Really good executives, entrepreneurs, investors, and novelists—they will all reap rewards. Yo-Yo Ma won’t be replaced by a robot anytime soon, but financially, I wouldn’t want to be the world’s 100th-best cellist.•


Hod Lipson loves robots, but love is complicated. 

The robotics engineer is among the growing chorus of those concerned about technological unemployment leading to social unrest, something Norbert Wiener warned of more than 60 years ago. Is it, at long last, in this Digital Age, happening?

In a long-form MIT Technology Review article, David Rotman wonders if the new technologies may be contributing to wealth inequality and could ultimately lead to an even greater divide, while considering the work of analysts on both sides of the automation issue, including Sir Tony Atkinson, Martin Ford, Andrew McAfee and David Autor. The opening:

The way Hod Lipson describes his Creative Machines Lab captures his ambitions: “We are interested in robots that create and are creative.” Lipson, an engineering professor at Cornell University (this July he’s moving his lab to Columbia University), is one of the world’s leading experts on artificial intelligence and robotics. His research projects provide a peek into the intriguing possibilities of machines and automation, from robots that “evolve” to ones that assemble themselves out of basic building blocks. (His Cornell colleagues are building robots that can serve as baristas and kitchen help.) A few years ago, Lipson demonstrated an algorithm that explained experimental data by formulating new scientific laws, which were consistent with ones known to be true. He had automated scientific discovery.

Lipson’s vision of the future is one in which machines and software possess abilities that were unthinkable until recently. But he has begun worrying about something else that would have been unimaginable to him a few years ago. Could the rapid advances in automation and digital technology provoke social upheaval by eliminating the livelihoods of many people, even as they produce great wealth for others?

“More and more computer-guided automation is creeping into everything from manufacturing to decision making,” says Lipson. In the last two years alone, he says, the development of so-called deep learning has triggered a revolution in artificial intelligence, and 3-D printing has begun to change industrial production processes. “For a long time the common understanding was that technology was destroying jobs but also creating new and better ones,” says Lipson. “Now the evidence is that technology is destroying jobs and indeed creating new and better ones but also fewer ones. It is something we as technologists need to start thinking about.”•


As someone consumed by robotics, automation, the potential for technological unemployment and its societal and political implications, I read as many books as possible on the topic, and I feel certain that The Second Machine Age, the 2014 title coauthored by Andrew McAfee and Erik Brynjolfsson, is the best of the lot. If you’re just beginning to think about these issues, start right there.

In his Financial Times blog, McAfee, who believes this time is different and that the Second Machine Age won’t resemble the Industrial Age, has published a post about an NPR debate on the subject with MIT economist David Autor, who disagrees. An excerpt: 

Over the next 20-40 years, which was the timeframe I was looking at, I predicted that vehicles would be driving themselves; mines, factories, and farms would be largely automated; and that we’d have an extraordinarily abundant economy that didn’t have anything like the same bottomless thirst for labour that the Industrial Era did.

As expected, I found David’s comments in response to this line of argument illuminating. He said: “If we’d had this conversation 100 years ago I would not have predicted the software industry, the internet, or all the travel or all the experience goods … so I feel it would be rather arrogant of me to say I’ve looked at the future and people won’t come up with stuff … that the ideas are all used up.”

This is exactly right. We are going to see innovation, entrepreneurship, and creativity that I can’t even begin to imagine (if I could, I’d be an entrepreneur or venture capitalist myself). But all the new industries and companies that spring up in the coming years will only use people to do the work if they’re better at it than machines are. And the number of areas where that is the case is shrinking — I believe rapidly.•


Here’s an example of what Andrew McAfee wrote about tech toys of the rich becoming tools of the masses: An article from Peter H. Lewis in the July 19, 1992 New York Times, which was prescient about the emergence of smartphones (if a decade too early), without realizing they’d be for everyone. Andy Grove, quoted in the piece, thought it all fantasy. An excerpt:

Sometime around the middle of this decade — no one is sure exactly when — executives on the go will begin carrying pocket-sized digital communicating devices. And although nobody is exactly sure what features these personal information gizmos will have, what they will cost, what they will look like or what they will be called, hundreds of computer industry officials and investors at the Mobile ’92 conference here last week agreed that the devices could become the foundation of the next great fortunes to be made in the personal computer business.

“We are writing Chapter 2 of the history of personal computers,” said Nobuo Mii, vice president and general manager of the International Business Machines Corporation’s entry systems division.

How rich is this lode? At one end of the spectrum is John Sculley, the chief executive of Apple Computer Inc., who says these personal communicators could be ‘the mother of all markets.’

At the other end is Andrew Grove, the chairman of the Intel Corporation, the huge chip maker based in Santa Clara, Calif. He says the idea of a wireless personal communicator in every pocket is “a pipe dream driven by greed.”

These devices are expected to combine the best features of personal computers, facsimile machines, computer networks, pagers, personal secretaries, appointment books, address books and even paperback books and pocket CD players — all in a hand-held box operated by pen, or even voice commands.

Stuck in traffic on a business trip, an executive carrying a personal communicator could send and receive electronic mail and facsimile messages from anywhere in the country. She could also call up a local map on a 3-inch by 5-inch screen, draw a line between her current position (confirmed by satellite positioning signals) and her intended destination, and the device would give her specific driving instructions (as well as real-time warnings about traffic jams or accidents). Certainly, these are just predictions for now, but they sure are fun to think about.•


Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s great Second Machine Age, recently argued in a Financial Times blog post that the economy’s behavior is puzzling these days. It’s difficult to find fault with that statement.

Inflation was supposed to be soaring by now, but it’s not. Technology was going to make production grow feverishly, but traditional measures don’t suggest that. Job growth and wages were supposed to return to normal once the financial clouds cleared, though that’s been largely a dream deferred. What gives?

In a sequel of sorts to that earlier post, McAfee returns to try to suss out part of the answer, which he feels might be that the new technologies have created an abundance which has suppressed inflation. That seems certain to be a feature of the future as 3D printers move to the fore, but has it already happened? And has this plenty made jobs scarcer and suppressed wages? An excerpt:

In a Tweetstorm late last year, venture capitalist Marc Andreessen argued that technological progress might be another important factor driving prices down. He wrote: “While I am a bull on technological progress, it also seems that much of that progress is price deflationary in nature, so even extremely rapid tech progress may not show up in GDP or productivity stats, even as it = higher real standards of living.”

Prof [Larry] Summers shot back quickly, noting: “It is… not clear how one would distinguish deflationary and inflationary progress. The price level reflects the value of goods in terms of money, so it is hard to analyze without thinking about monetary and financial conditions.” This is surely correct, but is Prof Summers being too dismissive of Mr Andreessen’s larger point? Can tech progress be contributing to price declines?

Moore’s law — that computer processing power doubles roughly every two years — has made computers themselves far cheaper. It has also pretty directly led to the shrinkage of industries as diverse as encyclopedias, recorded music, film photography and standalone GPS devices. An intriguing analysis by writer Chris Goodall found that the “UK began to reduce its consumption of physical resources in the early years of the last decade.” Technological progress, which by its nature allows us to do more with less, is a big part of this move past “peak stuff.”

It’s also probably a big part of the reason that corporate profits remain so high, even while overall economic growth stagnates.•


To be an early adopter in technology, you sometimes need to have as much money as vision. As Andrew McAfee notes in his latest Financial Times blog post, if you want to see how the 99% will soon live, just take a look at the 1%. No, the majority won’t soon have more money (less, probably), but the coveted goods and services of the privileged will probably soon become accessible to almost all.

Of course, the cheapening of these lifestyle choices, a further Walmartization of our economy, isn’t good for Labor. McAfee offers a remedy, if not a new one. An excerpt:

Of the many things I’ve learnt from Google’s chief economist Hal Varian, perhaps my favourite is his elegant and thrifty approach to prediction. “A simple way to forecast the future,” he says, “is to look at what rich people have today.” This works. Applying this method a few years ago would have led one to foresee the rise of Uber and the spread of smartphones around the world, to take just two examples.

Hal’s point is that tech progress quite quickly makes initially expensive things — both goods and services — cheaper, and so hastens their spread. Which is why this progress is the best economic news on the planet (I wish there were stiffer competition for that title these days).

So what do the rich have today that will soon spread widely? A recent article in the online magazine Matter probably holds a clue. Lauren Smiley’s “The Shut-In Economy” details the parade of delivery people and service providers that show up each evening at the apartment complexes that house San Francisco’s tech elite. Smiley writes that “Outside my building there’s always a phalanx of befuddled delivery guys… Inside, the place is stuffed with the goodies they bring.”•


Excerpts follow from two posts (one from Andrew McAfee at the Financial Times and one from the TED blog) that look at the progress of driverless cars, which have improved at a stunning pace since the Debacle in the Desert in 2004. Elements of driverless technology will be helpful, but they change the game in many ways–some wonderful, some concerning–only when they become completely autonomous. McAfee has been further convinced about the sector by recent developments.

_________________________

From McAfee:

Transportation. Most of us have heard of driverless cars by now. I had the chance to ride in one of Google’s in 2012. It was an experience that went from mind-bending to boring remarkably quickly; the car is such a good and stable driver that I quickly lost all sense of adventure while I was in it. Still, though, I was unprepared for how much progress has been made since then with autonomous road vehicles. Google project director Chris Urmson brought us up to speed with that company’s work, and made a compelling case that we should be striving not for more and better tech to assist human drivers, but instead to replace them. Doing so will save lives, open up opportunity to the blind and disabled and free us from a largely tedious task. And in response to the criticism that self-driving cars aren’t good at dealing with unanticipated events, he showed a video of what happened when one of his fleet encountered a woman in a wheelchair chasing a duck around in the street. The car responded beautifully; we in the audience lost our minds.•

From the TED blog:

Why we need self-driving cars. “In 1885, Carl Benz invented the automobile,” says Chris Urmson, Director of Self-Driving Cars at Google[x]. “A year later, he took it out for a test drive and, true story, promptly crashed it into a wall.” Throughout the history of the car, “We’ve been working around the least reliable part of the car: the driver.” Every year, 1.2 million people are killed on roads around the world. And there are two approaches to using machines to help solve that problem: driver assistance systems, which help make the driver better, and self-driving cars, which take over the art of driving. Urmson firmly believes that self-driving cars are the right approach. With simulations that break a road down to a series of lines, boxes and dots, he shows us how Google’s driverless cars handle all types of situations, from a turning truck to a woman chasing ducks through the street. Every day, these systems go through 3 million miles of simulation testing. “The urgency is so large,” says Urmson. “We’re looking forward to having this technology on the road.”•


At his Financial Times blog, Andrew McAfee identifies five keys to economic growth on which America may not be failing but is grading out as average or worse. We’re not hopeless by any means, but we need to be less myopic politically. An excerpt:

Despite their universally acknowledged importance, we’re not doing a good job on these.

Education: The primary school system in the US has been called the country’s best idea, but at present the country’s students are no better than middle of the pack internationally. There is alarming evidence that college students are often learning very little and there’s still too much focus on rote learning and mastering skills that technology is already quite good at.

Infrastructure: World-class roads, airports and networks are investment in the future and the foundation of strong growth. But the American Society of Civil Engineers gives the US an overall grade of D+ and internet speeds here are slower than in many other countries.

Entrepreneurship: Young businesses, especially fast-growing ones, are a prime source of new jobs. Unfortunately, entrepreneurship in the US has been on a slow, steady decline.

Immigration: Many of the world’s most talented and ambitious people want to come to the US to build their lives and careers, and the evidence is clear that immigrant-founded companies have been great job-creation engines. Yet our policies in this area are far too restrictive, and our procedures are nightmarishly bureaucratic.

Basic research: Companies tend to concentrate on applied research where they can capture the rewards from their efforts. This means that government has a role to play in supporting original, early-stage work for which the rewards are spread more broadly. Most of today’s tech marvels, from the internet to the smartphone, have a government programme somewhere back in their family tree. But funding for basic research in the US is on the decline as a share of gross domestic product.•


Every day, hundreds of millions of people all over the world create an astounding amount of free content for Facebook and Twitter and the like. It would be by far the largest sweatshop in the world except that even sweatshop laborers are paid a nominal amount. Sure, we get a degree of utility from such services, but we’ve essentially turned ourselves into unpaid volunteers for multibillion-dollar corporations. There is, perhaps, something evolutionary about such participation, the ants cooperating to piece together a colony, but from an economic standpoint, it’s a stunning turn of events, and it all pivots on the new technologies.

“When Robots Steal Our Jobs,” a BBC radio program about machines and automation being introduced into reliably white-collar fields like law and medicine, sums up this phenomenon really well with this fact: “Last year, we collectively spent nearly 500 million hours each day updating Facebook. That’s 25 times the amount of labor it took to build the Panama Canal. And we did it all for free.”

Andrew McAfee, co-author of The Second Machine Age, and David Graeber are among the voices heard. The latter thinks capitalism won’t survive automation, but perhaps there’ll be something worse (e.g., techno-fascism).

One thing I feel sure about in the aforementioned intersection of AI and medicine is that robotics will be handling the majority of surgery at some point in the future. 

The show plays a clip of Woody Allen doing stand-up in San Francisco in 1968, addressing a fear that began to take hold that decade: “My father was fired. He was technologically unemployed. My father worked for the same firm for 12 years. They replaced him with a tiny gadget this big that does everything that my father does but does it much better. The depressing thing is that my mother ran out and bought one.”


Paul Krugman is continually taken to task for predicting in 1998 that the Internet would be no more important economically than the fax machine by 2005. Culturally, of course, this new medium has been a watershed event. But he had a point on some level: the Internet–and computers, more broadly–still disappoint from a productivity perspective. Either that or all conventional measurements are insufficient to gauge this new machine. At his Financial Times blog, Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s wonderful The Second Machine Age, wonders about the confusing state of contemporary economics. An excerpt:

The economy’s behaviour is puzzling these days. No matter what you think is going on, there are some facts — important ones — that don’t fit your theory well at all, and/or some important things left unexplained.

For example, if you believe that technological progress is reshaping the economy (as Erik and I do) then you’ve got to explain why productivity growth is so low. As Larry Summers pointed out on the first panel, strong labour productivity growth is the first thing you’d expect to see if tech progress really were taking off and reshaping the economy, disrupting industries, hollowing out the middle class, and so on. So why has it been so weak for the past 10 years? Is it because of mismeasurement? William Baumol’s “Cost Disease” (the idea that all the job growth has come in manual, low-productivity sectors)? Or is it that recent tech progress is in fact economically unimpressive, as Robert Gordon and others believe?

If you believe that tech progress has not been that significant, however, you’ve got to explain why labour’s share of income is declining around the world.•


In a belated London Review of Books assessment of The Second Machine Age and Average Is Over, John Lanchester doesn’t really break new ground in considering Deep Learning and technological unemployment, but in his customarily lucid and impressive prose he crystallizes how quickly AI may remake our lives and labor in the coming decades. Two passages follow: The opening, in which he charts the course of how the power of a supercomputer ended up inside a child’s toy in a few short years; and a sequence about the way automation obviates workers and exacerbates income inequality.

__________________________________

In 1996, in response to the 1992 Russo-American moratorium on nuclear testing, the US government started a programme called the Accelerated Strategic Computing Initiative. The suspension of testing had created a need to be able to run complex computer simulations of how old weapons were ageing, for safety reasons, and also – it’s a dangerous world out there! – to design new weapons without breaching the terms of the moratorium. To do that, ASCI needed more computing power than could be delivered by any existing machine. Its response was to commission a computer called ASCI Red, designed to be the first supercomputer to process more than one teraflop. A ‘flop’ is a floating point operation, i.e. a calculation involving numbers which include decimal points (these are computationally much more demanding than calculations involving binary ones and zeros). A teraflop is a trillion such calculations per second. Once Red was up and running at full speed, by 1997, it really was a specimen. Its power was such that it could process 1.8 teraflops. That’s 18 followed by 11 zeros. Red continued to be the most powerful supercomputer in the world until about the end of 2000.

I was playing on Red only yesterday – I wasn’t really, but I did have a go on a machine that can process 1.8 teraflops. This Red equivalent is called the PS3: it was launched by Sony in 2005 and went on sale in 2006. Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.

The force at work here is a principle known as Moore’s law. This isn’t really a law at all, but rather the extrapolation of an observation made by Gordon Moore, one of the founders of the computer chip company Intel. By 1965, Moore had noticed that silicon chips had for a number of years been getting more powerful, in relation to their price, at a remarkably consistent rate. He published a paper predicting that they would go on doing so ‘for at least ten years’. That might sound mild, but it was, as Erik Brynjolfsson and Andrew McAfee point out in their fascinating book, The Second Machine Age, actually a very bold statement, since it implied that by 1975, computer chips would be five hundred times more powerful for the same price. ‘Integrated circuits,’ Moore said, would ‘lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles and personal portable communications equipment’. Right on all three. If anything he was too cautious.•
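Two numbers in that passage check out with trivial arithmetic, sketched below: 1.8 teraflops is 1.8 trillion floating-point operations a second (18 followed by 11 zeros), and annual doubling over the nine years from 1966 to 1975 compounds to 512, the order of Lanchester’s “five hundred times” (the one-year doubling period is my assumption, not the excerpt’s).

```python
# Checking two figures from the excerpt.

teraflop = 10 ** 12                 # one trillion floating-point ops per second
asci_red = 1.8 * teraflop
print(f"{asci_red:.1e} flops")      # 1.8e+12 -- i.e. 18 followed by 11 zeros

# Moore's extrapolation: assuming power per dollar doubles once a year,
# nine doublings (1966-1975) give about the "five hundred times" figure.
print(2 ** 9)                       # 512
```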

__________________________________

Note that in this future world, productivity will go up sharply. Productivity is the amount produced per worker per hour. It is the single most important number in determining whether a country is getting richer or poorer. GDP gets more attention, but is often misleading, since other things being equal, GDP goes up when the population goes up: you can have rising GDP and falling living standards if the population is growing. Productivity is a more accurate measure of trends in living standards – or at least, it used to be. In recent decades, however, productivity has become disconnected from pay. The typical worker’s income in the US has barely gone up since 1979, and has actually fallen since 1999, while her productivity has gone up in a nice straightish line. The amount of work done per worker has gone up, but pay hasn’t. This means that the proceeds of increased profitability are accruing to capital rather than to labour. The culprit is not clear, but Brynjolfsson and McAfee argue, persuasively, that the force to blame is increased automation.

That is a worrying trend. Imagine an economy in which the 0.1 per cent own the machines, the rest of the 1 per cent manage their operation, and the 99 per cent either do the remaining scraps of unautomatable work, or are unemployed. That is the world implied by developments in productivity and automation. It is Pikettyworld, in which capital is increasingly triumphant over labour. We get a glimpse of it in those quarterly numbers from Apple, about which my robot colleague wrote so evocatively. Apple’s quarter was the most profitable of any company in history: $74.6 billion in turnover, and $18 billion in profit. Tim Cook, the boss of Apple, said that these numbers are ‘hard to comprehend’. He’s right: it’s hard to process the fact that the company sold 34,000 iPhones every hour for three months. Bravo – though we should think about the trends implied in those figures. For the sake of argument, say that Apple’s achievement is annualised, so their whole year is as much of an improvement on the one before as that quarter was. That would give them $88.9 billion in profits. In 1960, the most profitable company in the world’s biggest economy was General Motors. In today’s money, GM made $7.6 billion that year. It also employed 600,000 people. Today’s most profitable company employs 92,600. So where 600,000 workers would once generate $7.6 billion in profit, now 92,600 generate $89.9 billion, an improvement in profitability per worker of 76.65 times. Remember, this is pure profit for the company’s owners, after all workers have been paid. Capital isn’t just winning against labour: there’s no contest. If it were a boxing match, the referee would stop the fight.•
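Lanchester’s 76.65 figure falls straight out of the per-worker division; note that it matches the $89.9 billion in his final sentence rather than the $88.9 billion annualisation a few lines earlier. A quick check using only the excerpt’s own numbers:

```python
# Recomputing the profit-per-worker comparison from the excerpt's own numbers.

gm_profit = 7.6e9            # GM's 1960 profit in today's money
gm_workers = 600_000
apple_profit = 89.9e9        # the annualised Apple profit used in the ratio
apple_workers = 92_600

gm_per_worker = gm_profit / gm_workers           # ~ $12,700 per worker
apple_per_worker = apple_profit / apple_workers  # ~ $970,800 per worker

print(f"{apple_per_worker / gm_per_worker:.2f}x")  # 76.65x, as Lanchester says
```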


In a Big Think video, Andrew McAfee explains how automation is coming for your collar, white or blue, limo driver and lawyer alike. He leaves off by talking about new industries being created as old ones are being destroyed, but from his writing in The Second Machine Age, the book he co-authored with Erik Brynjolfsson, it’s clear he fears the shortfall between old and new may be significant and society could be in for a bumpy transition.


Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, a deep analysis of the economic and political ramifications of Weak AI in the 21st century, was one of the five best books I read in 2014, a really rich year for titles of all kinds. I pretty much agree with the authors’ summation that there’s a plenitude waiting on the other side of the fast-approaching proliferation of automation, though the intervening decades will be a serious societal challenge. In a post at his Financial Times blog, McAfee reconsiders, if only somewhat, his reluctance to join in with the Hawking-Bostrom-Musk AI anxiety. An excerpt:

The group came together largely to discuss AI safety — the challenges and problems that might arise if digital systems ever become superintelligent. I wasn’t that concerned about AI safety coming into the conference, for reasons that I have written about previously. So did I change my mind?

Maybe a little bit. The argument that we should be concerned about any potentially existential risks to humanity, even if they’re pretty far in the future and we don’t know exactly how they’ll manifest themselves, is a fairly persuasive one. However, I still feel that we’re multiple “Watson and Crick moments” away from anything we need to worry about, so I haven’t signed the open letter on research priorities that came out in the wake of the conference — at least not yet. But who knows how quickly things might change?

At the gathering, in answer to this question I kept hearing variations of “quicker than we thought.” In robotic endeavours as diverse as playing classic Atari video games, competing against the top human players in the Asian board game Go, creating self-driving cars, parsing and understanding human speech, folding towels and matching socks, the people building AI to do these things said that they were surprised at the pace of their own progress. Their achievements outstripped not only their initial timelines, they said, but also their revised ones.

Why is this? My short answer is that computers keep getting more powerful, the available data keeps getting broader (and data is the lifeblood of science), and the geeks keep getting smarter about how to do their work. This is one of those times when a field of human inquiry finds itself in a virtuous cycle and takes off.•


A debate (by proxy) between Nicholas Carr and Andrew McAfee, two leading thinkers about the spread of automation, takes place in Zoë Corbyn’s Guardian article about Carr’s most recent book, The Glass Cage. I doubt the proliferation of Weak AI will ultimately be contained much beyond niches despite any good dissenting arguments. An excerpt:

As doctors increasingly follow automated diagnostic templates and architects use computer programs to generate their building plans, their jobs become duller. “At some point you turn people into computer operators – and that’s not a very interesting job,” Carr says. We now cede even moral choices to our technology, he says. The Roomba vacuum cleaning robot, for example, will unthinkingly hoover up a spider that we may have saved.

Not everyone buys Carr’s gloomy argument. People have always lamented the loss of skills due to technology: think about the calculator displacing the slide rule, says Andrew McAfee, a researcher at the MIT Sloan School of Management. But on balance, he says, the world is better off because of automation. There is the occasional high-profile crash – but greater automation, not less, is the answer to avoiding that.

Carr counters that we must start resisting the urge to increase automation unquestioningly. Reserving some tasks for humans will mean less efficiency, he acknowledges, but it will be worth it in the long run.•


Andrew McAfee, co-author with Erik Brynjolfsson of The Second Machine Age, believes that Weak AI will destabilize employment for decades, but he doesn’t think species-threatening Artificial Intelligence is just around the bend. From his most recent Financial Times blog post:

“AI does appear to be taking off: after decades of achingly slow progress, computers have in the past few years demonstrated superhuman ability, from recognising street signs in pictures and diagnosing cancer to discerning human emotions and playing video games. So how far off is the demon?

In all probability, a long, long way away; so long, in fact, that the current alarmism is at best needless and at worst counterproductive. To see why this is, an analogy to biology is helpful.

It was clear for a long time that important characteristics of living things (everything from the colour of pea plant flowers to the speed of racehorses) was passed down from parents to their offspring, and that selective breeding could shape these characteristics. Biologists hypothesised that units labelled ‘genes’ were the agents of this inheritance, but no one knew what genes looked like or how they operated. This mystery was solved in 1953 when James Watson and Francis Crick published their paper describing the double helix structure of the DNA molecule. This discovery shifted biology, giving scientists almost infinitely greater clarity about which questions to ask and which lines of inquiry to pursue.

The field of AI is at least one ‘Watson and Crick moment’ away from being able to create a full artificial mind (in other words, an entity that does everything our brain does). As the neuroscientist Gary Marcus explains: ‘We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.’ We also do not have any clear idea how a human child is able to know so much about the world — that is a cat, that is a chair — after being exposed to so few examples. We do not know exactly what common sense is, and it is fiendishly hard to reduce to a set of rules or logical statements. The list goes on and on, to the point that it feels like we are many Watson and Crick moments away from anything we need to worry about.”


At his Financial Times blog, Andrew McAfee talks about the plunging “red line” of Labor’s portion of earnings, accelerating in the wrong direction as automation permeates the workplace. Robotics, algorithms and AI will make companies more profitable and shareholders richer, but job seekers (and holders) will suffer. And going Luddite will help no one in the long run. An excerpt:

“I expect the red line to continue to fall as robots, artificial intelligence, 3D printing, autonomous vehicles, and the many other technologies that until recently were the stuff of science fiction, permeate industry after industry. Policies intended to keep these advances out of a country might halt the decline of the labour share for a while, but they’d also halt competitiveness pretty quickly, thus leaving both capital and labour worse off.

I think the continued decline of the labour share, brought on by tech progress, will be a central dynamic, if not the central dynamic, of the world’s economies and societies in the 21st century. It’s a story much more about what will happen to the livelihood of the 50th percentile worker than to the wealth of the 1 per cent. And a much more important story.”


The unemployment rate is falling in America, but wages aren’t rising in most sectors, which is counterintuitive. Two explanatory notes about U.S. employment in the aftermath of the 2008 economic collapse, one from Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age, and the other from Derek Thompson’s Atlantic article “The Rise of Invisible Unemployment.”

____________________________

From Brynjolfsson and McAfee:

“A few years ago we had a very candid discussion with one CEO, and he explained that he knew for over a decade that advances in information technology had rendered many routine information-processing jobs superfluous. At the same time, when profits and revenues are on the rise, it can be hard to eliminate jobs. When the recession came, business as usual obviously was not sustainable, which made it easier to implement a round of streamlining and layoffs. As the recession ended and profits and demand returned, the jobs doing routine work were not restored. Like so many other companies in recent years, his organization found it could use technology to scale up without the workers.”

____________________________

From Thompson:

“3. The rise of invisible work is too large to ignore.

By ‘invisible work,’ I mean work done by American companies that isn’t done by American workers. Globalization and technology is allowing corporations to expand productivity, which shows up in earnings reports and stock prices and other metrics that analysts typically associate with a healthy economy. But globalization and technology don’t always show up in US wage growth because they often represent alternatives to US-based jobs. Corporations have used the recession and the recovery to increase profits by expanding abroad, hiring abroad, and controlling labor costs at home. It’s a brilliant strategy to please investors. But it’s an awful way to contribute to domestic wage growth.”


One thing that drove me crazy during the 2012 Presidential debates was Mitt Romney myopically (and incorrectly) stating that the government’s loan to Tesla Motors was a boondoggle. Not only are such investments wise for business, they’re also matters of national security. Whatever country wins the race to alternative energy and robotics and AI will be the most secure. In a blog post at the Financial Times, Andrew McAfee handicaps the potential leaders of, as he calls it, the Second Machine Age. The opening:

“How much should it worry Germany that the world’s coolest car company no longer hails from that country?

This question occurred to me as I sat in a meeting a short time ago with a senior figure responsible for Germany’s economic growth and future trajectory. He was confident that his country’s many strengths would allow it to continue to prosper, and to lead in what it has labelled ‘Industry 4.0.’ This is the anticipated fourth industrial revolution (after the ones powered by steam; electricity and the internal combustion engine; and the computer) during which the real and virtual worlds will merge.

I believe this merger is coming, and coming fast. But who’s going to lead it? Which country’s companies will grow by creating new markets and disrupting existing ones? These questions matter not just because national pride is at stake, but also because national prosperity is.

Consumers around the world will benefit no matter where the next set of profound innovations originates. To some extent the same is true for investors, who can now invest in markets and companies far from home. Citizens and workers, however, do best when their countries are the ones doing the most to create the future. These countries tend to grow more quickly, expanding the tax base, job opportunities, and overall affluence.

Because it was the birthplace of the first industrial revolution, Britain pulled away from the rest of Europe during the 19th century. America then took over, becoming the world’s largest and most productive economy at the start of the 20th century as it developed many of the breakthroughs of the second and third. Will the lead change hands again as we head into Industry 4.0, the Second Machine Age, or whatever you want to call it?”

