Tim Harford


In his annual letter to shareholders, JPMorgan Chase CEO Jamie Dimon, whose former reputation as one of banking’s “good guys” was more a referendum on his sordid peers than a compliment to himself, defended the Trump Administration’s ardor for deregulation with the following dubious claim: “Essentially, too big to fail has been solved — taxpayers will not pay if a bank fails.”

His underlying assumption is that bankers will behave rationally because there will (potentially) be serious penalties should they grab for short-term sacks of cash in a manner that may eventually imperil their firms. This is a failure to understand psychology as well as economics.

Most of the poor behavior on Wall Street that led to the collapse of 2008 was done with very little thought for the future, that nebulous thing. Now mattered far more than then. Humans will always be, to some degree, irrational and exuberant, especially when money is involved. Smart regulations are drawn and enforced to save us from ourselves.

· · ·

Speaking of regulation: In his latest Financial Times column, Tim Harford writes of an oft-overlooked aspect of the challenges facing contemporary workers. Outsourcing, robotics and contracting aren’t the only threats to our positions. The uber-consolidation within American business sectors has made for “superstar firms,” behemoths that can operate with brute efficiency, a development that can be devastating for employees. It’s a problem that won’t likely solve itself.

An excerpt:

Superstar firms, instead, seem to be the cause. The story is simple. These businesses are highly productive and achieve more with less. Because of this profitability, more of the value added by the company flows to shareholders and less to workers. And what happens in these groups will tend to be reflected in the economy as a whole, because superstar firms have an increasingly important role.

All this poses a headache for policymakers — assuming policymakers can pay attention to the issue for long enough. The policy response required is subtle: after all, the growth of innovative, productive companies is welcome. It’s the unintended consequences of that growth that pose problems.

Those consequences are not easy to predict, but here are two possibilities. Either the US economy ends up like Amazon, or it ends up like Microsoft. The Amazon future is one of relentless competition, a paradise for consumers but a nightmare for workers, and with the ever-present risk that dominant businesses will snuff out competition as the mood takes them.

The Microsoft future epitomises the economist John Hicks’s quip: “the best of all monopoly profits is a quiet life”. Microsoft in the 1990s became famous as a once-brilliant company that decided to pull up the drawbridge, locking in consumers and locking out competitors.

In either scenario ordinary people lose out, unless they can enjoy returns from capital as well as returns from working.•


Elon Musk has made the unilateral decision that Mars will be ruled by direct democracy, and considering how dismal his political record has been over the last five months, with his bewildering bromance with the orange supremacist, it might be best if he blasted off from Earth sooner rather than later.

Another billionaire of dubious governmental wisdom also believed in direct democracy. That was computer-processing magnate Ross Perot who, in 1969, had a McLuhan-ish dream: an electronic town hall in which interactive television and computer punch cards would allow the masses, rather than elected officials, to decide key American policies. In 1992, he held fast to this goal–one that was perhaps more democratic than any society could survive–when he bankrolled his own populist third-party Presidential campaign. 

The opening of “Perot’s Vision: Consensus By Computer,” a New York Times article from that year by the late Michael Kelly:

WASHINGTON, June 5— Twenty-three years ago, Ross Perot had a simple idea.

The nation was splintered by the great and painful issues of the day. There had been years of disorder and disunity, and lately, terrible riots in Los Angeles and other cities. People talked of an America in crisis. The Government seemed to many to be ineffectual and out of touch.

What this country needed, Mr. Perot thought, was a good, long talk with itself.

The information age was dawning, and Mr. Perot, then building what would become one of the world’s largest computer-processing companies, saw in its glow the answer to everything.

One Hour, One Issue

Every week, Mr. Perot proposed, the television networks would broadcast an hourlong program in which one issue would be discussed. Viewers would record their opinions by marking computer cards, which they would mail to regional tabulating centers. Consensus would be reached, and the leaders would know what the people wanted.

Mr. Perot gave his idea a name that draped the old dream of pure democracy with the glossy promise of technology: “the electronic town hall.”

Today, Mr. Perot’s idea, essentially unchanged from 1969, is at the core of his “We the People” drive for the Presidency, and of his theory for governing.

It forms the basis of Mr. Perot’s pitch, in which he presents himself, not as a politician running for President, but as a patriot willing to be drafted “as a servant of the people” to take on the “dirty, thankless” job of rescuing America from “the Establishment,” and running it.

In set speeches and interviews, the Texas billionaire describes the electronic town hall as the principal tool of governance in a Perot Presidency, and he makes grand claims: “If we ever put the people back in charge of this country and make sure they understand the issues, you’ll see the White House and Congress, like a ballet, pirouetting around the stage getting it done in unison.”

Although Mr. Perot has repeatedly said he would not try to use the electronic town hall as a direct decision-making body, he has on other occasions suggested placing a startling degree of power in the hands of the television audience.

He has proposed at least twice — in an interview with David Frost broadcast on April 24 and in a March 18 speech at the National Press Club — passing a constitutional amendment that would strip Congress of its authority to levy taxes, and place that power directly in the hands of the people, in a debate and referendum orchestrated through an electronic town hall.•

In addition to the rampant myopia that would likely blight such a system, most Americans, with jobs and families and TV shows to binge watch, don’t take the time to fully appreciate the nuances of complex policy. The stunning truth is that even in a representative democracy in this information-rich age, we have enough uninformed voters lacking critical-thinking abilities to install an obvious con artist in the Oval Office to pick their pockets.

In a Financial Times column, Tim Harford argues in favor of the professional if imperfect class of technocrats, who get the job done, more or less. An excerpt:

For all its merits, democracy has always had a weakness: on any detailed piece of policy, the typical voter — I include myself here — does not understand what is really at stake and does not care to find out. This is not a slight on voters. It is a recognition of our common sense. Why should we devote hours to studying every policy question that arises? We know the vote of any particular citizen is never decisive. It would be a deluded voter indeed who stayed up all night revising for an election, believing that her vote would be the one to make all the difference.

So voters are not paying close attention to the details. That might seem a fatal flaw in democracy but democracy has coped. The workaround for voter ignorance is to delegate the details to expert technocrats. Technocracy is unfashionable these days; that is a shame.

One advantage of a technocracy is that it constrains politicians who are tempted by narrow or fleeting advantages. Multilateral bodies such as the World Trade Organization and the European Commission have been able to head off popular yet self-harming behaviour, such as handing state protection to whichever business has the best lobbyists.

Meanwhile independent central banks have been the grown-ups of economic policymaking. Once the immediate aftermath of the financial crisis had passed, elected politicians sat on their hands. Technocratic central bankers were — to borrow a phrase from Mohamed El-Erian, the economic adviser — “the only game in town” in sustaining a recovery.

A second advantage is that technocrats can offer informed, impartial analysis. Consider the Congressional Budget Office in the US, the Office for Budget Responsibility in the UK, and Nice, the National Institute for Health and Care Excellence.

Technocrats make mistakes, it’s true — many mistakes. Brain surgeons also make mistakes. That does not mean I’d be better off handing the scalpel to Boris Johnson.•


When the idea of online personalization was first presented to me by a Web 1.0 entrepreneur, I was unimpressed and uninterested, saying I preferred newspapers and magazines that introduced me to ideas I hadn’t been seeking. You could have your very own tailor-made newspaper or magazine, I was told, but I insisted that wasn’t what I would subscribe to. I naively believed others would feel the same way.

When the very young version of Mark Zuckerberg (who is still young) infamously said “a squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa,” he betrayed himself as stunningly callous at that point in his life but also demonstrated the myopia of personalization. Our new tools can enable us to live in a bubble, but that doesn’t mean they ennoble us.

Targeted information helps maximize online advertising revenue, but it’s lousy for democracy, paradoxically offering us greater freedom while simultaneously undermining it. Whether the Internet is inherently at odds with liberty is a valid question. And considering Fox News has been fake news for more than 20 years, the decentralized media in general has to be similarly analyzed.

The opening of Tim Harford’s latest Financial Times column:

“Our goal is to build the perfect personalised newspaper for every person in the world,” said Facebook’s Mark Zuckerberg in 2014. This newspaper would “show you the stuff that’s going to be most interesting to you.”

To many, that statement explains perfectly why Facebook is such a terrible source of news.

A “fake news” story proclaiming that Pope Francis had endorsed Donald Trump was, according to an analysis from BuzzFeed, the single most successful item of news on Facebook in the three months before the US election. If that’s what the site’s algorithms decide is interesting, it’s far from being a “perfect newspaper.”

It’s no wonder that Zuckerberg found himself on the back foot after Trump’s election. Shortly after his victory, Zuckerberg declared: “I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way . . . is a pretty crazy idea.” His comment was greeted with a scornful response.

I should confess my own biases here. I despise Facebook for all the reasons people usually despise Facebook (privacy, market power, distraction, fake-smile social interactions and the rest). And, as a loyal FT columnist, I need hardly point out that the perfect newspaper is the one you’re reading right now.

But, despite this, I’m going to stand up for Zuckerberg, who recently posted a 5,700-word essay defending social media. What he says in the essay feels like it must be wrong. But the data suggest that he’s right. Fake news can stoke isolated incidents of hatred and violence. But neither fake news nor the algorithmically driven “filter bubble” is a major force in the overall media landscape. Not yet.•

From Eli Pariser in the New York Times in 2011:

Like the old gatekeepers, the engineers who write the new gatekeeping code have enormous power to determine what we know about the world. But unlike the best of the old gatekeepers, they don’t see themselves as keepers of the public trust. There is no algorithmic equivalent to journalistic ethics.

Mark Zuckerberg, Facebook’s chief executive, once told colleagues that “a squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” At Facebook, “relevance” is virtually the sole criterion that determines what users see. Focusing on the most personally relevant news — the squirrel — is a great business strategy. But it leaves us staring at our front yard instead of reading about suffering, genocide and revolution.

There’s no going back to the old system of gatekeepers, nor should there be. But if algorithms are taking over the editing function and determining what we see, we need to make sure they weigh variables beyond a narrow “relevance.” They need to show us Afghanistan and Libya as well as Apple and Kanye.

Companies that make use of these algorithms must take this curatorial responsibility far more seriously than they have to date. They need to give us control over what we see — making it clear when they are personalizing, and allowing us to shape and adjust our own filters. We citizens need to uphold our end, too — developing the “filter literacy” needed to use these tools well and demanding content that broadens our horizons even when it’s uncomfortable.

It is in our collective interest to ensure that the Internet lives up to its potential as a revolutionary connective medium. This won’t happen if we’re all sealed off in our own personalized online worlds.•



While it won’t much help those who’ve invested their lives in the value of a taxi medallion, it would be beneficial to a growing population of workers if we could figure out a way to protect employees participating, willingly or otherwise, in the Gig Economy. In his latest Financial Times column, Tim Harford has a suggestion: “Libertarianism with a safety net.” If that sounds oxymoronic, it’s intentional. The writer believes we need to consciously uncouple the welfare and corporate states, providing assurances to all citizens so that they can be unmoored without being at risk.

An excerpt:

Are Uber drivers employees or not?

Uber maintains that they are not. That seems defensible: a driver can switch the app on or off at any time, or work for a competitor such as Lyft on a whim. Few employees who acted in this way would be employed for very long.

Then again, does a driver who puts in 60 or 70 hours a week providing Uber-assigned rides according to Uber-determined rules and rates not deserve some sort of security? Some authorities think so: the company has lost a number of rulings in California as judges and arbitrators have found that, in certain cases, Uber drivers are employees.

Such judgments are likely to vary from case to case and place to place, and the uncertainty helps nobody bar the lawyers. Alan Krueger, former chairman of President Barack Obama’s Council of Economic Advisers, draws a parallel with the emergence of the workers’ compensation system a century ago. Sensible rules were agreed, he says, once lawsuits over industrial accidents became expensive and unpredictable.

But what should the new rules be?•



In the big picture, I’m with the FT‘s Tim Harford on the issue of “economic singularity,” meaning that while I believe things may change more quickly going forward, I don’t believe scarcity will be solved immediately or soon thereafter. Not today and not tomorrow, unless we’re defining that latter term very broadly. 3D printers will be a tremendous boon in terms of material goods (though they will also bring new dangers), but the world isn’t on the verge of unbridled material wealth. That just doesn’t happen overnight.

In his latest column, Harford is skeptical that the moment of unthinkably great production has (almost) arrived, as prophesied by the messiahs of machine utopia: Robin Hanson, whom I mostly know from a rather kooky, sci-fi TED Talk; and Ray Kurzweil, a brilliant inventor who has reinvented himself as a sage of increasingly outré near-term predictions.

Harford’s opening:

Are we nearing a dramatic moment in economic history? Before humans developed agriculture, the world population — and thus the world economy — doubled in size roughly every 250,000 years. After acquiring the power of agriculture, the world economy doubled in size roughly every 900 years. After the industrial revolution, growth accelerated again, and since the second world war the world economy has been doubling in size roughly every 15 years. These numbers have been collated by Robin Hanson, an economist at George Mason University in Virginia; they are based on educated guesses by various economic historians.

If another step change of a similar scale were to happen, the world economy would double in size between now and Christmas. That is hard to imagine but, before the industrial revolution happened, it too would have been hard to imagine. And a small band of believers, not short on imagination, look forward to an economic “singularity”. Hanson is one of them, and the computer scientist Ray Kurzweil, author of The Singularity Is Near, is perhaps the most famous.

The singularity would be a point at which, rather than humans developing new technologies, the new technologies developed themselves. They would do so at a rate far beyond our comprehension. After the singularity, our civilisation would be in the hands of cyborgs, or brains uploaded into the cloud, or genetically enhanced superbeings, or something else able to make itself smarter at a tremendous rate. The future economy might consist of rapid interactions between artificial intelligences. The idea that it might double in size every few weeks no longer seems quite so unimaginable.

But it is one thing to imagine such a future. It is another thing to have confidence that it is approaching.•
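The doubling-time arithmetic in the excerpt can be sketched in a few lines of Python. This is only a rough illustration of the Hanson figures as quoted above, not his actual model:

```python
# Rough doubling times for the world economy, in years, per the Hanson
# estimates quoted above.
FORAGING, AGRICULTURE, INDUSTRIAL = 250_000, 900, 15

# Each historical transition shrank the doubling time by a large factor.
shrink_factors = [FORAGING / AGRICULTURE, AGRICULTURE / INDUSTRIAL]  # ~278x, 60x

# A step change "of a similar scale" would shrink it again, implying a
# doubling time measured in weeks rather than years.
for factor in shrink_factors:
    weeks = INDUSTRIAL / factor * 52
    print(f"shrink by {factor:.0f}x -> economy doubles every {weeks:.1f} weeks")
```

On those numbers, the implied post-singularity doubling time falls somewhere between roughly three and thirteen weeks — which is exactly Harford’s “double in size between now and Christmas.”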


Tim Harford’s FT reading of the recent New York Times Amazon exposé is, well, subjective.

Most of us probably recognized a place in which workers are asked to surrender their lives for a corporation, given impossible goals and abused and undermined when they prove human. Making it worse, there would seem to be a thick air of paranoia in the offices because a good percentage of the criticism is rooted in power consolidation, not job performance. It sounds like the Hunger Games with an on-campus farmer’s market, a place where sociopaths thrive. As anyone who’s worked in Internet companies (myself included) knows, these types of outfits are toxic and meant to be avoided, stock options or no stock options.

Harford’s take is different, and to me, puzzling. He sees a culture in Amazon that may actually be refreshingly straightforward, a paragon of things improving through workplace honesty. Perhaps its candor is just being misinterpreted as rudeness because the company embodies a rare virtue? If only that were so. That kind of arrangement would be great.

His opening:

Last month’s Amazon exposé in The New York Times evidently touched a white-collar nerve. Jodi Kantor and David Streitfeld described what might euphemistically be called an “intense” culture at Amazon’s headquarters in a feature article that promptly became the most commented-on story in the newspaper’s website’s history. As Kantor and Streitfeld told it, Amazon reduces grown men to tears and comes down hard on staff whose performance is compromised by distractions such as stillborn children, dying parents or simply having a family. Not for the first time, The Onion was 15 years ahead of the story with a December 2000 headline that bleakly satirised a certain management style: “There’s No ‘My Kid Has Cancer’ In Team.”

Mixed in with the grim anecdotes was a tale of a bracingly honest culture of criticism and self-criticism. (Rival firms, we are told, have been hiring Amazon workers after they’ve quit in exasperation, but are worried that these new hires may have become such aggressive “Amholes” that they won’t fit in anywhere else.)

At Amazon, performance reviews seem alarmingly blunt. One worker’s boss reeled off a litany of unachieved goals and inadequate skills. As the stunned recipient steeled himself to be fired, he was astonished when his superior announced, “Congratulations, you’re being promoted,” and gave him a hug.

It is important to distinguish between a lack of compassion and a lack of tact. It’s astonishing how often we pass up the chance to give or receive useful advice. If Amazon encourages its staff to be straight with each other about what should be fixed, so much the better.•


Terrible products that fail miserably delight us not only because of the time-tested humor of a spectacular pratfall, but because it’s satisfying to feel now and then that we’re not just a pack of Pavlovian dogs prepared to lap up whatever is fed us, especially if it’s a Colgate Ready Meal and a Crystal Pepsi.

In a really smart Financial Times column, Tim Harford takes a counterintuitive look at how companies can avoid launching surefire duds. The usual manner has been to find out which products representative people want, but he writes of an alternative strategy: Discover what consumers of horrible taste embrace and then bury those products deep in a New Mexico desert alongside Atari’s E.T. video games. Of course, it does say something that companies can’t just identify what’s awful. Why do almost all businesses become echo chambers?

An excerpt:

If savvy influential consumers can help predict a product’s success, might it not be that there are consumers whose clammy embrace spells death for a product? It’s a counter-intuitive idea at first but, on further reflection, there’s a touch of genius about it.

Let’s say that some chap — let’s call him “Herb Inger” — simply adored Clairol’s Touch of Yogurt shampoo. He couldn’t get enough of Frito-Lay’s lemonade (nothing says “thirst-quenching” like salty potato chips, after all). He snapped up Bic’s range of disposable underpants. Knowing this, you get hold of Herb and you let him try out your new product, a zesty Cayenne Pepper eyewash. He loves it. Now you know all you need to know. The product is doomed, and you can quietly kill it while it is still small enough to drown in your bathtub.

A cute idea in theory — does it work in practice? Apparently so. Management professors Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker have studied people, such as Herb, whom they call “Harbingers of Failure.” (Their paper by that name is forthcoming in the Journal of Marketing Research.) They used a data set from a chain of more than 100 convenience stores. The data covered more than 100,000 customers with loyalty cards, more than 10 million transactions and nearly 10,000 new products. Forty per cent of those products were no longer stocked after three years, and were defined as “flops.”•


One thing about America (and likely many other places) during the age of connectedness is that change happens at a greatly accelerated pace. In 2008, no Presidential candidate, including Barack Obama and Hillary Clinton, would dare support gay marriage, yet here we are less than a decade later with its shift to legal status complete. While there was certainly a long effort by LGBT groups to secure that victory, it’s hard to believe that final push wouldn’t have taken far longer in an earlier era. 

Ideas spread quickly now. Not even science fiction can really keep up with science. Often that will be wonderful and occasionally not. 

It’s the same for corporations as it is for cultural issues. Google realizes it won’t dominate search for decades, that its fortunes will be destabilized much more quickly than Microsoft’s or Hewlett-Packard’s were. That’s why Google X is so important. If Google doesn’t truly become the AI company that Larry Page initially envisioned in the next decade or so, it will be in its dotage, a young adult in a retirement community.

From Tim Harford’s latest FT column, concerning the Alchemist Fallacy:

While alchemists never figured out how to turn lead into gold, other craftsmen did develop a process with much the same economic implications. They worked out how to transform silica sand, one of the most common materials on earth, into the beautiful, versatile material we know as glass. It has an astonishing variety of uses from fibre optics to microscopes to aeroplane fuselages. But while gold remains highly prized, glass is now so cheap that we use it as disposable packaging for water.

When it was possible to restrict access to the secret of glassmaking, the guardians of that knowledge profited. Venetian glassmakers were clustered together on the island of Murano, where sparks from the furnaces would not endanger Venice itself. Venice had less success in preventing the secrets of glassmaking from spreading. Despite being forbidden on pain of death to leave the state of Venice, some of Murano’s glassmakers sought fortunes elsewhere. The wealth that could be earned as a glassmaking monopolist in some distant city must have been worth the risk.

That is the way of new ideas: they have a tendency to spread. Business partners will fall out and set up as rivals. Employees will leave to establish their own businesses. Time-honoured techniques such as industrial espionage or reverse engineering will be deployed. Sometimes innovators are happy to give their ideas away for nothing, whether for noble reasons or commercial ones. But it is very hard to stop ideas spreading entirely.•


There can be no reasonable argument against a living wage from a moral perspective. None. But the economics of the minimum wage are puzzling and often partisan. We’re warned that decent pay will kill jobs–even a philanthropic soul like the mid-life, sweater-clad iteration of Bill Gates holds this position–but is it true? In his latest Financial Times column, Tim Harford suggests there should be fewer opinions and more research. An excerpt:

The UK minimum wage took effect 16 years ago this week, on April 1 1999. As with the Equal Pay Act, economically literate commentators feared trouble, and for much the same reason: the minimum wage would destroy jobs and harm those it was intended to help. We would face the tragic situation of employers who would only wish to hire at a low wage, workers who would rather have poorly paid work than no work at all, and the government outlawing the whole affair.

And yet, the minimum wage does not seem to have destroyed many jobs — or at least, not in a way that can be discerned by slicing up the aggregate data. (One exception: there is some evidence that in care homes, where large numbers of people are paid the minimum wage, employment has been dented.)

The general trend seems a puzzling suspension of the law of supply and demand. One explanation of the puzzle is that higher wages may attract more committed workers, with higher morale, better attendance and lower turnover. On this view, the minimum wage pushed employers into doing something they might have been wise to do anyway. To the extent that it imposed net costs on employers, they were small enough to make little difference to their appetite for hiring.

An alternative response is that the data are noisy and don’t tell us much, so we should stick to basic economic reasoning.•



While it shocks me that test subjects in psychologist Solomon Asch’s experiments on conformity were at all swayed to ridiculous conclusions by groupthink, economist Tim Harford finds a silver lining in the cloud in his latest Financial Times column: Participants were independent more often than influenced. That’s true, but if a few minutes of suggestion can alter beliefs to a significant degree, what can longer term and more subtle social pressures do?

From Harford:

Asch gave his subjects the following task: identify which of three different lines, A, B or C, was the same length as a “standard” line. The task was easy in its own right but there was a twist. Each individual was in a group of seven to nine people, and everyone else in the group was a confederate of Asch’s. For 12 out of 18 questions they had been told to choose, unanimously, a specific incorrect answer. Would the experimental subject respond by fitting in with the group or by contradicting them? Many of us know the answer: we are swayed by group pressure. Offered a choice between speaking the truth and saying something socially convenient, we opt for social convenience every time.

But wait — “every time”? In popular accounts of Asch’s work, conformity tends to be taken for granted. I often describe his research myself in speeches as an example of how easily groupthink can set in and silence dissent. And this is what students of psychology are themselves told by their own textbooks. A survey of these textbooks by three psychologists, Ronald Friend, Yvonne Rafferty and Dana Bramel, found that the texts typically emphasised Asch’s findings of conformity. That was in 1990 but when Friend recently updated his work, he found that today’s textbooks stressed conformity more than ever.

This is odd, because the experiments found something more subtle. It is true that most experimental subjects were somewhat swayed by the group. Fewer than a quarter of experimental subjects resolutely chose the correct line every time. (In a control group, unaffected by social pressure, errors were rare.) However, the experiment found that total conformity was scarcer than total independence. Only six out of 123 subjects conformed on all 12 occasions. More than half of the experimental subjects defied the group and gave the correct answer at least nine times out of 12. A conformity effect certainly existed but it was partial.•


An iteration of the Asch Experiment:


In his latest Financial Times column, “Man v Machine (Again),” Tim Harford doesn’t readily dismiss Luddites as buffoons of history, arguing the laborers had a point: They didn’t believe weaving machines would eliminate all jobs, but that the jobs they took would be among the best. Perhaps that’s what’s about to happen in a large-scale way thanks to robots. One puzzlement voiced by Harford: Robotics has yet to increase productivity, so either traditional measurements are failing or no one quite knows what’s going on. An excerpt:

The Luddite anxiety has been dormant for many years but has recently enjoyed a resurgence. This is partly because journalists fear for their own jobs. Technological change has hit us in several ways — by moving attention online, where (so far) it is harder to charge money for subscriptions or advertising; by empowering unpaid writers to reach a large audience through blogging; and even by introducing robo-hacks, algorithms that can and do extract data from corporate reports and turn them into financial journalism written in plain(ish) English. No wonder human journalists have started writing about the economic damage the robots may wreak.

Another reason for the robo-panic is concern about the economic situation in general. Bored of blaming bankers, we blame robots too, and not entirely without reason. Inequality has risen sharply over the past 30 years. Many economists believe that this is partly because technological change has favoured a few highly skilled workers (and perhaps also more mundane trades such as cleaning) at the expense of the middle classes.

Finally, there is the observation that computers continue to develop at an exponential pace and are starting to make inroads in hitherto unexpected places — witness the self-driving car, voice-activated personal assistants and automated language translation. It is a long way from the spinning jenny to Siri.

What are we to make of all this? One view is that this is business as usual. We’ve had dramatic technological change for the past 300 years but it’s fine: we adapt, we still have jobs, we are incomparably richer — and the big headache of modernity isn’t unemployment but climate change.

A second view is that this time is radically different: the robots will, before long, render many people economically valueless — simply incapable of earning a living wage in a market economy. There will be plenty of money around but it will flow to the owners of the machines, and maybe also to the government through taxation. In principle, all could be well in such a future but it would require a radical reimagining of how an economy could work. The state, not the market, would be the arbiter of who gets what. Such a world is probably not imminent but, by 2050, who knows?

 . . . 

The third perspective is what we might call the neo-Luddite view: that technology may not destroy jobs in aggregate but rather changes the demand for skills in ways that are real and troubling. Median incomes in the US have been stagnant for decades. There are many explanations for that, including globalisation and the decline of collective bargaining, but technological change is foremost among them.•


I have far fewer concerns about Net Neutrality than I do about cable providers. We’re warned that innovation in the sector will be stymied now that throttling is illegal, yet regulated utilities manage to deliver electricity each day just fine. But even those who didn’t necessarily oppose the FCC’s decision can see some clouds in the commission’s bold call. Two such worried opinions follow.


From Tim Harford at the Financial Times:

This kind of product sabotage is far older than the internet itself. The French engineer and economist Jules Dupuit wrote back in 1849 that third-class railway carriages had no roofs, not to save money but to “prevent the passengers who can pay the second-class fare from travelling third class”. Throttling, 19th-century style.

But imagine that a law was introduced stipulating “railway neutrality” – that all passengers must be treated equally. That might not mean a better deal for poorer passengers. We might hope that everyone would ride in comfort at third-class prices, and that is not impossible. But a train company with a monopoly might prefer to operate only the first-class carriages at first-class prices. Poorer passengers would get no service at all. Product sabotage is infuriating but the alternative – a monopolist who screws every customer equally – is not necessarily preferable.

Fast lanes and slow lanes are a symptom of this market power but the underlying cause is much more important. The US needs more internet service providers, and the obvious way to get them is to force cable companies to unbundle the “last mile” and lease it to new entrants.

Alas, in the celebrated statement announcing a defence of net neutrality, the FCC also specifically ruled out taking that pro-competitive step. The share prices of cable companies? They went up.•


From Alex Pareene at Gawker:

Don’t get me wrong. Regulating broadband as a utility is (in my opinion) the correct policy. This is as close as Washington gets to a victory for the forces of “good.” I would just urge everyone to keep in mind that the forces of good in this instance won not because millions of people made their voices heard, but because the economic interests of a few giant corporations aligned with the position of those millions of people. And I say that not simply to be a killjoy (though I do love being a killjoy), but because if anything is to change, we mustn’t convince ourselves that actual victory for the masses is possible in this fundamentally broken system. Please don’t begin to believe that the American political establishment is anything but a corrupt puppet of oligarchy.

American politicians are responsive almost solely to the interests and desires of their rich constituents and interest groups that primarily represent big business. Casual observation of American politics over the last quarter-century or so should make that clear, but if you want supporting evidence, look to the research of Vanderbilt political scientist Larry Bartels, and Princeton’s Martin Gilens and Northwestern’s Benjamin Page. Gilens and Page’s conclusions are easily summed up: “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.”

Political battles are won when the rich favor them. America’s rich have lately become rather progressive on certain social issues, and those issues have rather suddenly gone from political impossibilities to achievable dreams.•


Tags: ,

Christmas is cancelled this year, but what if it were permanently abolished? What would that mean for the economy? At the Financial Times, Tim Harford wonders about such a scenario. An excerpt:

“Imagine that this Christmas day, the Queen, the Pope and even Oprah Winfrey announced that Christmas would be a purely religious occasion from 2015 onwards. There would be no presents and no feasting. If people respected this declaration, about $75bn-$100bn of extra consumer spending in the US alone would simply not materialise next December. What then?

One possibility is that the economy would be just fine. This is the classical view of macroeconomics: nothing significant would change after the abolition of Christmas. We would retain the same labour force and the same skills, the same factories and the same power stations, the same financial sector and the same logistics networks. The capacity of the economy to produce goods and services would be undiminished, and after a period of adjustment, during which tinsel factories would be retooled and Christmas tree plantations replanted, all would be well.

What would replace nearly $100bn of seasonal consumer spending? Nothing noticeable, but the replacement would happen just the same. The productive capacity freed up by the disappearance of Christmas could be turned to other uses; prices would fall just enough to tempt us to spend our money at other times of the year. Indeed, cancelling Christmas might even provide a modest boost to our prosperity in the longer term, as bunching up all that spending into a few short weeks strains factories and supply chains. Smoothing out our spending would be more efficient.

This classical view of how the economy works is also the view taken by Mr Osborne, the UK chancellor, and by Republicans in the US. Their view is that government stimulus spending does not work; cut it back, they argue, and the economy would adjust as the private sector took up the slack.

On the other side of the debate stands Mr [Ed] Balls, the UK’s shadow chancellor, as well as American stimulus proponents such as Mr [Paul] Krugman and Lawrence Summers. Mr Krugman once commented that panic about an attack from aliens would help the economy because it would get the government spending money again. Since aliens are not available, Santa Claus will have to do.

This Keynesian view of how the economy works differs from the classical view in one crucial way: it argues that supply does not always and automatically create demand. When Christmas is abolished (or a financial crisis devastates people’s confidence and their spending power), consumers will plan to spend less. And if consumers plan to spend less, price adjustments may not induce them to change their minds; the price adjustments may not even happen. If Christmas spending disappears, it may take many years for the economy to replace it. Those factories will still be there and the workers will remain available — but they will stand idle.

Who is right?”



A more extreme way to look at wealth inequality is to compare not individuals within nations but the nations themselves. In a Financial Times piece, Tim Harford argues that the biggest lottery to win in life isn’t genetics or parentage but a fortunate patch of the map. An excerpt:

“I’ve been a lucky boy. I could start with the ‘boy’ fact. We men enjoy all sorts of privileges, many of them quite subtle these days, but well worth having. I’m white. I’m an Oxford graduate and I am the son of Oxbridge graduates….

Imagine lining up everyone in the world from the poorest to the richest, each standing beside a pile of money that represents his or her annual income. The world is a very unequal place: those in the top 1 per cent have vastly more than those in the bottom 1 per cent – you need about $35,000 after taxes to make that cut-off and be one of the 70 million richest people in the world. If that seems low, it’s $140,000 after taxes for a family of four – and it is also about 100 times more than the world’s poorest people have.

What determines who is at the richer end of that curve is, mostly, living in a rich country. Branko Milanovic, a visiting presidential professor at City University New York and author of The Haves and the Have-Nots, calculates that about 80 per cent of global inequality is the result of inequality between rich nations and poor nations. Only 20 per cent is the result of inequality between rich and poor within nations. The Oxford thing matters, of course. But what matters much more is that I was born in England rather than Bangladesh or Uganda.”


Tags: ,

In a Financial Times essay, economist Tim Harford finds a link no one else was looking for: the scorched-earth strategies which drive both Amazon and contemporary Russia. An excerpt:

“Brad Stone’s excellent book, The Everything Store: Jeff Bezos and the Age of Amazon, paints Amazon’s founder to be a visionary entrepreneur, dedicated to serving his customers. But it also reports that Bezos was willing to take big losses in the hope of weakening competitors. Zappos, the much-loved online shoe retailer, faced competition from an Amazon subsidiary that first offered free shipping and then started paying customers $5 for every pair of shoes they ordered. Quidsi, which ran Diapers.com, was met with a price war from “Amazon Mom.” Industry insiders told Stone that Amazon was losing $1m a day just selling nappies. Both Zappos and Quidsi ended up being bought out by Amazon.

When the weapons of war are low prices, consumers benefit at first. But the long term looks worrying: a future in which nobody dares to compete with Amazon. Apple is a striking contrast: the company’s refusal to compete aggressively on price makes it hugely profitable but has also attracted a swarm of competitors.

Consider a grimmer parallel. Vladimir Putin’s Russia is the chain store. Georgia, Ukraine and many other former Soviet states or satellites must consider whether to seek ties with the west. In each case Putin must decide whether to accommodate or open costly hostilities. The conflict in Ukraine has been disastrous for Russian interests in the short run but it may have bolstered Putin’s personal position. And if his strategy convinces the world that Putin will never share prosperity, his belligerence may yet pay off.

I feel a little guilty comparing Bezos and Putin. My only regret about Bezos’s Amazon is that there aren’t three other companies just like it. I do not feel the same about Putin’s Russia.”


Tags: , , ,

A few more thoughts on autonomous vehicles, these from economist Tim Harford at the Financial Times. One thing I learned from his article: Germany has been road-testing robocars for two decades. The opening:

“Last Wednesday Vince Cable, the UK business secretary, invited British cities to express their interest in being used as testing grounds for driverless cars. The hope is that the UK will gain an edge in this promising new industry. (German autonomous cars were being tested on German, French and Danish public roads 20 years ago, so the time is surely ripe for the UK to leap into a position of technological leadership.)

On Tuesday, a very different motoring story was in the news. Mark Slater, a lorry driver, was convicted of murdering Trevor Allen. He had lost his temper and deliberately driven a 17 tonne lorry over Mr Allen’s head. It is a striking juxtaposition.

The idea of cars that drive themselves is unsettling, but with drivers like Slater at large, the age of the driverless car cannot come quickly enough.

But the question of how safe robotic cars are, or might become, is rather different from the question of how the risks of a computer-guided car are perceived, and how they might be repackaged by regulators, insurers and the courts.

On the first question, it is highly likely that a computer will one day do a better, safer, more courteous job of driving than you can. It is too early to be certain of that, because serious accidents are rare. An early benchmark for Google’s famous driverless car programme was to complete 100,000 miles driving on public roads – but American drivers in general only kill someone every 100m miles.

Still, the safety record so far seems good, and computers have some obvious advantages. They do not get tired, drunk or angry. They are absurdly patient in the face of wobbly cyclists, learner drivers and road hogs.

But there are bound to be hiccups.”



Some people with tremendous struggles are happy and some with uncommon good luck are bitter. A lot in the latter group have egos blocking out the sun.

And expectations also matter. They can often be adjusted as we respond to the stimuli we encounter, as we grow to accept a new normal. But there are some things that make us miserable no matter how we look at them. From Tim Harford’s Financial Times piece about so-called “happynomics”:

“It turns out that we grow accustomed to some conditions, happy or unhappy, but not to all.

The study which sparked the idea that we can get used to almost anything was published by Philip Brickman, Dan Coates and Ronnie Janoff-Bulman in 1978. It compared the happiness of paraplegic and quadriplegic accident victims to that of lottery winners – and discovered that the disabled people were scarcely less happy than the millionaires. Apparently we can bounce back from some awful experiences. (It is sad and troubling that a few years after making this discovery, Brickman killed himself.)

But how exactly is this apparent process of habituation supposed to work? Here’s where happiness economics has the long-run data to help. Consider bereavement: we cope by paying less attention as time goes by. A friend said to me, months after my mother and his father had both died, ‘You don’t get any less sad when you think about them but you think about them less often.’

The same is true, alas, for the nice things in life: we begin to take them for granted too. But there are experiences – unemployment is one of them; an unhappy marriage another – that depress us for as long as they last. What those experiences seem to have in common is the ability to hold our attention. Commuting, although shorter and less serious, is a classic case – annoying but also stimulating enough that we keep noticing the annoyance.

This suggests that we should look for the opposite of commuting: positive new experiences that are engaging enough to keep being noticed.”

Tags: , , ,

At the Financial Times, Tim Harford explains why (almost) nobody saw the financial collapse of 2008 coming and why economic predictions are usually so lacking:

“Why are forecasts so poor? The chief explanation is that the economy is complicated and we don’t understand it well enough to make forecasts. We don’t even fully understand recent economic history.

Ben Chu, economics editor of The Independent, recently took a look at the UK recession of the 1990s in the light of two decades of data revisions. From the vantage point of 1995, the economy in late 1992 was slightly smaller than the economy in early 1988. But today’s best guess is that the economy of late 1992 was almost 6 per cent larger than in early 1988. The Office for National Statistics has substantially revised its view.

Not only is it difficult to forecast the future, then – forecasting the past isn’t straightforward either. What chance does any prognosticator have?

A second explanation for forecasting’s fallibility is that there is little incentive to do better. The kind of institutional chief economist whose pronouncement makes it into Consensus Forecasts will stick to the middle of the road. Most countries, most of the time, are not in recession, so a safe strategy is never to forecast one. Of course there are the mavericks who receive media attention for making provocative predictions and are lionised when they are right. Their incentives are different but it is unclear that their overall track record is any better.”


Everything is quantified and measured and analyzed now–or soon will be–but that wasn’t always the case. The recently deceased economist Gary Becker believed his discipline could be brought to bear on all aspects of life. The opening of a defense of his mindset from fellow economist Tim Harford at the Financial Times:

“Perhaps it was inevitable that there would be something of the knee-jerk about the reaction to the death of the Nobel Prize-winning economist Gary Becker. Published obituaries acknowledged his originality, productivity and influence, of course. But there are many who lament Becker’s economic imperialism – the study of apparently non-economic aspects of life. It is now commonplace for those in the field to consider anything from smoking to parenting to the impact of the contraceptive pill. That is Gary Becker’s influence at work.

Becker makes a convenient bogeyman. It did not help that he could be awkward in discussing emotional issues – despite his influence inside the economics profession, he was not a slick salesman outside it. So it is easy to caricature a man who writes economic models for discrimination, for suicide and for the demand for children. How blinkered such a man must be, the critics say; how intellectually crude and emotionally stunted.

The criticism is unfair. Gary Becker’s economic imperialism was an exercise in soft power. Becker’s view of the world was not that economics was the last word on all human activity. It was that no matter what the subject under consideration, economics would always have something insightful to add. And for many years it fell to Becker to find that insight.”

Tags: ,

Data, no matter how big or small, is only as good as the people – or algorithms – deciphering it. Even when Big Data can give us an answer to a problem, it doesn’t necessarily give us the root of the problem. When it’s read well, it’s a good complement to other methods of research; when read poorly, it can be used to create faulty policy. From Tim Harford’s latest Financial Times piece:

“Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote ‘The End of Theory,’ a provocative essay published in Wired in 2008, ‘with enough data, the numbers speak for themselves.’

Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be ‘complete bollocks. Absolute nonsense.’

Found data underpin the new internet economy as companies such as Google, Facebook and Amazon seek new ways to understand our lives through our data exhaust. Since Edward Snowden’s leaks about the scale and scope of US electronic surveillance it has become apparent that security services are just as fascinated with what they might learn from our data exhaust, too.

Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.

But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.”

Tags: ,

In a new Financial Times article, Tim Harford looks at all angles of behavioral economics, which has reached its ascendancy in the years since Daniel Kahneman’s Nobel Prize win in 2002. In Kahneman’s hands, the discipline seems to have a lot of merit, but all too often with others it feels like shaky narratives supplanting other shaky narratives. There are so many variables in the world that easy answers can obscure complex situations. Did the Broken Windows Theory really lead to a reduction in crime in NYC when other cities that didn’t implement it experienced similar decreases? Is the answer more complicated? Is it not completely knowable? Does just replacing shattered glass make it easier to avoid addressing why we’re producing criminals? From Harford:

“In 2010, behavioural economists George Loewenstein and Peter Ubel wrote in The New York Times that ‘behavioural economics is being used as a political expedient, allowing policy makers to avoid painful but more effective solutions rooted in traditional economics.’

For example, in May 2010, just before David Cameron came to power, he sang the praises of behavioural economics in a TED talk. ‘The best way to get someone to cut their electricity bill,’ he said, ‘is to show them their own spending, to show them what their neighbours are spending, and then show what an energy-conscious neighbour is spending.’

But Cameron was mistaken. The single best way to promote energy efficiency is, almost certainly, to raise the price of energy. A carbon tax would be even better, because it not only encourages people to save energy but to switch to lower-carbon sources of energy. The appeal of a behavioural approach is not that it is more effective but that it is less unpopular.

Thaler points to the experience of Cass Sunstein, his Nudge co-author, who spent four years as regulatory tsar in the Obama White House. ‘Cass wanted a tax on petrol but he couldn’t get one, so he pushed for higher fuel economy standards. We all know that’s not as efficient as raising the tax on petrol – but that would be lucky to get a single positive vote in Congress.’

Should we be trying for something more ambitious than behavioural economics?”

Tags: , , ,

At the Browser, economist Tim Harford comments on Charles Perrow’s book Normal Accidents, which suggests that as our technological systems grow more complex, greater chaos inevitably follows:

Tim Harford: For him, at the time he published the first edition of this book, Three Mile Island [the nuclear core meltdown in Pennsylvania in 1979] was the definitive one. It prefigured Chernobyl. And then he revisits the subject at the end of the 1990s. The book goes through awful accidents in complex systems and explores why they happened – the human failings that go into them, the systemic consequences, the fact you could have a very small error that propagates and propagates. It’s quite a technical book, but it’s wonderful and completely compelling.

I originally read the book because I wanted to write about a particular accident. My sister is a qualified safety engineer, and she gave me a bunch of safety engineering books. But as I read Perrow’s book, I realised that it could have been written about the financial crisis. That was really shocking to me – this realisation that these banks and their interconnections were, in many ways, the same kind of system as a nuclear reactor, or at least had very important similarities.

And is there any way of avoiding this kind of disaster in future? Does the book shed any light on that?

Tim Harford: Perrow is, in many ways, a pessimist. He says that if the system is too complicated, you will have accidents. There’s nothing you can do about it. Looking back at the history of financial crises, that’s probably appropriate. But one thing that comes out of the book is the idea that we tend to make systems more complex by adding safety systems on top of them, and that the safety systems themselves create new ways for things to go wrong. That was a key problem in the financial crisis. A lot of banks were taking bets and then insuring themselves with credit default swaps (CDS). Credit default swaps were, basically, insurance contracts that banks wrote, often with [the big insurance company] AIG. Or banks were repackaging sub-prime mortgages into vehicles that were supposed to make risky loans safe. These two innovations – the packages of sub-prime loans and the credit default swaps – were both safety systems. But they were both absolutely crucial in explaining why the system blew up. I think that’s a central and really useful idea, that these safety systems are probably not helpful – and even when they are helpful, they will have unintended consequences.

Tags: ,