Someone who can read psychology and situations at will can do amazing things, but there aren’t enough such people to go around. Many of those leading governments, corporations, HR departments, etc., are clueless as fuck, and that has an unfortunate ripple effect on everyone else.
Enter data analysis. Can it get the answer right consistently, or at least more consistently than we do, identifying market inefficiencies and inequalities? In many cases, it could hardly do worse, especially when you factor in our unwitting biases. In “The Data or the Hunch,” an Economist Intelligent Life article, Ian Leslie uses the story of legendary record-industry figure John Hammond, who made informed bets on Billie and Basie and Bob without the aid of algorithms, to examine Moneyball-style analytics and human error. Could a computer have heard in Dylan’s voice what Hammond did? Will AI be better on average but prone to missing the black swan? An excerpt:
The old music industry turned many young acts into big stars. But it placed many, many more wagers on acts that didn’t sell enough records to pay back; William Goldman’s axiom, “Nobody knows anything,” applies to music as much as the movies. In the social-media era, big bets on untested talent are rarer. This is partly because there’s less money to spray around. But also because the record companies are using data to lower the risk.
This is the day of the analyst. In education, academics are working their way towards a reliable method of evaluating teachers, by running data on test scores of pupils, controlled for factors such as prior achievement and raw ability. The methodology is imperfect, but research suggests that it’s not as bad as just watching someone teach. A 2011 study led by Michael Strong at the University of California identified a group of teachers who had raised student achievement and a group who had not. They showed videos of the teachers’ lessons to observers and asked them to guess which were in which group. The judges tended to agree on who was effective and ineffective, but, 60% of the time, they were wrong. They would have been better off flipping a coin. This applies even to experts: the Gates Foundation funded a vast study of lesson observations, and found that the judgments of trained inspectors were highly inconsistent.
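The “value-added” idea the excerpt describes, judging teachers by whether their pupils beat the scores predicted from prior achievement, can be sketched in a few lines. This is a minimal illustration with invented numbers, not the methodology of the Strong or Gates studies: regress current scores on prior scores across all pupils, then treat each teacher’s mean residual as a rough effect estimate.

```python
# Minimal sketch of a value-added estimate (hypothetical data, not the
# methodology of any particular study): regress pupils' current scores on
# their prior scores, then average each teacher's residuals.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

def value_added(classes):
    """classes: {teacher: [(prior_score, current_score), ...]}.
    Returns each teacher's mean residual against the pooled fit."""
    pairs = [p for pupils in classes.values() for p in pupils]
    a, b = fit_line([x for x, _ in pairs], [y for _, y in pairs])
    return {
        teacher: sum(y - (a + b * x) for x, y in pupils) / len(pupils)
        for teacher, pupils in classes.items()
    }

# Toy example: teacher A's pupils beat the pooled prediction, B's fall short.
scores = {
    "A": [(50, 62), (60, 70), (70, 80)],
    "B": [(50, 52), (60, 60), (70, 70)],
}
va = value_added(scores)
# Expect va["A"] > 0 > va["B"]
```

Real value-added models control for much more than one prior score (demographics, class composition, measurement error), which is part of why even this “imperfect” methodology is hard to get right.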
THE LAST STRONGHOLD of the hunch is the interview. Most employers and some universities use interviews when deciding whom to hire or admit. In a conventional, unstructured interview, the candidate spends half an hour or so in a conversation directed at the whim of the interviewer. If you’re the one deciding, this is a reassuring practice: you feel as if you get a richer impression of the person than from the bare facts on their résumé, and that this enables you to make a better decision. The first theory may be true; the second is not.
Decades of scientific evidence suggest that the interview is close to useless as a tool for predicting how someone will do a job. Study after study has found that organisations make better decisions when they go by objective data, like the candidate’s qualifications, track record and performance in tests. “The assumption is, ‘if I meet them, I’ll know’,” says Jason Dana, of Yale School of Management, one of many scholars who have looked into the interview’s effectiveness. “People are wildly over-confident in their ability to do this, from a short meeting.” When employers adopt a holistic approach, combining the data with hunches formed in interviews, they make worse decisions than they do going on facts alone.
The interview isn’t just unreliable, it is unjust, because it offers a back door for prejudice.•