Paul Taylor

For the rest of this century (at least), it’s more likely machines will come for our jobs than for our lives. The extinction of human beings by superintelligence isn’t a threat in sight, thankfully, but while automation could lead to post-scarcity, it might instead, if we’re not careful and wise, stoke societal chaos.

In his London Review of Books essay about machine learning, Paul Taylor writes about the field’s potential while acknowledging that, despite early promise, it’s not certain how well it will all work out. The short-term issue, he believes, may be a divide between the haves and have-nots of computing power. An excerpt:

The solving of problems that until recently seemed insuperable might give the impression that these machines are acquiring capacities usually thought distinctively human. But although what happens in a large recurrent neural network better resembles what takes place in a brain than conventional software does, the similarity is still limited. There is no close analogy between the way neural networks are trained and what we know about the way human learning takes place. It is too early to say whether scaling up networks like Inception will enable computers to identify not only a cat’s face but also the general concept ‘cat’, or even more abstract ideas such as ‘two’ or ‘authenticity’. And powerful though Google’s networks are, the features they derive from sequences of words are not built from the experience of human interaction in the way our use of language is: we don’t know whether or not they will eventually be able to use language as humans do.

In 2006 Ray Kurzweil wrote a book about what he called the Singularity, the idea that once computers are able to generate improvements to their own intelligence, the rate at which their intelligence improves will accelerate exponentially. Others have aired similar anxieties. The philosopher Nick Bostrom wrote a bestseller, Superintelligence (2014), examining the risks associated with uncontrolled artificial intelligence. Stephen Hawking has suggested that building machines more intelligent than we are could lead to the end of the human race. Elon Musk has said much the same. But such dystopian fantasies aren’t worth worrying about yet. If there is something to be worried about today, it is the social consequences of the economic transformation computers might bring about – that, and the growing dominance of the small number of corporations that have access to the mammoth quantities of computing power and data the technology requires.•