“Building An Artificial Brain…Is Possible In Principle”

The Singularity isn’t near, not really. It may not be theoretically impossible if humans continue to exist for eons more, but it isn’t rapidly approaching. Immortality isn’t around the corner, either, nor is a-mortality. Almost everyone reading this (and writing this, gulp) will die at some point in the 21st century.

Weak AI is causing disruption right now, and that disruption will probably increase to uncomfortable levels in the coming decades, which may create a huge societal challenge and a shock to our economic system, but these systems won’t be thinking machines capable of posing an existential risk. We should be considering the ramifications of conscious AI, not living in fear of it.

From an Economist article about Deep Learning and neural networks:

Better smartphones, fancier robots and bringing the internet to the illiterate would all be good things. But do they justify the existential worries of Mr Musk and others? Might pattern-recognising, self-programming computers be an early, but crucial, step on the road to machines that are more intelligent than their creators?

The doom-mongers have one important fact on their side. There is no result from decades of neuroscientific research to suggest that the brain is anything other than a machine, made of ordinary atoms, employing ordinary forces and obeying the ordinary laws of nature. There is no mysterious “vital spark,” in other words, that is necessary to make it go. This suggests that building an artificial brain—or even a machine that looks different from a brain but does the same sort of thing—is possible in principle.

But doing something in principle and doing it in fact are not remotely the same thing. Part of the problem, says Rodney Brooks, who was one of AI’s pioneers and who now works at Rethink Robotics, a firm in Boston, is a confusion around the word “intelligence.” Computers can now do some narrowly defined tasks which only human brains could manage in the past (the original “computers,” after all, were humans, usually women, employed to do the sort of tricky arithmetic that the digital sort find trivially easy). An image classifier may be spookily accurate, but it has no goals, no motivations, and is no more conscious of its own existence than is a spreadsheet or a climate model. Nor, if you were trying to recreate a brain’s workings, would you necessarily start by doing the things AI does at the moment in the way that it now does them. AI uses a lot of brute force to get intelligent-seeming responses from systems that, though bigger and more powerful now than before, are no more like minds than they ever were. It does not seek to build systems that resemble biological minds. As the pioneering computer scientist Edsger Dijkstra once remarked, asking whether a computer can think is a bit like asking “whether submarines can swim.”•
