Some neuroscientists disagree, but there doesn’t seem to be anything theoretically impossible about creating intelligent AI, especially if we’re talking about humans still being here to tinker 1,000 or 10,000 or 100,000 or 1,000,000 years from now. Most things will be possible given enough time, provided we’re still around as it passes.
In a lively Conversation piece, a raft of experts answers questions about AI, from intelligent machines to technological unemployment. The opening:
Question:
How plausible is human-like artificial intelligence?
Toby Walsh, Professor of AI:
It is 100% plausible that we’ll have human-like artificial intelligence.
I say this even though the human brain is the most complex system in the universe that we know of. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no physical laws we know of that would prevent us reproducing or exceeding its capabilities.
Kevin Korb, Reader in Computer Science:
Popular AI from Isaac Asimov to Steven Spielberg is plausible. What the question doesn’t address is: when will it be plausible?
Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.
What I find less plausible than the AI in fiction is the emotional and moral lives of robots. They seem to be either unrealistically empty, such as the emotionless Data in Star Trek, or unrealistically human-identical or superior, such as the AI in Spike Jonze’s Her.
All three – emotion, ethics and intelligence – travel together: none is genuinely possible in a meaningful form without the others. But fiction writers tend to treat them as separate. Plato’s Socrates made a similar mistake.