In his NYRB review of Daniel Dennett’s From Bacteria to Bach and Back: The Evolution of Minds, Thomas Nagel is largely laudatory even though he believes his fellow philosopher is ultimately guilty of “maintaining a thesis at all costs,” writing that:
Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them.
Nagel draws an analogy between Dennett’s ideas and the Behaviorism of B.F. Skinner and other mid-century psychologists, a theory that was never truly satisfactory in explaining the human mind. Dennett’s belief that we’re more machine-like than we want to believe is probably accurate, though his assertion that all consciousness is illusory (if that’s what he’s arguing) seems off.
Dennett’s life’s work on consciousness and evolution has certainly crested at the right moment, as we’re beginning to wonder in earnest about AI and non-human consciousness, which seems possible at some point if not on the immediate horizon. In a Financial Times interview conducted by John Thornhill, Dennett speaks to the nature and future of robotics.
AI experts tend to draw a sharp distinction between machine intelligence and human consciousness. Dennett is not so sure. Where many worry that robots are becoming too human, he argues humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sorta robots”.
“I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are,” he says. “We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.” …
Dennett has long been a follower of the latest research in AI. The final chapter of his book focuses on the subject. There has been much talk recently about the dangers posed by the emergence of a superintelligence, when a computer might one day outstrip human intelligence and assume agency. Although Dennett accepts that such a superintelligence is logically possible, he argues that it is a “pernicious fantasy” that is distracting us from far more pressing technological problems. In particular, he worries about our “deeply embedded and generous” tendency to attribute far more understanding to intelligent systems than they possess. Giving digital assistants names and cutesy personas worsens the confusion.
“All we’re going to see in our own lifetimes are intelligent tools, not colleagues. Don’t think of them as colleagues, don’t try to make them colleagues and, above all, don’t kid yourself that they’re colleagues,” he says.
Dennett adds that if he could lay down the law he would insist that the users of such AI systems were licensed and bonded, forcing them to assume liability for their actions. Insurance companies would then ensure that manufacturers divulged all of their products’ known weaknesses, just as pharmaceutical companies reel off all their drugs’ suspected side-effects. “We want to ensure that anything we build is going to be a systemological wonderbox, not an agency. It’s not responsible. You can unplug it any time you want. And we should keep it that way,” he says.