Superintelligent machines may be the death of us, but far-less-smart AI can also lead to disasters, even cascading ones. In a Japan Times op-ed, philosopher Peter Singer thinks that AlphaGo’s stunning recent victory and the progress of driverless cars should spur an earnest discussion of the moral code of microchips and sensors and such. “It is not too soon to ask whether we can program a machine to act ethically,” he writes.
We shouldn’t set rules governing AI that will bind people deep into a future whose realities will differ from our own, but laying a foundation for constantly assessing and reassessing the prowess of machine intelligence is vital.
An excerpt:
Eric Schmidt, executive chairman of Google’s parent company, the owner of AlphaGo, is enthusiastic about what artificial intelligence means for humanity. Speaking before the match between Lee and AlphaGo, he said that humanity would be the winner, whatever the outcome, because advances in AI will make every human being smarter, more capable and “just better human beings.”
Will it? Around the same time as AlphaGo’s triumph, Microsoft’s “chatbot” — software named Taylor that was designed to respond to messages from people aged 18 to 24 — was having a chastening experience. Tay, as she called herself, was supposed to be able to learn from the messages she received and gradually improve her ability to conduct engaging conversations. Unfortunately, within 24 hours, people were teaching Tay racist and sexist ideas. When she started saying positive things about Hitler, Microsoft turned her off and deleted her most offensive messages.
I do not know whether the people who turned Tay into a racist were themselves racists, or just thought it would be fun to undermine Microsoft’s new toy. Either way, the juxtaposition of AlphaGo’s victory and Taylor’s defeat serves as a warning. It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.
Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, argues in his book “Superintelligence” that it will not always be as easy to turn off an intelligent machine as it was to turn off Tay.•
Tags: Eric Schmidt, Peter Singer