“How Do We Build Machines That Will Make The World Better, Even When They Start Running Themselves?”

One tricky point about designing autonomous machines is that if we embed our current moral codes in them, we'll unwittingly stunt moral progress. Our morality still has plenty of room to develop, so theirs needs that room as well. I don't think Strong AI is arriving anytime soon, but it's a question worth pondering. From Adrienne LaFrance at The Atlantic:

How do we build machines that will make the world better, even when they start running themselves? And, perhaps the bigger question therein, what does a better world actually look like? Because if we teach machines to reflect on their actions based on today’s human value systems, they may soon be outdated themselves. Here’s how MIRI researchers Luke Muehlhauser and Nick Bostrom explained it in a paper last year:

Suppose that the ancient Greeks had been the ones to face the transition from human to machine control, and they coded their own values as the machines’ final goal. From our perspective, this would have resulted in tragedy, for we tend to believe we have seen moral progress since the ancient Greeks (e.g., the prohibition of slavery). But presumably we are still far from perfection.

We therefore need to allow for continued moral progress. One proposed solution is to give machines an algorithm for figuring out what our values would be if we knew more, were wiser, were more the people we wished to be, and so on. Philosophers have wrestled with this approach to the theory of values for decades, and it may be a productive solution for machine ethics.•