Luke Muehlhauser

One tricky point about designing autonomous machines is that if we embed our current moral codes in them, we'll unwittingly stunt moral progress. Our morality still has a lot of room to develop, so theirs needs to as well. I don't think Strong AI is arriving anytime soon, but the question is worth pondering. From Adrienne LaFrance at the Atlantic:

How do we build machines that will make the world better, even when they start running themselves? And, perhaps the bigger question therein, what does a better world actually look like? Because if we teach machines to reflect on their actions based on today’s human value systems, they may soon be outdated themselves. Here’s how MIRI researchers Luke Muehlhauser and Nick Bostrom explained it in a paper last year:

Suppose that the ancient Greeks had been the ones to face the transition from human to machine control, and they coded their own values as the machines’ final goal. From our perspective, this would have resulted in tragedy, for we tend to believe we have seen moral progress since the ancient Greeks (e.g. the prohibition of slavery). But presumably we are still far from perfection.

We therefore need to allow for continued moral progress. One proposed solution is to give machines an algorithm for figuring out what our values would be if we knew more, were wiser, were more the people we wished to be, and so on. Philosophers have wrestled with this approach to the theory of values for decades, and it may be a productive solution for machine ethics.•
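To make that proposal a little more concrete: the paper doesn’t spell out any actual algorithm, but here’s a toy Python sketch of what a value-extrapolation loop might look like in the abstract. Every name in it (`Values`, `idealize`, `extrapolate`) is hypothetical, invented purely to illustrate the idea of refining values rather than freezing today’s in place.

```python
# A toy sketch of "extrapolated values" -- not the paper's algorithm, just an
# illustration. The entire unsolved problem hides inside idealize(): how to
# revise encoded values toward what we'd want "if we knew more, were wiser."

from dataclasses import dataclass

@dataclass(frozen=True)
class Values:
    """A stand-in for some machine-readable encoding of human values."""
    weights: tuple  # e.g., how strongly we weigh welfare, fairness, liberty, ...

def idealize(v: Values) -> Values:
    """One step of moral reflection. Here it's a placeholder update that
    nudges each weight toward a hypothetical 'wiser' fixed point."""
    return Values(tuple(w * 0.9 + 0.1 for w in v.weights))

def close_enough(a: Values, b: Values, eps: float = 1e-9) -> bool:
    return all(abs(x - y) < eps for x, y in zip(a.weights, b.weights))

def extrapolate(current: Values, max_steps: int = 10_000) -> Values:
    """Iterate reflection until the values stop changing, instead of locking
    in whatever we happen to believe today (the 'ancient Greeks' failure mode)."""
    v = current
    for _ in range(max_steps):
        nxt = idealize(v)
        if close_enough(v, nxt):
            return nxt
        v = nxt
    return v

if __name__ == "__main__":
    # Ancient-Greek-flavored starting weights and modern ones converge to the
    # same point only if idealize() genuinely captures moral progress.
    print(extrapolate(Values((0.2, 0.8))))
```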


A couple of segments from a new Ask Me Anything on Reddit conducted by Singularity Institute CEO Luke Muehlhauser.

_______________________________________

Question:

Given the rate of technological development, what age do you believe people who are young today (20 and under) will live to?

Luke Muehlhauser:

That one is too hard to predict for me to bother trying.

I will note that it’s possible that the post-rock band Tortoise was right that “millions now living will never die” (awesome album, btw). If we invest in the research required to make AI do good things for humanity rather than accidentally catastrophic things, one thing that superhuman AI (and thus a rapid acceleration of scientific progress) could produce is the capacity for radical life extension, and then later the capacity for whole brain emulation, which would enable people to make backups of themselves and live for millions of years. (As it turns out, the things we call “people” are particular computations that currently run in human wetware but don’t need to be running on such a fragile substrate.)
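The backup idea is easier to see with a loose analogy in code, mine rather than Muehlhauser’s: if a person is a computation plus its state, then a backup is just serialized state, and losing one substrate needn’t mean losing the computation. All the names below are hypothetical.

```python
# A loose analogy (my illustration, not from the AMA): if what matters is the
# computation and its state rather than the hardware, a "backup" is just
# serialized state that any adequate substrate can resume.

import json

def snapshot(state: dict) -> str:
    """Serialize the running state: the 'backup'."""
    return json.dumps(state)

def resume(backup: str) -> dict:
    """Restore the identical state on whatever substrate is available."""
    return json.loads(backup)

if __name__ == "__main__":
    mind = {"memories": ["first day of school"], "tick": 42}
    backup = snapshot(mind)
    del mind                   # the original, fragile substrate fails...
    restored = resume(backup)  # ...but the computation resumes elsewhere
    print(restored["tick"])    # -> 42
```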

_______________________________________

Question:

I’ve had one major question/concern since I heard about the singularity.

At the point when computers outstrip human intelligence in all or most areas, won’t computers then take over doing most of the interesting and meaningful work? All decisions that take any sort of thinking will then be done by computers, since they will make better decisions. Politics, economics, business, teaching. They’ll even make better art, as they can better understand how to create emotionally moving objects/films/etc.

While we will have unprecedented levels of material wealth, won’t we have a severe crisis of meaning, since all major projects (personal and public) will be run by our smarter silicon counterparts? Will humans be reduced to manual labor, as that’s the only role that makes economic sense?

Will the singularity foment an existential crisis for humanity?

Luke Muehlhauser:

At the point when computers outstrip human intelligence in all or most areas, won’t computers then take over doing most of the interesting and meaningful work?

Yes.

Will humans be reduced to manual labor, as that’s the only role that makes economic sense?

No, robots will be better than humans at manual labor, too.

While we will have unprecedented levels of material wealth, won’t we have a severe crisis of meaning… Will the singularity foment an existential crisis for humanity?

It’s a good question. The major worry is that the singularity causes an “existential crisis” in the sense that it causes a human extinction event. If we manage to do the math research required to get superhuman AIs working in our favor, and we “merely” have to deal with an emotional/philosophical crisis, I’ll be quite relieved.

One exploration of what we could do and care about when most projects are handled by machines is (rather cheekily) called “fun theory.” I’ll let you read up on it.
