“An Artificial Intelligence Can Be Far Less Humanlike In Its Motivations Than A Green Scaly Space Alien”


An excerpt from Superintelligence that Nick Bostrom adapted for Slate, which stresses that an AI doesn’t need to be like humans to surpass us:

“An artificial intelligence can be far less humanlike in its motivations than a green scaly space alien. The extraterrestrial (let us assume) is a biological creature that has arisen through an evolutionary process and can therefore be expected to have the kinds of motivation typical of evolved creatures. It would not be hugely surprising, for example, to find that some random intelligent alien would have motives related to one or more items like food, air, temperature, energy expenditure, occurrence or threat of bodily injury, disease, predation, sex, or progeny. A member of an intelligent social species might also have motivations related to cooperation and competition: Like us, it might show in-group loyalty, resentment of free riders, perhaps even a vain concern with reputation and appearance.

An AI, by contrast, need not care intrinsically about any of those things. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paper clips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a humanlike set of values and dispositions. Compare how easy it is to write a program that measures how many digits of pi have been calculated and stored in memory with how difficult it would be to create a program that reliably measures the degree of realization of some more meaningful goal—human flourishing, say, or global justice.”
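
Bostrom’s closing comparison is easy to make concrete. Here is a minimal Python sketch of the asymmetry he describes; the function names are hypothetical illustrations, not anything from the book. The “simple goal” metric is a one-liner, while the “meaningful goal” metric has no known implementation:

    # A sketch of Bostrom's asymmetry: measuring progress toward a
    # simple goal is trivial; measuring a meaningful one is not.
    # Both function names are hypothetical illustrations.

    def digits_of_pi_stored(memory: str) -> int:
        """Goal metric for a pi-calculating AI: count the digits of pi
        currently held in memory. Trivial to write."""
        return sum(ch.isdigit() for ch in memory)

    def degree_of_human_flourishing(world_state: object) -> float:
        """Goal metric for a human-aligned AI. Nobody knows how to
        write this function reliably; that gap is Bostrom's point."""
        raise NotImplementedError("no formal definition of flourishing")

    memory = "3.14159265358979323846"
    print(digits_of_pi_stored(memory))  # -> 21

The first metric runs immediately; the second cannot even be specified, which is the whole argument in miniature.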
