“Super-Intelligence May Be As Much Of A Curse As A Blessing”

New knowledge that awakens us from a collective stupor can be initially disquieting, whether it’s in the area of art or politics or science or anything else. It drives us from comfort. But we have a way of adapting, and eventually what shocks us is put in a museum. I would think that a superintelligence, should we ever create one, would adapt to fresh, and sometimes disappointing, information as well as or better than we would. But in “Will Super-intelligences Experience Philosophical Distress?,” an h+ post, philosopher and computer scientist John G. Messerly wonders whether this is so. An excerpt:

Will super-intelligences be troubled by philosophical conundrums? Consider classic philosophical questions such as: 1) What is real? 2) What is valuable? 3) Are we free? We currently don’t know the answer to such questions. We might not think much about them, or we may accept common answers—this world is real; happiness is valuable; we are free.
 
But our superintelligent descendants may not be satisfied with these answers, and they may possess the intelligence to find out the real answers. Now suppose they discover that they live in a simulation, or in a simulation of a simulation. Suppose they find out that happiness is unsatisfactory. Suppose they realize that free will is an illusion. Perhaps they won’t like such answers.
 
So superintelligence may be as much of a curse as a blessing. For example, if we learn to run ancestor simulations, we may increase worries that we are already living in one. We might program AIs to pursue happiness, and find out that happiness isn’t worthwhile. Or programming AIs may increase our concern that we are ourselves programmed. So superintelligence might work against us—our post-human descendants may be more troubled by philosophical questions than we are.•