“If You Hear A Scenario About The World In 2050 And It Does Not Sound Like Science Fiction, It Is Certainly Wrong”

A hammer is a tool or a weapon depending on how you swing it, and the more powerful the tool, the more powerful the weapon.

Technology that excels at data collection and surveillance will be used to those ends in the best of times and will be employed in a harsh, even tyrannical, manner in the worst of times. The competing agendas among individuals, corporations and states almost demand it. I’m not suggesting Digital Leninism is the only possible future in our increasingly algorithmic world, but I do think determinism is embedded to some degree in technology, which can lead as well as follow. And there will be no plugs to pull if things don’t go as planned, and even if there were, yanking them from the wall would be the end of us as surely as it would our machines.

Yuval Noah Harari dissents from that view in a recent Guardian review of Max Tegmark’s Life 3.0, asserting that technology is what we make it. Even if that is true, take one good look at us and worry. The opening:

Artificial intelligence will probably be the most important agent of change in the 21st century. It will transform our economy, our culture, our politics and even our own bodies and minds in ways most people can hardly imagine. If you hear a scenario about the world in 2050 and it sounds like science fiction, it is probably wrong; but if you hear a scenario about the world in 2050 and it does not sound like science fiction, it is certainly wrong.

Technology is never deterministic: it can be used to create very different kinds of society. In the 20th century, trains, electricity and radio were used to fashion Nazi and communist dictatorships, but also to foster liberal democracies and free markets. In the 21st century, AI will open up an even wider spectrum of possibilities. Deciding which of these to realise may well be the most important choice humankind will have to make in the coming decades.

This choice is not a matter of engineering or science. It is a matter of politics. Hence it is not something we can leave to Silicon Valley – it should be among the most important items on our political agenda. Unfortunately, AI has so far hardly registered on our political radar. It has not been a major subject in any election campaign, and most parties, politicians and voters seem to have no opinion about it. This is largely because most people have only a very dim and limited understanding of machine learning, neural networks and artificial intelligence. (Most generally held ideas about AI come from SF movies such as The Terminator and The Matrix.) Without a better understanding of the field, we cannot comprehend the dilemmas we are facing: when science becomes politics, scientific ignorance becomes a recipe for political disaster.

Max Tegmark’s Life 3.0 tries to rectify the situation. Written in an accessible and engaging style, and aimed at the general public, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems.

Life 3.0 does a good job of clarifying basic terms and key debates, and in dispelling common myths. While science fiction has caused many people to worry about evil robots, for instance, Tegmark rightly emphasises that the real problem is with the unforeseen consequences of developing highly competent AI. Artificial intelligence need not be evil and need not be encased in a robotic frame in order to wreak havoc. In Tegmark’s words, “the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”•
