Cecilia Tilli


Weak AI and Strong AI (or Narrow AI and Artificial General Intelligence, if you prefer) can both help and hurt us, though on very different orders of magnitude. The former can mow our lawns, disrupt the gardening industry and perhaps run down a cricket that you or I would have swerved to avoid (though you and I haven’t been angels to the creatures, either). The latter is probably necessary if we are to avoid human extinction, although it may also cause it. In her Slate essay “Striking the Balance on Artificial Intelligence,” philosopher and neuroscientist Cecilia Tilli calmly assesses the situation. An excerpt:

The benefits of narrow A.I. systems are clear: They free up time by automatically completing tasks that are time-consuming for humans. They are not completely autonomous, but many require only minimal human intervention—the better the system, the less we need to do. A.I.s can also do other useful things that humans can’t, like proving certain mathematical theorems or uncovering hidden patterns in data.

Like other technologies, however, current A.I. systems can cause harm if they fail or are badly designed. They can also be dangerous if they are intentionally misused (e.g., a driverless car carrying bombs or a drone carrying drugs). There are also legal and ethical concerns that need to be addressed as narrow A.I. becomes smarter: Who is liable for damages caused by autonomous cars? Should armed drones be allowed total autonomy?

Special consideration must be given to economic risks. The automation of jobs is on the rise. According to a study by Carl Frey and Michael Osborne (who are my colleagues at the University of Oxford), 47 percent of current U.S. jobs have a high probability of being automated by 2050, and a further 23 percent have a medium risk. Although the consequences are uncertain, some fear that increased job automation will lead to increased unemployment and inequality.

Given the already widespread use of narrow A.I., it’s easy to imagine the benefits of strong A.I. (also known as artificial general intelligence, or AGI). AGI should allow us to further automate work, amplify our ability to perform difficult tasks, and maybe even replace humans in some fields. (Think of what a fully autonomous, artificial surgeon could achieve.) More importantly, strong A.I. may help us finally solve long-standing problems, even deeply entrenched challenges like eradicating poverty and disease.

But there are also important risks, and humanity’s extinction is only the most radical. More intermediate risks include general societal problems due to lack of work, extreme wealth inequality, and unbalanced global power.

Given even the remote possibility of such catastrophic outcomes, why are some people so unwilling to consider them? Why do people’s attitudes toward AGI risk vary so widely? The main reason is that two forecasts get confused: One concerns whether AGI can be achieved in the foreseeable future; the other concerns its possible risks. These are two different questions, but many people conflate them: “This is not happening any time soon” becomes “AGI presents no risks.”

In contrast, many of us regard AGI as a real possibility within the next 100 years. In that case, unless we prepare ourselves for the challenge, AGI could present serious difficulties for humanity, the most extreme being extinction. Again, these worries might just be precautionary: We don’t know when AGI is coming or what its impact will be. But that is exactly why we need to investigate the matter: Assuming that nothing bad will happen is just negligent wishful thinking.•
