Jaan Tallinn

Ted Greenwald of the Wall Street Journal presents a sober, clear-headed assessment of the threats posed to us by both Weak AI and Strong AI, with the help of Skype co-founder Jaan Tallinn, IBM cognitive-computing expert Guruduth S. Banavar and computer science professor Francesca Rossi. One exchange:

WSJ:

Some experts believe that AI is already taking jobs away from people. Do you agree?

Jaan Tallinn:

Technology has always had the tendency to make jobs obsolete. I’m reminded of an Uber driver whose services I used a while ago. His seat was surrounded by numerous gadgets, and he demonstrated enthusiastically how he could dictate my destination address to a tablet and receive driving instructions. I pointed out to him that, in a few years, maybe the gadgets themselves would do the driving. To which he gleefully replied that then he could sit back and relax—leaving me to quietly shake my head in the back seat. I do believe the main effect of self-driving cars will come not from their convenience but from the massive impact they will have on the job market.

In the long run, we should think about how to organize society around something other than near-universal employment.•

The next thousand years or so are sort of important for human beings. At the conclusion of that time period, if we survive, there will probably only be vestigial elements remaining of who we are today, but we will have created the next life forms. And I do mean create, as genetic engineering, nanotechnology and space colonization will put evolution in our hands.

Or we could all die. Climate change, plague, asteroid impact, superintelligence or some other calamity could wipe out the lot of us before we have the opportunity to spread out among the stars. One person who’s working on this global-scale risk management is Jaan Tallinn, a Skype founder who co-created the Centre for the Study of Existential Risk at Cambridge. At Edge.org, he’s interviewed about his work in this area, which might seem marginal to some but is chasing our biggest ghosts. An excerpt:

Over the last six years or so there has been an interesting evolution of the existential risk arguments and of the perception of those arguments. While it is true, especially in the beginning, that these kinds of arguments tend to attract cranks, there is an important scientific argument there, which is basically saying that technology is getting more and more powerful. Technology is neutral. The only reason why we see technology being good is that there is a feedback mechanism between technology and the market: if you develop technology that’s aligned with human values, the market rewards you. However, once technology gets more and more powerful, or if it’s developed outside of a market context, for example in the military, then you cannot automatically rely on this market mechanism to steer the course of technology. You have to think ahead. This is a general argument that applies to synthetic biology, artificial intelligence, nanotechnology, and so on.

One good example is the report LA-602, which was produced by the Manhattan Project about six months before the first nuclear test. It was a scientific analysis of the chances of creating a runaway process in the atmosphere that would burn up the atmosphere and thus destroy the earth. It’s the first solid example of existential risk research that humanity has done.

Really, what I am trying to advance is more reports like that. Nuclear technology is not the last potentially disastrous technology that humans are going to invent. In my view, it’s very, very dangerous when people say, “Oh, these people are cranks.” You’re basically lumping together those Manhattan Project scientists, who did solid scientific analysis that was clearly beneficial for humanity, with people who are just clearly crazy and are predicting the end of the world for no reason at all.

It’s too early to tell right now what kind of societal structures we need to contain these technologies once the market mechanism is no longer powerful enough to do so. At this stage, we need more research. There’s a research agenda coming out pretty soon that represents a consensus between the AI safety community and the AI research community: research that is not necessarily commercially motivated, but that needs to be done if you want to steer the course, if you want to make sure that the technology is beneficial in the sense that it’s aligned with human values, and thus gives us a better future, the way we think the future should be. The AI should also be robust in the sense that it wouldn’t accidentally create situations where, even though we developed it with the best intentions, it would still veer off course and give us a disaster.

There are several technological existential risks. An example was nuclear weapons before the first nuclear test was done: it wasn’t clear whether this was something safe to do on this planet or not. Similarly, as we get more and more powerful technology, we want to think about the potentially catastrophic side effects. It’s fairly easy for everyone to imagine that once we get synthetic biology, it becomes much easier to construct organisms or viruses that might be much more robust against human defenses.

I was just talking about technological existential risks in general. One of those technological existential risks could potentially be artificial intelligence.•
