Bill Joy

“We need to put everything online,” Bill Joy tells Steven Levy in an excellent Backchannel interview, and I’m afraid that’s what we’re going to do. It’s an ominous statement in a mostly optimistic piece about the inventor’s advances in batteries, which could be a boon in creating clean energy.

Of course, Joy doesn’t mean his sentiment to be unnerving. He looks at sensors, cameras and computers achieving ubiquity as a means to help with the logistics of urban life. But in the wrong hands these tools are also fascistic, and eventually that’s where they’ll land. They can help the trains run on time, and they can also enable a Mussolini.

Progress and regress have always existed in the same moment, but these movements have been amplified as cheap, widely available tools have grown far more powerful in our time. So we have widespread governmental and corporate surveillance of citizens, while individuals and militias are armed with weapons more powerful than anything the local police possess. This seems to be where we’re headed in America: Everyone is armed in one way or another in a very dangerous game.

When Joy is questioned about the downsides of AI, he acknowledges, “I don’t know how to slow the thing down.” No one really seems to.

An excerpt:

Steven Levy:

In the 1990s you were promoting a technology called Jini that anticipated mobile tech and the Internet of Things. Does the current progress reflect what you were thinking all those years ago?

Bill Joy:

Exactly. I have some slides from 25 years ago where I said, “Everyone’s going to be carrying around mobile devices.” I said, “They’re all going to be interconnected. And there are 50 million cars and trucks a year, and those are going to be computerized.” Those are the big things on the internet, right?

Steven Levy:

What’s next?

Bill Joy:

We’re heading toward the kind of environment that David Gelernter talked about in his book, Mirror Worlds, when he said, “The city becomes a simulation of itself.” It’s not so interesting just to identify what’s out there statically. What you want to do is have some notion of how that affects things in the time domain. We need to put everything online, with all the sensors and other things providing information, so we can move from static granular models to real simulations. It’s one thing to look at a traffic map that shows where the traffic is green and red. But that’s actually backward-looking. A simulation would tell me where it’s going to be green and where it’s going to be red.

This is where AI fits in. If I’m looking at the world I have to have a model of what’s out there, whether it’s trained in a neural net or something else. Sure, I can image-recognize a child and a ball on this sidewalk. The important thing is to recognize that, in a given time domain, they may run into the street, right? We’re starting to get the computing power to do a great demo of this. Whether it all hangs together is a whole other thing.

Steven Levy:

Which one of the big companies will tie it together?

Bill Joy:

Google seems to be in the lead, because they’ve been hiring these kind of people for so long. And if there’s a difficult problem, Larry [Page, Google’s CEO] wants to solve it. Microsoft has also hired a lot of people, as well as Facebook and even Amazon. In these early days, this requires an enormous amount of computing power. Having a really, really big computer is kind of like a time warp, in that you can do things that aren’t economical now but will be economically [feasible] maybe a decade from now. Those large companies have the resources to give someone like Demis [Hassabis, head of Google’s DeepMind AI division] $100 million, or even $500 million a year, for computer time, to allow him to do things that maybe will be done by your cell phone 10 years later.

Steven Levy:

Where do you weigh in on the controversy about whether AI is a threat to humanity?

Bill Joy:

Funny, I wrote about that a long time ago.

Steven Levy:

Yes, in your essay “Why the Future Doesn’t Need Us.” But where are you now on that?

Bill Joy:

I think at this point the really dangerous nanotech is genetic, because it’s compatible with our biology and therefore it can be contagious. With CRISPR-Cas9 and variants thereof, we have a tool that’s almost shockingly powerful. But there are clearly ethical risks and danger in AI. I’m at a distance from this, working on the clean-tech stuff. I don’t know how to slow the thing down, so I decided to spend my time trying to create the things we need as opposed to preventing [what threatens us]. I’m not fundamentally a politician. I’m better at inventing stuff than lobbying.•

The Ted Kaczynski passage, encountered in Ray Kurzweil’s writing, that spurred Bill Joy to pen his famous 2000 Wired article, “Why the Future Doesn’t Need Us,” in which he worried about a utopia that seemed to him dystopian:

THE NEW LUDDITE CHALLENGE

“First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite – just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes ‘treatment’ to cure his ‘problem.’ Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them ‘sublimate’ their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.”

From “How to Make Almost Anything,” Neil Gershenfeld’s new Foreign Affairs piece about the coming revolution of 3-D printers, machines that can replicate objects and even themselves:

“Are there dangers to this sort of technology? In 1986, the engineer Eric Drexler, whose doctoral thesis at MIT was the first in molecular nanotechnology, wrote about what he called ‘gray goo,’ a doomsday scenario in which a self-reproducing system multiplies out of control, spreads over the earth, and consumes all its resources. In 2000, Bill Joy, a computing pioneer, wrote in Wired magazine about the threat of extremists building self-reproducing weapons of mass destruction. He concluded that there are some areas of research that humans should not pursue. In 2003, a worried Prince Charles asked the Royal Society, the United Kingdom’s fellowship of eminent scientists, to assess the risks of nanotechnology and self-replicating systems.

Although alarming, Drexler’s scenario does not apply to the self-reproducing assemblers that are now under development: these require an external source of power and the input of nonnatural materials. Although biological warfare is a serious concern, it is not a new one; there has been an arms race in biology going on since the dawn of evolution.

A more immediate threat is that digital fabrication could be used to produce weapons of individual destruction. An amateur gunsmith has already used a 3-D printer to make the lower receiver of a semiautomatic rifle, the AR-15. This heavily regulated part holds the bullets and carries the gun’s serial number. A German hacker made 3-D copies of tightly controlled police handcuff keys. Two of my own students, Will Langford and Matt Keeter, made master keys, without access to the originals, for luggage padlocks approved by the U.S. Transportation Security Administration. They x-rayed the locks with a CT scanner in our lab, used the data to build a 3-D computer model of the locks, worked out what the master key was, and then produced working keys with three different processes: numerically controlled milling, 3-D printing, and molding and casting.

These kinds of anecdotes have led to calls to regulate 3-D printers.” (Thanks Browser.)

Computer scientist Bill Joy despised the violence of the Unabomber, as any sane person would, so he felt great disquiet when he read a passage written by Ted Kaczynski and found himself agreeing with the domestic terrorist’s concerns for the future of humankind. In his famous 2000 Wired article, “Why the Future Doesn’t Need Us,” Joy meditates on the unease caused by his sympathy for the ideas of a madman. An excerpt:

“Part of the answer certainly lies in our attitude toward the new – in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies – robotics, genetic engineering, and nanotechnology – pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control.

Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.”
