Jerry Kaplan


The Economist has a good, if brief, review of three recent titles about Artificial Intelligence and what it means for humans: John Markoff’s Machines of Loving Grace, Pedro Domingos’ The Master Algorithm, and Jerry Kaplan’s Humans Need Not Apply.

I quote the opening of the piece below because I think it gets at an error in judgment some people make about technological progress, with regard to both Weak AI and Strong AI. There’s the idea that humans are in charge and can regulate machine progress, igniting and controlling it as we do fire. I don’t believe that’s ultimately so, even if it’s our goal.

Such decisions aren’t made in cool, sober ways inside a vacuum but in a messy world full of competition and differing priorities. If the United States decided to ban robots or gene editing but China used them and prospered from their use, we would have to enter the race as well. It’s similar to how America was a largely non-militaristic country before WWII but since then has been armed to the teeth.

The only thing that halts technological progress is a lack of knowledge. Once attained, it will be used because that makes us feel clever and proud. And it gives us a sense of safety, even when it makes things more dangerous. That’s human nature as applied to Artificial Intelligence.

An excerpt:

ARTIFICIAL INTELLIGENCE (AI) is quietly everywhere, powering Google’s search engine, Amazon’s recommendations and Facebook’s facial recognition. It is how post offices decipher handwriting and banks read cheques. But several books in recent years have spewed fire and brimstone, claiming that algorithms are poised to obliterate white-collar knowledge-work in the 21st century, just as automation displaced blue-collar manufacturing work in the 20th. Some people go further, arguing that artificial intelligence threatens the human race. Elon Musk, an American entrepreneur, says that developing the technology is “summoning the demon.”

Now several new books serve as replies. In Machines of Loving Grace, John Markoff of the New York Times focuses on whether researchers should build true artificial intelligence that replaces people, or aim for “intelligence augmentation” (IA), in which the computers make people more effective. This tension has been there from the start. In the 1960s, at one bit of Stanford University, John McCarthy, a pioneer of the field, was gunning for AI (which he had named in 1955), while across campus Douglas Engelbart, the inventor of the computer mouse, aimed at IA. Today, some Google engineers try to improve search engines so that people can find information better, while others develop self-driving cars to eliminate drivers altogether.•


There is a fascinating premise underpinning Steven Levy’s Backchannel interview with Jerry Kaplan, provocatively titled “Can You Rape a Robot?”: AI won’t need to become conscious for us to treat it as such, or for the new machines to require a very evolved sense of morality. Kaplan, the author of Humans Need Not Apply, believes that autonomous machines will be granted agency if they can merely mirror our behaviors. Simulacra at an advanced level will be enough. The author thinks AI can vastly improve the world, but only if we’re careful to make morality part of the programming.

An exchange:

Steven Levy:

Well by the end of your book, you’re pretty much saying we will have robot overlords — call them “mechanical minders.”

Jerry Kaplan:

It is plausible that certain things can [happen]… the consequences are very real. Allowing robots to own assets has severe consequences and I stand by that and I will back it up. Do I have the thing about your daughter marrying a robot in there?

Steven Levy:

No.

Jerry Kaplan:

That’s a different book. [Kaplan has a sequel ready.] I’m out in the far future here, but it’s plausible that people will have a different attitude about these things because it’s very difficult to not have an emotional reaction to these things. As they become more a part of our lives people may very well start to inappropriately imbue them with certain points of view.•


Jerry Kaplan, author of Humans Need Not Apply, thinks technology may make warfare safer (well, relatively). Perhaps, but that’s not the goal of all combatants. He uses the landmine as an example, arguing that a “smarter” explosive could be made to detonate only if enemy military personnel happened across it. But any nation or rogue state using landmines does so precisely because of the terror that transcends the usual rules of engagement. They would want to use new tools to escalate that threat. The internationally sanctioned standards Kaplan hopes we attain will likely never be truly universal. As the implements of war grow cheaper, smaller and harder to control, that issue becomes more ominous.

In theory, robotized weapons could make war less lethal or far more so, but that will depend on the intentions of the users, and both scenarios will probably play out. 

From Kaplan in the New York Times:

Consider the lowly land mine. Those horrific and indiscriminate weapons detonate when stepped on, causing injury, death or damage to anyone or anything that happens upon them. They make a simple-minded “decision” whether to detonate by sensing their environment — and often continue to do so, long after the fighting has stopped.

Now imagine such a weapon enhanced by an A.I. technology less sophisticated than what is found in most smartphones. An inexpensive camera, in conjunction with other sensors, could discriminate among adults, children and animals; observe whether a person in its vicinity is wearing a uniform or carrying a weapon; or target only military vehicles, instead of civilian cars.

This would be a substantial improvement over the current state of the art, yet such a device would qualify as an offensive autonomous weapon of the sort the open letter proposes to ban.

Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.•


Robots may relieve us of much of the work currently monopolizing our time, which sounds great. I mean, life is too short. Unfortunately, the U.S. and many other patches on the globe don’t have economic systems capable of supporting a populace in which near-total employment isn’t the goal. Martin Ford, Andrew McAfee, and Erik Brynjolfsson have written that the future is arriving too quickly and, unlike in the ’50s and ’60s, automation leading to massive technological unemployment is a real possibility.

Add computer scientist and entrepreneur Jerry Kaplan, author of Humans Need Not Apply, to that list. In a lively Ask Me Anything at Reddit, Kaplan lays out his argument that a scary storm is gathering. A few exchanges follow.

____________________________

Question:

Do you feel like people are too fearful of artificial intelligence?

Jerry Kaplan:

The problem is that they are fearing the wrong thing. The robot apocalypse will be economic, not ‘military’!

____________________________

Question:

What is the minimum wage of an average robot? How cost-effective are they (R&D+Maintenance+Hydro etc…/X hrs. wk.)?

Jerry Kaplan:

Ha, interesting way to put the question. You don’t “pay” robots, of course. They are simply machines, like any others, so the question is whether the machine can perform some task in an economically advantageous way. This is a simple buy-vs.-hire decision in most cases.

In my experience, it’s almost always better to use the machines, if you can afford it. Go forth and automate, my children!

____________________________

Question:

With the growing increase of machines taking over manual jobs do you feel that the workplace will be made up almost entirely of machines and people will then become less focused on work and more on leisure?

Jerry Kaplan:

What counts as work has shifted over the past centuries. What we do now would be considered optional “leisure” during the agrarian economy 200 years ago. They would think that our farms are made up almost entirely of machines today, and would wonder why on earth we aren’t living more simply and just enjoying ourselves!

But the desire to work is human nature. I think it’s a myth that most people just want to goof off and have fun … they’d rather work and own a fancier car!

____________________________

Question:

How far do you think we are away from living in a world with a ton of AI in day to day life?

Jerry Kaplan:

You already are, you just don’t realize it. (Read my book and it will really scare you about what’s going on!)

Amazon, for one, is little more than a giant machine learning algorithm that arbitrages purchase and sale transactions. It watches your every move and decides exactly what is necessary to get you to buy. That’s why you see weird changes in the prices of things in your Cart, just for starters.

The ads you see online are another amazing example of how AI crafts things to get you to act in other people’s interests! I detail this in my book, it’s really unbelievable what happens when you load a web page, as AIs research everything about you in milliseconds, then an auction is performed, and the highest bidder gets to show their ad.

____________________________

Question:

Do you support Basic Income?

What are the machine-replaced workers supposed to do to feed their families?

Jerry Kaplan:

Basic income is a good thing; it will spur innovation. In principle machines make society wealthier — the question is who gets the wealth. We need to ensure that new wealth is distributed more fairly.

Food used to consume more than 50% of the average worker’s income. Now it’s under 10%. That’s real progress!

____________________________

Question:

Does the amount of money that the military invests in AI scare you or excite you?

Jerry Kaplan:

Well the military invests in AI for two reasons:

(1) To ensure that we have a ‘reserve’ of new technology that can both benefit society and is available in times of military threat.

(2) So we have the biggest bat in the league.

The challenge now is to achieve these two goals without bankrupting society or spurring continual arms races. Unfortunately this doesn’t lend itself to simple sound-bite answers. The military types I talk to (and I do have friends in DARPA, among other places) are not war-mongers at all; quite the contrary, they want to try to keep us safe with minimum damage to life and property. We don’t always get this balance right, but it’s a hard (and mostly thankless) job.

____________________________

Question:

What makes a futurist? Are there specific credentials and methods?

Jerry Kaplan:

Nope – you just have to believe your own nonsense and talk about it persuasively, as if you were on Fox News.

Just get yourself a crystal ball and one of those weird turbans. LSD works well too (or so I hear?).

Seriously, it’s a ball. Give it a try.•


Two videos about the changing nature of work in America.

The first is a PBS Newshour report by Paul Solman about technological unemployment, featuring some very dire predictions by Humans Need Not Apply author Jerry Kaplan, who believes robot caddies and delivery bots will lead to displaced, starving workers who’ll die in the streets if serious measures aren’t taken. Well, that could happen, though there’s no reason it must.

The second is a Financial Times piece by Anna Nicolaou about the coworking startup WeWork, which leases monthly space to telecommuters who long to tether–a “capitalist kibbutz,” as it’s called by founder Adam Neumann. My reaction is that the company seems especially vulnerable to a bad financial downturn, but I would bet Neumann would argue the reverse: that short-term leases would be more attractive at such a time. At any rate, it’s an interesting look into the dynamic of the modern office space.


Like most Atlantic readers, I go to the site for the nonstop Shell ads but stay for the articles. 

Jerry Kaplan, author of Humans Need Not Apply, has written a piece for the publication which argues that women will fare much better than men if technological unemployment becomes widespread and entrenched, the gender biases among jobs and careers favoring them. I half agree with him. 

Take, for instance, his argument that autonomous cars will decimate America’s three million truck drivers (overwhelmingly men) but not disrupt the nation’s three million secretaries (overwhelmingly women). That’s not exactly right. The trucking industry, when you account for support work, is estimated to provide eight million jobs, including secretarial positions. Truckers spend cash at diners and coffee shops and such, providing jobs that are still more often filled by women. And just because autonomous trucks won’t eliminate secretarial positions, that doesn’t mean other technologies won’t. The effort to displace office-support staff has been a serious goal for at least four decades, and the technology is probably ready to do so now.

This, of course, also doesn’t account for the many women who’ve entered into white-collar professions long dominated by men, many of which are under threat. But I think Kaplan is correct in saying that the middle-class American male is a particularly endangered species if this new reality takes hold, and there won’t likely be any organic solution coming from within our current economic arrangement.

Kaplan’s opening:

Many economists and technologists believe the world is on the brink of a new industrial revolution, in which advances in the field of artificial intelligence will obsolete human labor at an unforgiving pace. Two Oxford researchers recently analyzed the skills required for more than 700 different occupations to determine how many of them would be susceptible to automation in the near future, and the news was not good: They concluded that machines are likely to take over 47 percent of today’s jobs within a few decades.

This is a dire prediction, but one whose consequences will not fall upon society evenly. A close look at the data reveals a surprising pattern: The jobs performed primarily by women are relatively safe, while those typically performed by men are at risk.

It should come as no surprise that despite progress on equality in the labor force, many common professions exhibit a high degree of gender bias. For instance, of the 3 million truck drivers in the U.S., more than 95 percent are men; of the nearly 3 million secretaries and administrative assistants, more than 95 percent are women. Autonomous vehicles are a not-too-distant possibility, and when they arrive, those drivers’ jobs will evaporate; office-support workers suffer no such imminent threat.•



In Jerry Kaplan’s excellent WSJ essay about ethical robots, which is adapted from his forthcoming book, Humans Need Not Apply, the author demonstrates it will be difficult to come up with consistent standards for our silicon sisters, and even if we do, machines following rules 100% of the time will not make for a perfect world. The opening:

As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?

Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.

How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?•

