John Markoff


America desperately needs to win the race in AI, robotics, driverless cars, supercomputers, solar and other next-level sectors if the nation is to maintain its place in the world. If a powerful and wealthy democracy were to invest wisely and boldly, it would have a great advantage in such competitions with an autocracy like China. Unfortunately, we’ve never had a government less equipped or less willing to pull off this feat. Trump wants to make coal great again, and AI doesn’t even register on Mnuchin’s radar.

If the U.S. and the European states lose in these areas to China, infamous only a decade ago for its knockoff Apple Stores, that nation’s technological might and soft power will increase, further imperiling liberty.

The opening of a New York Times piece by Paul Mozur and John Markoff:

HONG KONG — Soren Schwertfeger finished his postdoctorate research on autonomous robots in Germany, and seemed set to go to Europe or the US, where artificial intelligence was pioneered and established.

Instead, he went to China.

“You couldn’t have started a lab like mine elsewhere,” Schwertfeger said.

The balance of power in technology is shifting. China, which for years watched enviously as the west invented the software and the chips powering today’s digital age, has become a major player in artificial intelligence, what some think may be the most important technology of the future. Experts widely believe China is only a step behind the US.

China’s ambitions mingle the most far-out sci-fi ideas with the needs of an authoritarian state: Philip K Dick meets George Orwell. There are plans to use it to predict crimes, lend money, track people on the country’s ubiquitous closed-circuit cameras, alleviate traffic jams, create self-guided missiles and censor the internet.

Beijing is backing its artificial intelligence push with vast sums of money. Having already spent billions on research programs, China is readying a new multibillion-dollar initiative to fund moonshot projects, start-ups and academic research, all with the aim of growing China’s A.I. capabilities, according to two professors who consulted with the government on the plan.•



Below are four unsettling if not unexpected paragraphs from an excellent report by Matthew Rosenberg and John Markoff of the New York Times about the American military’s transition from nuclear secrets to software code, as the billions being spent to place us on the bleeding edge of AI warfare enable weapons systems with automated capacity. A human is said to remain in the loop at all times, the machines unable to make their own decisions, but as other nations catch up in Artificial Intelligence as they have in traditional battle networks, will rational decisions still rule the day among numerous states with differing priorities, especially once fleets of such weapons become relatively cheap and widely available?

For now, freestyle chess, which teams humans and computers, is the Department of Defense’s model, a strategy it terms “centaur warfighting.” The future is far cloudier. As the journalists write, “the debate within the military is no longer about whether to build autonomous weapons but how much independence to give them.”

An excerpt:

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.

The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans.

“If Stanley Kubrick directed Dr. Strangelove again, it would be about the issue of autonomous weapons,” said Michael Schrage, a research fellow at the Massachusetts Institute of Technology Sloan School of Management.

Defense officials say the weapons are needed for the United States to maintain its military edge over China, Russia and other rivals, who are also pouring money into similar research (as are allies, such as Britain and Israel). The Pentagon’s latest budget outlined $18 billion to be spent over three years on technologies that included those needed for autonomous weapons.•


Allan Pinkerton had an idea in 1857: Why not use the new technology of photography to create a pictorial file of repeat offenders causing the majority of the crime? That way the police could become familiar with broken noses and twisted smiles, making it easier to round up the usual suspects. The word “database” wouldn’t be coined for another century, but that’s essentially what the Rogues’ Gallery was. The subjects may have been uncooperative at times, but one way or another they were made to pose.

Now we’re all rogues, or at least suspected of such behavior. Here are the opening two sentences of a recent Ars Technica article by David Kravets:

Half of American adults are in a face-recognition database, according to a Georgetown University study released Tuesday. That means there’s about 117 million adults in a law enforcement facial-recognition database, the study by Georgetown’s Center on Privacy & Technology says.
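
That claim is easy to sanity-check. A back-of-the-envelope sketch (the roughly 245 million figure for the U.S. adult population is my own outside number, not the study’s):

```python
# Rough check on the Georgetown study's headline numbers.
in_database = 117_000_000  # adults in a law-enforcement face-recognition database
us_adults = 245_000_000    # approximate US adult population, mid-2010s (my assumption)

print(f"{in_database / us_adults:.0%} of American adults")  # ~48%, i.e., about half
```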

While such files, an invitation for us all to be pre-criminalized, can be viewed as a police invasion of privacy, they also pose another problem: as tools improve, these images may be used to steal identities. And the threat isn’t limited to faces–our voices may also be stolen right out of the air.

From John Markoff at the New York Times:

Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.

Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her.

It is actually a computer-synthesized voice, a tour-de-force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.

Such a situation is still science fiction — but just barely. It is also the future of crime.

The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”

The irony, of course, is that this year the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security.

But there is a downside.

“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of Future Crimes.


Mere consumerism can’t explain the whole of innovation, as I’ve mentioned. Cool gadgets you can slide into your pocket are good things, but they’re not everything. 

Silicon Valley’s investment in smartphones and social media is tapering as Artificial Intelligence, from driverless cars to robot workers, urged on by Deep Learning, comes into fashion, writes John Markoff of the New York Times. Is it all a bubble? Probably somewhat, but a lot of actual foundation can be laid during such a time of exuberance.

In Markoff’s book Machines of Loving Grace, he held that we could make a conscious choice between A.I. and Intelligence Augmentation, but when different companies and countries are competing with so much on the line, such questions often answer themselves.

An excerpt:

In the most recent shift, the A.I. idea emerged first in Canada in the work of cognitive scientists and computer scientists like Geoffrey Hinton, Yoshua Bengio and Yann LeCun during the previous decade. The three helped pioneer a new approach to deep learning, a machine learning method that is highly effective for pattern recognition challenges like vision and speech. Modeled on a general understanding of how the human brain works, it has helped technologists make rapid progress in a wide range of A.I. fields.

How far the A.I. boom will go is hotly debated. For some technologists, today’s technical advances are laying the groundwork for truly brilliant machines that will soon have human-level intelligence.

Yet Silicon Valley has faced false starts with A.I. before. During the 1980s, an earlier generation of entrepreneurs also believed that artificial intelligence was the wave of the future, leading to a flurry of start-ups. Their products offered little business value at the time, and so the commercial enthusiasm ended in disappointment, leading to a period now referred to as the “A.I. Winter.”

The current resurgence will not fall short this time, said several investors, who believe that the economic potential in terms of new efficiency and new applications is strong.

“There is no chance of a new winter,” said Shivon Zilis, an investor at Bloomberg Beta who specializes in machine intelligence start-ups.•
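
For readers who want a sense of what the “deep learning” in that excerpt means mechanically, it is, at bottom, stacked layers of weighted sums and nonlinearities whose weights are nudged to reduce error on examples. A toy sketch, assuming only NumPy, nothing like the production systems Markoff describes:

```python
import numpy as np

# Toy "deep" network: two stacked layers learn XOR, a pattern no
# single linear layer can capture. Illustration only -- real deep
# learning systems are vastly larger, but the mechanics are the same.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # layer 1: weighted sum + nonlinearity
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # layer 2: sigmoid output
    g_out = p - y                         # gradient of cross-entropy loss
    g_h = g_out @ W2.T * (1 - h ** 2)     # backpropagate through tanh
    W2 -= 0.1 * (h.T @ g_out); b2 -= 0.1 * g_out.sum(axis=0)
    W1 -= 0.1 * (X.T @ g_h);   b1 -= 0.1 * g_h.sum(axis=0)

print(p.round(2).ravel())  # converges toward [0, 1, 1, 0]
```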


Just one last thing I wanted to mention about John Markoff’s Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, which I read earlier this year and enjoyed, even though I have a sharp disagreement with the book’s underlying principle.

The writer is concerned that as Artificial Intelligence and Intelligence Augmentation battle for our research dollars, we may ultimately head down a path that sees humans replaced rather than fortified. It’s noble that Markoff wants us to question the technologists of today about tomorrow’s machines, but believing we can coolly and soberly choose between these two outcomes seems far-fetched to me. Humans consistently make perplexing choices, as exemplified by our glacial transition away from fossil fuels even though the large majority of us accept that their use could doom us.

Three points:

  1. Competition for machine dominance doesn’t occur in a vacuum; the race for the future will occur within companies and among companies, within countries and among countries. If China or the U.S. or some other state develops an A.I. that would give it a sizable edge economically or militarily, other players would try to replicate it.
  2. You can’t discount the human need to discover answers, to work a puzzle to completion, even one that results in an endgame for us. In our search for greater intelligence, it’s possible we’re clever enough to finish ourselves. Humans are commanded by many non-rational forces.
  3. Negatives aren’t always known at the outset. When the internal-combustion engine made electric- and steam-powered vehicles obsolete, nobody thought a remarkably useful conveyance powered by fossil fuels might someday doom humanity. We won’t always foresee the unintended consequences of working on AI and IA.

Right to the book’s end, Markoff maintains these decisions will be conscious ones, though a late passage asks a confounding question that (somewhat) undermines his theory. The excerpt:

In 2013, when Google acquired DeepMind, a British artificial intelligence firm that specializes in machine learning, popular belief held that roboticists were close to building completely autonomous robots. The tiny start-up had produced a demonstration that showed its software playing video games, in some cases better than human players. Reports of the acquisition were also accompanied by the claim that Google would set up an “ethics panel” because of concerns about potential uses and abuses of the technology. Shane Legg, one of the cofounders of DeepMind, acknowledged that the technology would ultimately have dark consequences for the human race. “Eventually, I think human extinction will probably occur, and technology will likely play a part in this.” For an artificial intelligence researcher who had just reaped hundreds of millions of dollars, it was an odd position to take. If someone believes that technology will likely evolve to destroy humankind, what could motivate them to continue developing that same technology?•


Speaking of psychodrama, the theatrical therapy is mentioned briefly in John Markoff’s Machines of Loving Grace, in a passage about the nascent career of roboticist Rodney Brooks, who became widely known through the Errol Morris documentary Fast, Cheap & Out of Control. Even though the connection in this case is glancing, it’s a good metaphor for how low- and high-tech attempts to understand consciousness overlap historically.

The passage:

Hans Moravec, an eccentric young graduate student, was camping in the attic of SAIL, while working on the Stanford Cart, an early four-wheeled mobile robot. A sauna had been installed in the basement, and psychodrama groups shared a lab space in the evenings. Available computer terminals displayed the message “Take me, I’m yours.” “The Prancing Pony”–a fictional wayfarer’s inn in Tolkien’s Lord of the Rings–was a mainframe-connected vending machine selling food suitable for discerning hackers. Visitors were greeted in a small lobby decorated with an ungainly “You Are Here” mural echoing the famous Saul Steinberg New Yorker cover depicting a relativistic view of the most important place in the United States. The SAIL map was based on a simple view of the laboratory and the Stanford campus, but lots of people had added their own perspectives to the map, ranging from placing the visitor at the center of the human brain to placing the laboratory near an obscure star somewhere out on the arm of an average-sized spiral galaxy.

It provided a captivating welcome for Rodney Brooks, another new Stanford graduate student. A math prodigy from Adelaide, Australia, raised by working-class parents, Brooks had grown up far from the can-do hacker culture in the United States. However, in 1969–along with millions of others around the world–he saw Kubrick’s 2001: A Space Odyssey. Like Jerry Kaplan, Brooks was not inspired to train like an astronaut but was instead seduced by HAL, the paranoid (or perhaps justifiably suspicious) AI.

Brooks puzzled about how he might create his own AI, and arriving at college, he had his first opportunity. On Sundays he had solo access to the school’s mainframe for the entire day. There, he created his own AI-oriented programming language and designed an interactive interface on the mainframe display. Brooks now went to writing theorem proofs, thus unwittingly working in the formal, McCarthy-inspired artificial intelligence tradition. Building an artificial intelligence was what he wanted to do with his life.•


John Markoff doesn’t think technology is prone to the work of a blind watchmaker, but I’m not so sure. It would be great if rational thinking governed this area, but technology seems to pull us as much as we push it. Competition, contrasting priorities and simple curiosity can drive us in directions that may not be best for us, even if they are best for progress in a larger sense. The progress of intelligence, I mean. We’re not moths to a flame, but it’s difficult for a mere human being to look away from an inferno.

In his latest New York Times article, Markoff argues that superintelligence is not upon us, that most if not all of us will not live to see the Singularity. On this point, I agree. Perhaps there’ll emerge a clever workaround that allows Moore’s Law to continue apace, but I don’t think that guarantees superintelligence in a few decades. Anyone alive in 2016 who’s planning their day around conscious machines or radical life extension, twin dreams of the Singularitarians, will likely wind up sorely disappointed.

An excerpt:

Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.

What has not been shown, however, is scientific evidence for such an event. Indeed, the idea has been treated more skeptically by neuroscientists and a vast majority of artificial intelligence researchers.

For starters, biologists acknowledge that the basic mechanisms for biological intelligence are still not completely understood, and as a result there is not a good model of human intelligence for computers to simulate.

Indeed, the field of artificial intelligence has a long history of over-promising and under-delivering. John McCarthy, the mathematician and computer scientist who coined the term artificial intelligence, told his Pentagon funders in the early 1960s that building a machine with human levels of intelligence would take just a decade. Even earlier, in 1958 The New York Times reported that the Navy was planning to build a “thinking machine” based on the neural network research of the psychologist Frank Rosenblatt. The article forecast that it would take about a year to build the machine and cost about $100,000.

The notion of the Singularity is predicated on Moore’s Law, the 1965 observation by the Intel co-founder Gordon Moore, that the number of transistors that can be etched onto a sliver of silicon doubles at roughly two year intervals. This has fostered the notion of exponential change, in which technology advances slowly at first and then with increasing rapidity with each succeeding technological generation.

At this stage Moore’s Law seems to be on the verge of stalling.•
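
The doubling cadence in that excerpt compounds ferociously, which is why its stalling matters so much. A minimal sketch of the arithmetic (the Intel 4004’s roughly 2,300 transistors is my reference point, not Markoff’s):

```python
# Moore's Law as stated above: transistor counts double roughly
# every two years.
def transistors(initial, years, doubling_period=2):
    return initial * 2 ** (years / doubling_period)

# Project the 1971 Intel 4004 (~2,300 transistors) forward 45 years.
print(f"{transistors(2_300, 45):,.0f}")
# ~13.6 billion -- in the neighborhood of the largest chips of 2016,
# which is why the observation held up for half a century.
```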


Adrienne LaFrance’s Atlantic article “What Is a Robot?” is one of my favorite pieces thus far in 2016. As the title suggests, the writer tries to define what qualities earn a machine the name “robot,” a term perhaps not as slippery as “existential” but one that’s nebulous nonetheless. The piece does much more, presenting a survey of robotics from ancient to contemporary times and asking many questions about where the sector’s current boom may be leading us.

Two points about the article:

  • It quotes numerous roboticists and those analyzing the field who hold the opinion that a robot must be encased, embodied. I think this is a dangerous position. A robot to me is anything that is given instructions and then completes a task. It’s increasingly coming to mean anything that can receive those basic instructions and then grow and learn on its own, not requiring more input. I don’t think it matters if that machine has an anthropomorphic body like C-3PO or if it’s completely invisible. If we spend too much time counting fingers and toes, we may miss the bigger picture.
  • Early on, there’s discussion about the master-slave relationship humans now enjoy with their machines, which will only increase in the short term–and may eventually be flipped. The following paragraph speaks to this dynamic: “In the philosopher Georg Wilhelm Friedrich Hegel’s 1807 opus, The Phenomenology of Spirit, there is a passage known as the master-slave dialectic. In it, Hegel argues, among other things, that holding a slave ultimately dehumanizes the master. And though he could not have known it at the time, Hegel was describing our world, too, and aspects of the human relationship with robots.” I believe this statement is true should machines gain consciousness, but it will remain a little hyperbolic as long as they’re not. Holding sway over Weak AI that does our bidding certainly changes the meaning of us and will present dicey ethical questions, but they are very different ones than those provoked by actual slavery. Further, the human mission being altered doesn’t necessarily mean we’re being degraded.

From LaFrance:

Making robots appear innocuous is a way of reinforcing the sense that humans are in control—but, as Richards and Smart explain, it’s also a path toward losing it. Which is why so many roboticists say it’s ultimately not important to focus on what a robot is. (Nevertheless, Richards and Smart propose a useful definition: “A robot is a constructed system that displays both physical and mental agency, but is not alive in the biological sense.”)

“I don’t think it really matters if you get the words right,” said Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon. “To me, the most important distinction is whether a technology is designed primarily to be autonomous. To really take care of itself without much guidance from anybody else… The second question—of whether this thing, whatever it is, happens to have legs or eyes or a body—is less important.”

What matters, in other words, is who is in control—and how well humans understand that autonomy occurs along a gradient. Increasingly, people are turning over everyday tasks to machines without necessarily realizing it. “People who are between 20 and 35, basically they’re surrounded by a soup of algorithms telling them everything from where to get Korean barbecue to who to date,” Markoff told me. “That’s a very subtle form of shifting control. It’s sort of soft fascism in a way, all watched over by these machines of loving grace. Why should we trust them to work in our interest? Are they working in our interest? No one thinks about that.”

“A society-wide discussion about autonomy is essential,” he added.•


The Deep Learning defeat of a human Go champion has coincided with AI research booming in the U.S. like never before, with millions being thrown at freshly minted Ph.D.s in the field and small startups announcing the very immodest goal of “capturing all human knowledge.” Everyone is making a big bet on the sector’s future, and, of course, almost all will go bust. The competition, however, will lead to progress. 

The opening of a NYT article by John Markoff and Steve Lohr:

SAN FRANCISCO — The resounding win by a Google artificial intelligence program over a champion in the complex board game Go this month was a statement — not so much to professional game players as to Google’s competitors.

Many of the tech industry’s biggest companies, like Amazon, Google, IBM and Microsoft, are jockeying to become the go-to company for A.I. In the industry’s lingo, the companies are engaged in a “platform war.”

A platform, in technology, is essentially a piece of software that other companies build on and that consumers cannot do without. Become the platform and huge profits will follow. Microsoft dominated personal computers because its Windows software became the center of the consumer software world. Google has come to dominate the Internet through its ubiquitous search bar.

If true believers in A.I. are correct that this long-promised technology is ready for the mainstream, the company that controls A.I. could steer the tech industry for years to come.

“Whoever wins this race will dominate the next stage of the information age,” said Pedro Domingos, a machine learning specialist and the author of The Master Algorithm, a 2015 book that contends that A.I. and big-data technology will remake the world.•


The psychologist Gary Marcus urged caution when Google AI recently defeated a good, but not champion, Go player. Most of the qualifications still pertain, but DeepMind just deep-sixed Lee Se-dol, one of the world’s best players. The human competitor found the psychological component of the game strangely absent, even disconcerting. “It’s like playing the game alone,” he said.

Below is the opening of the New York Times report on the match, followed by a look back at Garry Kasparov’s 1997 loss to Deep Blue.

____________________________

From Choe and Markoff:

SEOUL, South Korea — Computer, one. Human, zero.

A Google computer program stunned one of the world’s top players on Wednesday in a round of Go, which is believed to be the most complex board game ever created.

The match — between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol — was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans.

“I am very surprised because I have never thought I would lose,” Mr. Lee said at a news conference in Seoul. “I didn’t know that AlphaGo would play such a perfect Go.”

Mr. Lee acknowledged defeat after three and a half hours of play.

Demis Hassabis, the founder and chief executive of Google’s artificial intelligence team DeepMind, the creator of AlphaGo, called the program’s victory a “historic moment.”•

____________________________


Garry Kasparov held off machines, but only for so long. He defeated Deep Thought in 1989 and believed a computer could never best him. But by 1997 Deep Blue turned him–and humanity–into an also-ran in some key ways. The chess master couldn’t believe it at first–he assumed his opponent was manipulated by humans behind the scenes, like the Mechanical Turk, the faux chess-playing machine from the 18th century. But no sleight of hand was needed.

Below are the openings of three Bruce Weber New York Times articles written during the Kasparov-Deep Blue matchup which chart the rise of the machines.

Responding to defeat with the pride and tenacity of a champion, the I.B.M. computer Deep Blue drew even yesterday in its match against Garry Kasparov, the world’s best human chess player, winning the second of their six games and stunning many chess experts with its strategy.

Joel Benjamin, the grandmaster who works with the Deep Blue team, declared breathlessly: “This was not a computer-style game. This was real chess!”

He was seconded by others.

“Nice style!” said Susan Polgar, the women’s world champion. “Really impressive. The computer played a champion’s style, like Karpov,” she continued, referring to Anatoly Karpov, a former world champion who is widely regarded as second in strength only to Mr. Kasparov. “Deep Blue made many moves that were based on understanding chess, on feeling the position. We all thought computers couldn’t do that.”•

Garry Kasparov, the world chess champion, opened the third game of his six-game match against the I.B.M. computer Deep Blue yesterday in peculiar fashion, by moving his queen’s pawn forward a single square. Huh?

“I think we have a new opening move,” said Yasser Seirawan, a grandmaster providing live commentary on the match. “What should we call it?”

Mike Valvo, an international master who is a commentator, said, “The computer has caused Garry to act in strange ways.”

Indeed it has. Mr. Kasparov, who swiftly became more conventional and subtle in his play, went on to a draw with Deep Blue, leaving the score of Man vs. Machine at 1 1/2 apiece. (A draw is worth half a point to each player.) But it is clear that after his loss in Game 2 on Sunday, in which he resigned after 45 moves, Mr. Kasparov does not yet have a handle on Deep Blue’s predilections, and that he is still struggling to elicit them.•

In brisk and brutal fashion, the I.B.M. computer Deep Blue unseated humanity, at least temporarily, as the finest chess playing entity on the planet yesterday, when Garry Kasparov, the world chess champion, resigned the sixth and final game of the match after just 19 moves, saying, “I lost my fighting spirit.”

The unexpectedly swift denouement to the bitterly fought contest came as a surprise, because until yesterday Mr. Kasparov had been able to summon the wherewithal to match Deep Blue gambit for gambit.

The manner of the conclusion overshadowed the debate over the meaning of the computer’s success. Grandmasters and computer experts alike went from praising the match as a great experiment, invaluable to both science and chess (if a temporary blow to the collective ego of the human race) to smacking their foreheads in amazement at the champion’s abrupt crumpling.

“It had the impact of a Greek tragedy,” said Monty Newborn, chairman of the chess committee for the Association for Computing, which was responsible for officiating the match.

It was the second victory of the match for the computer — there were three draws — making the final score 3 1/2 to 2 1/2, the first time any chess champion has been beaten by a machine in a traditional match. Mr. Kasparov, 34, retains his title, which he has held since 1985, but the loss was nonetheless unprecedented in his career; he has never before lost a multigame match against an individual opponent.

Afterward, he was both bitter at what he perceived to be unfair advantages enjoyed by the computer and, in his word, ashamed of his poor performance yesterday.

“I was not in the mood of playing at all,” he said, adding that after Game 5 on Saturday, he had become so dispirited that he felt the match was already over. Asked why, he said: “I’m a human being. When I see something that is well beyond my understanding, I’m afraid.”•


Long before John Lilly used Apple IIs to attempt to speak to dolphins, the LINC, the first modern personal computer, was his tool of choice in trying to coax conversation from the marine mammals. That was in the 1960s, the decade in which physicist Wesley A. Clark, realizing that microchips would progressively get much smaller and cheaper, led a team that built the not-quite-yet-portable PC, which ran counter to the popular idea of computers as shared instruments. It retailed at $43,000. 

Clark just died at 88. From John Markoff’s NYT obituary of the scientist: 

He achieved his breakthroughs working with a small group of scientists and engineers at the Lincoln Laboratory of the Massachusetts Institute of Technology in the late 1950s and early ’60s. Early on they had the insight that the cost of computing would fall inexorably and lead to computers that were then unimaginable.

Severo Ornstein, who as a young engineer also worked at Lincoln in the 1960s, recalled Mr. Clark as one of the first to have a clear understanding of the consequences of the falling cost and shrinking size of computers.

“Wes saw the future 15 years before anyone else,” he said.

Mr. Clark also had the insight as a young researcher that the giant metal cabinets that held the computers of the late 1950s and early ’60s would one day vanish as microelectronics technologies evolved and circuit sizes shrank.

Each LINC had a tiny screen and keyboard and comprised four metal modules. Together they were about as big as two television sets, set side by side and tilted back slightly. The machine, a 12-bit computer, included a one-half megahertz processor. (By contrast, an iPhone 6s is thousands of times faster and has 16 million times as much memory.)

A LINC sold for about $43,000 — a bargain at the time — and Digital Equipment, the first minicomputer company, ultimately built them commercially, producing 50 of the original design.

The influence of the LINC was far-reaching. For example, as a Stanford undergraduate, Larry Tesler, who would go on to become an early advocate of personal computing and who helped design the Lisa and Macintosh at Apple Computer, programmed a LINC in the laboratory of the molecular biologist Joshua Lederberg.•
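
Markoff’s “thousands of times faster” holds up on clock speed alone. A crude sketch (the iPhone 6s figure, an A9 chip at roughly 1.85 GHz per core, is my assumption, not the obituary’s):

```python
# Clock-speed ratio only -- a blunt proxy that ignores word size,
# architecture, and everything else separating 1962 from 2016.
linc_hz = 0.5e6       # LINC's one-half megahertz processor (from the excerpt)
iphone6s_hz = 1.85e9  # Apple A9, per core (my outside figure)

print(f"{iphone6s_hz / linc_hz:,.0f}x faster")  # 3,700x -- "thousands of times"
```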


“It is not yet possible to create a computerized voice that is indistinguishable from a human one for anything longer than short phrases,” writes John Markoff in his latest probing NYT article about technology, this one about “conversational agents.” 

The dream of giving voices like ours to contraptions was realized with varying degrees of success by 19th-century inventors like Joseph Faber and Thomas Edison, who awed their audiences, but the modern attempt is to replace marvel with mundanity, a post-Siri scenario in which the interaction no longer seems novel.

Machines that can listen are the ones that cause the most paranoia, but talking ones that could pass for human would pose a challenge as well. As Markoff notes in the above quote, a truly conversational computer isn’t currently achievable, but it will be eventually. At first we might give such devices a verbal tell to inform people of their non-carbon chat partner, but won’t we ultimately make the conversation seamless? 

In his piece, Markoff surveys the many people trying to make that seamlessness a reality. The opening:

When computers speak, how human should they sound?

This was a question that a team of six IBM linguists, engineers and marketers faced in 2009, when they began designing a function that turned text into speech for Watson, the company’s “Jeopardy!”-playing artificial intelligence program.

Eighteen months later, a carefully crafted voice — sounding not quite human but also not quite like HAL 9000 from the movie 2001: A Space Odyssey — expressed Watson’s synthetic character in a highly publicized match in which the program defeated two of the best human Jeopardy! players.

The challenge of creating a computer “personality” is now one that a growing number of software designers are grappling with as computers become portable and users with busy hands and eyes increasingly use voice interaction.

Machines are listening, understanding and speaking, and not just computers and smartphones. Voices have been added to a wide range of everyday objects like cars and toys, as well as household information “appliances” like the home-companion robots Pepper and Jibo, and Alexa, the voice of the Amazon Echo speaker device.

A new design science is emerging in the pursuit of building what are called “conversational agents,” software programs that understand natural language and speech and can respond to human voice commands.

However, the creation of such systems, led by researchers in a field known as human-computer interaction design, is still as much an art as it is a science.

It is not yet possible to create a computerized voice that is indistinguishable from a human one for anything longer than short phrases that might be used for weather forecasts or communicating driving directions.

Most software designers acknowledge that they are still faced with crossing the “uncanny valley,” in which voices that are almost human-sounding are actually disturbing or jarring.•


Overpromising is cruel.

In technology and science, you see it especially in the area of life extension. The fountain of youth has been with us ever since people had time to stop and ponder, but the irrational rhetoric has grown louder since gerontologist Aubrey de Grey said in 2004 that “the first person to live to 1,000 might be 60 already.” What nonsense. I’m all in favor of working toward longer and healthier lives, but there’s no need to overheat the subject.

When it comes to a Singularitarian paradise of conscious machines, Ray Kurzweil’s pronouncements have ranged further and further into science fiction, promising superintelligence in a couple of decades. That’s not happening. Again, working toward such goals is worthwhile, but thinking that tomorrow is today is a sure way to disappoint.

Weak AI (non-conscious machines capable of programmed tasks) is the immediate challenge, with robots primed to devour jobs long handled by humans. That doesn’t mean we endure mass technological unemployment, but it could mean that. In a Nature review of three recent books on the topic (titles by John Markoff, Martin Ford and David A. Mindell), Ken Goldberg takes a skeptical look at our machine overlords. An excerpt:

Rise of the Robots by software entrepreneur Martin Ford proclaims that AI and robots are about to eliminate most jobs, blue- and white-collar. A close reading reveals the evidence as extremely sketchy. Ford has swallowed the rhetoric of futurist Ray Kurzweil, and repeatedly asserts that we are on the brink of vastly accelerating advances based on Moore’s law, which posits that computing power increases exponentially with time. Yet some computer scientists rue this exponential fallacy, arguing that the success of integrated circuits has raised expectations of progress far beyond what historians of technology recognize as an inevitable flattening of the growth curve.

Nor do historical trends support the Luddite fallacy, which assumes that there is a fixed lump of work and that technology inexorably creates unemployment. Such reasoning fails to consider compensation effects that create new jobs, or myriad relevant factors such as globalization and the democratization of the workforce. Ford describes software systems that attempt to do the work of attorneys, project managers, journalists, computer programmers, inventors and musicians. But his evidence that these will soon be perfected and force massive lay-offs consists mostly of popular magazine articles and, in one case, a conversation with the marketing director of a start-up.•
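
The “inevitable flattening of the growth curve” Goldberg invokes is the difference between exponential and logistic growth: the two track each other early, then the logistic curve hits a ceiling. A toy illustration with arbitrary parameters:

```python
import math

# Exponential growth runs away forever; logistic growth looks
# exponential at first, then saturates at a carrying capacity K.
K, r = 1000.0, 0.5
for t in range(0, 25, 4):
    exponential = math.exp(r * t)
    logistic = K / (1 + (K - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:7.1f}")
# By t=24 the exponential is past 160,000 while the logistic curve
# has flattened just below its ceiling of 1,000.
```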


The Economist has a good if brief review of three recent titles about Artificial Intelligence and what it means for humans: John Markoff’s Machines of Loving Grace, Pedro Domingos’ The Master Algorithm and Jerry Kaplan’s Humans Need Not Apply.

I quote the opening of the piece below because I think it gets at an error in judgment some people make about technological progress, with regard to both Weak AI and Strong AI. There’s the idea that humans are in charge and can regulate machine progress, igniting and controlling it as we do fire. I don’t believe that’s ultimately so, even if it’s our goal.

Such decisions aren’t made in cool, sober ways inside a vacuum but in a messy world full of competition and differing priorities. If the United States decided to ban robots or gene editing but China used them and prospered from the use, we would have to also enter the race. It’s similar to how America was largely a non-militaristic country before WWII but since then has been armed to the teeth.

The only thing that halts technological progress is a lack of knowledge. Once attained, it will be used because that makes us feel clever and proud. And it gives us a sense of safety, even when it makes things more dangerous. That’s human nature as applied to Artificial Intelligence.

An excerpt:

ARTIFICIAL INTELLIGENCE (AI) is quietly everywhere, powering Google’s search engine, Amazon’s recommendations and Facebook’s facial recognition. It is how post offices decipher handwriting and banks read cheques. But several books in recent years have spewed fire and brimstone, claiming that algorithms are poised to obliterate white-collar knowledge-work in the 21st century, just as automation displaced blue-collar manufacturing work in the 20th. Some people go further, arguing that artificial intelligence threatens the human race. Elon Musk, an American entrepreneur, says that developing the technology is “summoning the demon.”

Now several new books serve as replies. In Machines of Loving Grace, John Markoff of the New York Times focuses on whether researchers should build true artificial intelligence that replaces people, or aim for “intelligence augmentation” (IA), in which the computers make people more effective. This tension has been there from the start. In the 1960s, at one bit of Stanford University John McCarthy, a pioneer of the field, was gunning for AI (which he had named in 1955), while across campus Douglas Engelbart, the inventor of the computer mouse, aimed at IA. Today, some Google engineers try to improve search engines so that people can find information better, while others develop self-driving cars to eliminate drivers altogether.•


Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to utilize computing power to augment human intelligence and those who hope to create actual intelligent machines that have no particular stake in the condition of carbon-based life. 

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

minsky: We’re going to make machines intelligent. We are going to make them conscious!

engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•


Very much looking forward to the forthcoming book Machines of Loving Grace, an attempt by the New York Times journalist John Markoff to make sense of our automated future. 

In an Edge.org interview, Markoff argues that Moore’s Law has flattened out, perhaps for now or maybe for the long run, a slowdown that isn’t being acknowledged by technologists. Markoff still believes we’re headed for a highly automated future, one he senses will be slower to develop than expected. Those greatly worried about technological unemployment, the writer argues, are alarmists: he thinks technology taking jobs will be a necessity, since the human population will likely be unable to keep pace with required production. Of course, he doesn’t have to be wrong by very much for great societal upheaval to occur and political solutions to be required.

From Markoff:

We’re at that stage, where our expectations have outrun the reality of the technology.

I’ve been thinking a lot about the current physical location of Silicon Valley. The Valley has moved. About a year ago, Richard Florida did a fascinating piece of analysis where he geo-located all the current venture capital investments. Once upon a time, the center of Silicon Valley was in Santa Clara. Now it’s moved fifty miles north, and the current center of Silicon Valley by current investment is at the foot of Potrero Hill in San Francisco. Living in San Francisco, you see that. Manufacturing, which is what Silicon Valley once was, has largely moved to Asia. Now it’s this marketing and design center. It’s a very different beast than it was.                                 

I’ve been thinking about Silicon Valley at a plateau, and maybe the end of the line. I just spent about three or four years reporting about robotics. I’ve been writing about it since 2004, even longer, when the first autonomous vehicle grand challenge happened. I watched the rapid acceleration in robotics. We’re at this point where over the last three or four years there’s been a growing debate in our society about the role of automation, largely forced by the falling cost of computing and sensors and the fact that there’s a new round of automation in society, particularly in American society. We’re now not only displacing blue-collar tasks, which has happened forever, but we’re replacing lawyers and doctors. We’re starting to nibble at the top of the pyramid.

I played a role in creating this new debate. The automation debate comes around in America at regular intervals. The last time it happened in America was during the 1960s and it ended prematurely because of the Vietnam War. There was this discussion and then the war swept away any discussion. Now it’s come back with a vengeance. I began writing articles about white-collar automation in 2010, 2011. 

There’s been a deluge of books such as The Rise of the Robots, The Second Machine Age, The Lights in the Tunnel, all saying that there will be no more jobs, that the automation is going to accelerate and by 2045 machines will be able to do everything that humans can do. I was at dinner with you a couple years ago and I was ranting about this to Danny Kahneman, the psychologist, particularly with respect to China, and making the argument that this new wave of manufacturing automation is coming to China. Kahneman said to me, “You just don’t get it.” And I said, “What?” And he said, “In China, the robots are going to come just in time.”

_____________________________

 

“All Watched Over
by Machines of Loving Grace”

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal brothers and sisters,
and all watched over
by machines of loving grace.


The DARPA Robotics Challenge this weekend was about as unimpressive as the 2004 Grand Challenge for driverless vehicles–the “Debacle in the Desert.” It seemed then like robocars were decades away from reality, but in the aftermath of that competition autonomous vehicles showed marked improvement in a stunningly short time. The requirements of unplugged robots are greater than those of a driverless car, but the money currently being poured into this research is also far more substantial.

What does the future hold? I’ll quote the technophobe Andre Gregory from the end of the long conversation that makes up most of My Dinner with Andre: “A baby holds your hands, and then suddenly, there’s this huge man lifting you off the ground, and then he’s gone. Where’s that son?”

From a report on the competition from John Markoff at the New York Times:

Despite clear progress since a trial event in Florida in 2013, the robots remain decades away from the science-fiction feats seen in movies like Ex Machina and Chappie.

Instead, the robots seemed more like an array of electronic and hydraulic contraptions that, in some cases, walked in a lumbering fashion on two or four legs and, in other cases, rolled on tracks or wheels. Some of the machines weighed more than 400 pounds. They were equipped with sensors and cameras to permit remote control.

On Friday, the first day of the Robotics Challenge, it took until 2:30 in the afternoon for the first robot to successfully complete the course, seven and a half hours after the competition began. Frequently, the machines would stand motionless for minutes at a time while they waited for wireless connections with their controllers to improve. Darpa degraded the wireless links on purpose to create the uneven communications that would simulate a crisis situation.

Reporters were once again left grasping for appropriate metaphors to describe the slow-motion calisthenics performed by the menagerie of battery-powered machines. Most agreed that “like watching grass grow” was no longer the best description, and Gill Pratt, the Darpa official in charge of the competition, suggested that it had risen to the level of “watching a golf match.”•


AI has not traditionally excelled at higher-order pattern recognition, able to recognize single objects but unable to decipher the meaning of actions or interactions. Such an advance would make driverless cars and other oh-so-close wonders a reality. Stanford and Google have just announced breakthroughs. From John Markoff at the New York Times:

“During the past 15 years, video cameras have been placed in a vast number of public and private spaces. In the future, the software operating the cameras will not only be able to identify particular humans via facial recognition, experts say, but also identify certain types of behavior, perhaps even automatically alerting authorities.

Two years ago Google researchers created image-recognition software and presented it with 10 million images taken from YouTube videos. Without human guidance, the program trained itself to recognize cats — a testament to the number of cat videos on YouTube.

Current artificial intelligence programs in new cars already can identify pedestrians and bicyclists from cameras positioned atop the windshield and can stop the car automatically if the driver does not take action to avoid a collision.

But ‘just single object recognition is not very beneficial,’ said Ali Farhadi, a computer scientist at the University of Washington who has published research on software that generates sentences from digital pictures. ‘We’ve focused on objects, and we’ve ignored verbs,’ he said, adding that these programs do not grasp what is going on in an image.

Both the Google and Stanford groups tackled the problem by refining software programs known as neural networks, inspired by our understanding of how the brain works. Neural networks can ‘train’ themselves to discover similarities and patterns in data, even when their human creators do not know the patterns exist.”
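
That last point, that such systems “train themselves to discover similarities and patterns in data” without being told what to look for, is easiest to see in a method far simpler than the neural networks Google and Stanford used. A toy k-means sketch, assuming NumPy:

```python
import numpy as np

# Unsupervised clustering: the algorithm is never told which points
# belong together; it discovers the two groups on its own.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)),   # hidden group around (0, 0)
                  rng.normal(3, 0.5, (50, 2))])  # hidden group around (3, 3)

centers = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(10):
    # Assign each point to its nearest center, then move each center
    # to the mean of the points assigned to it.
    labels = ((data[:, None, :] - centers) ** 2).sum(axis=2).argmin(axis=1)
    centers = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(centers.round(1))  # recovers centers near [0, 0] and [3, 3]
```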

___________________________

“Why not devote your powers to discerning patterns?”


Smartphones are embedded with improved technologies that will be useful in autonomous cars, which in turn will create technologies useful in other souped-up tools, as the algorithms popularized on the Internet escape through the screen. The objects grow smarter whether or not we do. From “The Rapid Advance of Artificial Intelligence,” by John Markoff in the New York Times:

“The enormous amount of data being generated by inexpensive sensors has been a significant factor in altering the center of gravity of the computing world, he said, making it possible to use centralized computers in data centers — referred to as the cloud — to take artificial intelligence technologies like machine-learning and spread computer intelligence far beyond desktop computers.

Apple was the most successful early innovator in popularizing what is today described as ubiquitous computing. The idea, first proposed by Mark Weiser, a computer scientist with Xerox, involves embedding powerful microprocessor chips in everyday objects.

Steve Jobs, during his second tenure at Apple, was quick to understand the implications of the falling cost of computer intelligence. Taking advantage of it, he first created a digital music player, the iPod, and then transformed mobile communication with the iPhone. Now such innovation is rapidly accelerating into all consumer products.

‘The most important new computer maker in Silicon Valley isn’t a computer maker at all, it’s Tesla,’ the electric car manufacturer, said Paul Saffo, a managing director at Discern Analytics, a research firm based in San Francisco. ‘The car has become a node in the network and a computer in its own right. It’s a primitive robot that wraps around you.’ “


Moving automated grading beyond multiple-choice ovals and No. 2 pencils, new software has been developed that is said to be capable of grading essays. This can’t be good. From John Markoff in the New York Times:

“Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the ‘send’ button when you are done and receiving a grade back instantly, your essay scored by a software program.

And then, instead of being done with that exam, imagine that the system would immediately let you rewrite the test to try to improve your grade.

EdX, the nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology to offer courses on the Internet, has just introduced such a system and will make its automated software available free on the Web to any institution that wants to use it. The software uses artificial intelligence to grade student essays and short written answers, freeing professors for other tasks.

The new service will bring the educational consortium into a growing conflict over the role of automation in education.”


British researchers will spend the next decade figuring out if Charles Babbage is truly the father of the programmable computer. From a John Markoff article in the New York Times:

“Researchers in Britain are about to embark on a 10-year, multimillion-dollar project to build a computer — but their goal is neither dazzling analytical power nor lightning speed.

Indeed, if they succeed, their machine will have only a tiny fraction of the computing power of today’s microprocessors. It will rely not on software and silicon but on metal gears and a primitive version of the quaint old I.B.M. punch card.

What it may do, though, is answer a question that has tantalized historians for decades: Did an eccentric mathematician named Charles Babbage conceive of the first programmable computer in the 1830s, a hundred years before the idea was put forth in its modern form by Alan Turing?”


The heft of Google’s wealth and influence is squarely behind the proliferation of self-driving cars, a concept which has been around since the 1950s and may be coming to Nevada roads in the near future. John Markoff reports in the New York Times:

“Google, a pioneer of self-driving cars, is quietly lobbying for legislation that would make Nevada the first state where they could be legally operated on public roads.

And yes, the proposed legislation would include an exemption from the ban on distracted driving to allow occupants to send text messages while sitting behind the wheel.

The two bills, which have received little attention outside Nevada’s Capitol, are being introduced less than a year after the giant search engine company acknowledged that it was developing cars that could be safely driven without human intervention.

Last year, in response to a reporter’s query about its then-secret research and development program, Google said it had test-driven robotic hybrid vehicles more than 140,000 miles on California roads — including Highway 1 between Los Angeles and San Francisco.

More than 1,000 miles had been driven entirely autonomously at that point; one of the company’s engineers was testing some of the car’s autonomous features on his 50-mile commute from Berkeley to Google’s headquarters in Mountain View.”
