Tom Simonite


Was looking at the wonderful The Browser earlier today, and the quote of the day was from William Carlos Williams: “That which is possible is inevitable.” So true, especially when that sentiment is applied to technology. That doesn’t mean all hope is lost and we should just idly allow the creeping — and sometimes leaping — advances of tech to roll over us, but it does speak to the competition among corporations and states that often moves forward agendas for reasons that have nothing to do with common sense or public good. 

It’s doubtful we’ll come to some global consensus on inviolable rules governing genetic modification of life forms or autonomous weapons systems. Of the two, there’s more hope for the latter than the former, considering the costs involved, though neither seems particularly promising. Soon enough, reworking the genome won’t take a great deal of resources, with terrorist organizations as well as educational institutions in the game. Eventually, even startups in garages and “lone gunmen” will be able to create and destroy in this manner. The field will, in time, be decentralized.

Killer robots, conversely, aren’t going to be fast, cheap and out of control for the foreseeable future, though that doesn’t mean they won’t be developed. In fact, it’s plausible they will be, even if the barrier to entry is much higher. There are currently reasons for America, China, Russia and other players to shy away from these weapons that guide themselves, but all it will take is for one major competitor to blink for everyone to rush into the future. And all sides have to keep gradually moving toward such capacity in the meantime in order to respond rapidly should a competing nation jump across the divide. Ultimately, everyone will probably blink.

“Once this Pandora’s box is opened, it will be hard to close,” advised an open letter from leading AI and robotics experts to the UN, encouraging the intergovernmental body to urgently address the matter of autonomous weapons. That’s certainly accurate, though the box opening seems more likely than a total ban succeeding.

· · ·

From “Sorry, Banning ‘Killer Robots’ Just Isn’t Practical,” a smart Wired piece by Tom Simonite, which speaks to how the nebulous definition of “autonomous weapons” will aid in their development:

LATE SUNDAY, 116 entrepreneurs, including Elon Musk, released a letter to the United Nations warning of the dangerous “Pandora’s Box” presented by weapons that make their own decisions about when to kill. Publications including The Guardian and The Washington Post ran headlines saying Musk and his cosigners had called for a “ban” on “killer robots.”

Those headlines were misleading. The letter doesn’t explicitly call for a ban, although one of the organizers has suggested it does. Rather, it offers technical advice to a UN committee on autonomous weapons formed in December. The group’s warning that autonomous machines “can be weapons of terror” makes sense. But trying to ban them outright is probably a waste of time.

That’s not because it’s impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention that bans chemical weapons, for example. An international agreement blocking use of laser weapons intended to cause permanent blindness is holding up nicely.

Weapons systems that make their own decisions are a very different, and much broader, category. The line between weapons controlled by humans and those that fire autonomously is blurry, and many nations—including the US—have begun the process of crossing it. Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find giving them more independence—including to kill—irresistible.•


I’m not entirely convinced Elon Musk doesn’t have more in common with Donald Trump politically than we know. I’m not saying he’s a raging Libertarian monster like his pal Peter Thiel, but it’s not likely he’s the lovable billionaire his Iron Man cameos would have us believe.

Now that his harebrained attempt to “stage manage” the orange supremacist is happily over, the entrepreneur has fully returned to his normal chores, which are, of course, abnormal. There are two different Musks at work.

Good Elon creates gigafactories and gives people the opportunity to power their homes with solar. As these tools spread, through his efforts and those of his competitors, the Silicon Valley magnate will have made a major contribution to potentially saving our species from the existential threat of climate change. 

Bad Elon is a sort of lowercase Nikola Tesla, whose name he borrowed, of course, for his EV company. And it’s the worst of the Serbian-American inventor that he emulates: grandiose, egotistical, desperate to awe with brilliance even when the logic doesn’t quite cohere. Like Tesla’s final patented invention, the Flivver Plane, which would never have flown even if it had been built, Musk often concentrates his attention where it’s least needed, on things that won’t happen.

Much of this baffling overconfidence can be seen in his near-term plan to become a Martian. Some of it is also on view in his deathly fear of killer robots, a stance he developed after going on a Bostrom bender. Intelligent machines are a very-long-term risk for our species (if we’re not first done in by our own dimness or perhaps a solar flare), but they shouldn’t be a primary concern to anyone presently. Not when children even in a wealthy country like America still drink lead-contaminated water, relatively dumb AI can cause employment within industries to collapse and new technological tools are exacerbating wealth inequality.

In a Wired piece, Tom Simonite contextualizes Musk’s foolhardy sci-fi AI fears as well as anyone has. The opening:

IMAGINE YOU HAD a chance to tell 50 of the most powerful politicians in America what urgent problem you think needs prompt government action. Elon Musk had that chance this past weekend at the National Governors Association Summer Meeting in Rhode Island. He chose to recommend the gubernatorial assembly get serious about preventing artificial intelligence from wiping out humanity.

“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk said. He asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine—just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down. “When the regulator’s convinced it’s safe to proceed then you can go, but otherwise slow down,” he said.

Musk’s remarks made for an enlivening few minutes on a day otherwise concerned with more quotidian matters such as healthcare and education. But Musk’s call to action was something of a missed opportunity. People who spend more time working on artificial intelligence than the car, space, and solar entrepreneur say his eschatological scenarios risk distracting from more pressing concerns as artificial intelligence technology percolates into every industry.

Pedro Domingos, a professor who works on machine learning at the University of Washington, summed up his response to Musk’s talk on Twitter with a single word: Sigh. “Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent,” Domingos says. America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies, he says. Iyad Rahwan, who works on matters of AI and society at MIT, agrees. Rather than worrying about trading bots eventually becoming smart enough to start wars as an investment strategy, we should consider how humans might today use dumb bots to spread misinformation online, he says.

Rahwan doesn’t deny that Musk’s nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.”•


Developing visual recognition in machines is helpful in performing visual tasks, of course, but this ability has the potential to advance Artificial Intelligence in much broader and more significant ways, providing AI with a context from which to more accurately “comprehend” the world. (I’m not even sure if the quotation marks in the previous sentence are necessary.)

In an interview conducted by Tom Simonite of Technology Review, Facebook’s Director of AI Research Yann LeCun explains that exposing machines to video will hopefully enable them to learn through observation as small children do. “That’s what would allow them to acquire common sense, in the end,” he says.

An excerpt:


Babies learn a lot about the world without explicit instruction, though.

Yann LeCun:

One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels. That’s what would allow them to acquire common sense, in the end. These are things that animals and babies learn in the first few months of life—you learn a ridiculously large amount about the world just by observation. There are a lot of ways that machines are currently fooled easily because they have very narrow knowledge of the world.


What progress is being made on getting software to learn by observation?

Yann LeCun:

We are very interested in the idea that a learning system should be able to predict the future. You show it a few frames of video and it tries to predict what’s going to happen next. If we can train a system to do this we think we’ll have developed techniques at the root of an unsupervised learning system. That is where, in my opinion, a lot of interesting things are likely to happen. The applications for this are not necessarily in vision—it’s a big part of our effort in making progress in AI.•
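The predictive-learning idea LeCun describes can be sketched in miniature. The toy example below is my own construction, not Facebook’s system: the “video” is just a single dot drifting across a strip of pixels, and a linear least-squares model stands in for the deep networks a real system would train on natural footage. The point is only to show the shape of the task, given pairs of (current frame, next frame), fit a model that predicts the next frame.

```python
import numpy as np

def make_video(length=8, frames=20):
    """Synthetic 'video': a dot moving one pixel per frame, wrapping around."""
    video = np.zeros((frames, length))
    for t in range(frames):
        video[t, t % length] = 1.0
    return video

video = make_video()
X, Y = video[:-1], video[1:]               # (current frame, next frame) pairs
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit a linear next-frame predictor

pred = X @ W                               # predicted next frames
print(np.allclose(pred, Y, atol=1e-6))     # the model recovers the dot's motion
```

Because the dot’s motion here is perfectly regular, the least-squares fit recovers it exactly; with real video, prediction error is what drives the unsupervised learning LeCun is after.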


Your new robot coworkers are darling–and so efficient! They’ll relieve you of so many responsibilities. And, eventually, maybe all of them. For now, factory robots will reduce jobs only somewhat, as we work alongside them. But eventually the band will be broken up, the machines going solo. Even the workers manufacturing the robots will soon enough be robots.

In a Technology Review article, Tom Simonite takes a smart look at this transitional phase, as robots begin to gradually commandeer the warehouse. He focuses on Fetch, a company that makes robots versatile enough to be introduced into preexisting factories. An excerpt:

Freight is designed to help shelf pickers, who walk around warehouses pulling items off shelves to do things like fulfilling online shopping orders. As workers walk around gathering items from shelves, they can toss items into the crate carried by the robot. When an order is complete, a tap on a smartphone commands the robot to scoot its load off to its next destination.

Wise says that robot colleagues like these could make work easier for shelf pickers, who walk as much as 15 miles a day in some large warehouses. Turnover in such jobs is high, and warehouse operators struggle to fill positions, she says. “We can reduce that burden on people and have them focus on the things that humans are good at, like taking things off shelves,” says Wise.

However, Wise’s company is also working on a second robot designed to be good at that, too. It has a long, jointed arm with a gripper, is mounted on top of a wheeled base, and has a moving “head” with a depth camera similar to that found in the Kinect games controller. This robot, named Fetch, is intended to rove around a particular area of shelving, taking items down and dropping them into a crate carried by a Freight robot.


A moonshot launched from an outhouse is a pretty apt description of the cratered Hewlett-Packard’s unlikely attempt to reimagine the computer. A semi-secret project called “the Machine” may be the company’s best shot–albeit, a long shot–to recreate itself and our most used tools all at once, increasing memory manifold with the aid of a fundamentally new operating system. From Tom Simonite at MIT Technology Review:

In the midst of this potentially existential crisis, HP Enterprise is working on a risky research project in hopes of driving a remarkable comeback. Nearly three-quarters of the people in HP’s research division are now dedicated to a single project: a powerful new kind of computer known as “the Machine.” It would fundamentally redesign the way computers function, making them simpler and more powerful. If it works, the project could dramatically upgrade everything from servers to smartphones—and save HP itself.

“People are going to be able to solve problems they can’t solve today,” says Martin Fink, HP’s chief technology officer and the instigator of the project. The Machine would give companies the power to tackle data sets many times larger and more complex than those they can handle today, he says, and perform existing analyses perhaps hundreds of times faster. That could lead to leaps forward in all kinds of areas where analyzing information is important, such as genomic medicine, where faster gene-sequencing machines are producing a glut of new data. The Machine will require far less electricity than existing computers, says Fink, making it possible to slash the large energy bills run up by the warehouses of computers behind Internet services. HP’s new model for computing is also intended to apply to smaller gadgets, letting laptops and phones last much longer on a single charge.

It would be surprising for any company to reinvent the basic design of computers, but especially for HP to do it. It cut research jobs as part of downsizing efforts a decade ago and spends much less on research and development than its competitors: $3.4 billion in 2014, 3 percent of revenue. In comparison, IBM spent $5.4 billion—6 percent of revenue—and has a much longer tradition of the kind of basic research in physics and computer science that creating the new type of computer will require. For Fink’s Machine dream to be fully realized, HP’s engineers need to create systems of lasers that fit inside fingertip-size computer chips, invent a new kind of operating system, and perfect an electronic device for storing data that has never before been used in computers.

Pulling it off would be a virtuoso feat of both computer and corporate engineering.•


Google believes its driverless cars are already safer than human drivers. Even if that’s currently too ambitious a statement, it’s really only a matter of time. From Tom Simonite at the Technology Review:

“Data gathered from Google’s self-driving Prius and Lexus cars shows that they are safer and smoother when steering themselves than when a human takes the wheel, according to the leader of Google’s autonomous-car project.

Chris Urmson made those claims today at a robotics conference in Santa Clara, California. He presented results from two studies of data from the hundreds of thousands of miles Google’s vehicles have logged on public roads in California and Nevada.

One of those analyses showed that when a human was behind the wheel, Google’s cars accelerated and braked significantly more sharply than they did when piloting themselves. Another showed that the cars’ software was much better at maintaining a safe distance from the vehicle ahead than the human drivers were.

‘We’re spending less time in near-collision states,’ said Urmson. ‘Our car is driving more smoothly and more safely than our trained professional drivers.'”


The opening of Tom Simonite’s new Technology Review piece, “The Decline of Wikipedia,” which asserts that the remarkable crowdsourced encyclopedia, one I don’t go a day without consulting, is threatened for myriad reasons, none more than entrenched bureaucracy:

“The sixth most widely used website in the world is not run anything like the others in the top 10. It is not operated by a sophisticated corporation but by a leaderless collection of volunteers who generally work under pseudonyms and habitually bicker with each other. It rarely tries new things in the hope of luring visitors; in fact, it has changed little in a decade. And yet every month 10 billion pages are viewed on the English version of Wikipedia alone. When a major news event takes place, such as the Boston Marathon bombings, complex, widely sourced entries spring up within hours and evolve by the minute. Because there is no other free information source like it, many online services rely on Wikipedia. Look something up on Google or ask Siri a question on your iPhone, and you’ll often get back tidbits of information pulled from the encyclopedia and delivered as straight-up facts.

Yet Wikipedia and its stated ambition to “compile the sum of all human knowledge” are in trouble. The volunteer workforce that built the project’s flagship, the English-language Wikipedia—and must defend it against vandalism, hoaxes, and manipulation—has shrunk by more than a third since 2007 and is still shrinking. Those participants left seem incapable of fixing the flaws that keep Wikipedia from becoming a high-quality encyclopedia by any standard, including the project’s own. Among the significant problems that aren’t getting resolved is the site’s skewed coverage: its entries on Pokemon and female porn stars are comprehensive, but its pages on female novelists or places in sub-Saharan Africa are sketchy. Authoritative entries remain elusive. Of the 1,000 articles that the project’s own volunteers have tagged as forming the core of a good encyclopedia, most don’t earn even Wikipedia’s own middle-­ranking quality scores.

The main source of those problems is not mysterious. The loose collective running the site today, estimated to be 90 percent male, operates a crushing bureaucracy with an often abrasive atmosphere that deters newcomers who might increase participation in Wikipedia and broaden its coverage.”


From “What Facebook Knows,” Tom Simonite’s interesting MIT Technology Review article about the myriad unexpected ways that the voluminous data Zuckerberg and friends have collected allows the social network to do social science:

“One of [Cameron] Marlow’s researchers has developed a way to calculate a country’s ‘gross national happiness’ from its Facebook activity by logging the occurrence of words and phrases that signal positive or negative emotion. Gross national happiness fluctuates in a way that suggests the measure is accurate: it jumps during holidays and dips when popular public figures die. After a major earthquake in Chile in February 2010, the country’s score plummeted and took many months to return to normal. That event seemed to make the country as a whole more sympathetic when Japan suffered its own big earthquake and subsequent tsunami in March 2011; while Chile’s gross national happiness dipped, the figure didn’t waver in any other countries tracked (Japan wasn’t among them). Adam Kramer, who created the index, says he intended it to show that Facebook’s data could provide cheap and accurate ways to track social trends—methods that could be useful to economists and other researchers.”
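The word-counting approach behind Kramer’s index can be illustrated with a small sketch. Everything here is invented for illustration, the word lists, the sample posts, and the function name; the real measure presumably draws on far larger emotion lexicons and an enormous volume of actual posts.

```python
# Hypothetical miniature of a "gross national happiness"-style index:
# count positive and negative words across posts and report the balance.
POSITIVE = {"happy", "great", "love", "wonderful", "holiday"}
NEGATIVE = {"sad", "terrible", "loss", "earthquake", "fear"}

def happiness_score(posts):
    """Positive minus negative word count, as a fraction of all words."""
    pos = neg = total = 0
    for post in posts:
        for raw in post.lower().split():
            word = raw.strip(".,!?;:")   # drop trailing punctuation
            total += 1
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    return (pos - neg) / total if total else 0.0

print(happiness_score(["What a wonderful holiday!"]))    # 0.5
print(happiness_score(["Terrible earthquake, so sad."])) # -0.75
```

Tracked day by day over a whole country’s posts, a score like this would jump during holidays and plunge after a disaster, which is exactly the fluctuation pattern the article describes.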
