John Thornhill


Terrorism developed as a way for weak factions to disrupt war, to hack what had been a very centralized activity since the formation of discrete states. The thing is, such ad hoc chicanery hardly ever works in the long run: eventually the element of surprise is identified and neutralized.

Our new tech tools, however, have begun to level the playing field. Sure, ISIS still can’t hack its way to heaven in 2017, but a fully formed terror state like Russia managed, for a very reasonable sum, to successfully wage “memetic warfare” against America, a much wealthier and militarily superior nation, albeit with what would appear to be aid and comfort from a cabal of traitors. As the world grows ever more computerized, perhaps eventually we’ll all have an army to do our bidding and every target, real and virtual, will be made vulnerable.

From John Thornhill of the Financial Times:

Most defense spending in NATO countries still goes on crazily expensive metal boxes that you can drive, steer, or fly. But, as in so many other areas of our digital world, military capability is rapidly shifting from the visible to the invisible, from hardware to software, from atoms to bits. And that shift is drastically changing the equation when it comes to the costs, possibilities and vulnerabilities of deploying force. Compare the expense of a B-2 bomber with the negligible costs of a terrorist hijacker or a state-sponsored hacker, capable of causing periodic havoc to another country’s banks or transport infrastructure — or even democratic elections.

The US has partly recognized this changing reality and in 2014 outlined a third offset strategy, declaring that it must retain supremacy in next-generation technologies, such as robotics and artificial intelligence. The only other country that might rival the US in these fields is China, which has been pouring money into such technologies too.

But the third offset strategy only counters part of the threat in the age of asymmetrical conflict. In the virtual world, there are few rules of the game, little way of assessing your opponent’s intentions and capabilities, and no real clues about whether you are winning or losing. Such murkiness is perfect for those keen to subvert the west’s military strength.

China and Russia appear to understand this new world disorder far better than others — and are adept at turning the west’s own vulnerabilities against it. Chinese strategists were among the first to map out this new terrain. In 1999 two officers in the People’s Liberation Army wrote Unrestricted Warfare in which they argued that the three indispensable “hardware elements of any war” — namely soldiers, weapons and a battlefield — had changed beyond recognition. Soldiers included hackers, financiers and terrorists. Their weapons could range from civilian aeroplanes to net browsers to computer viruses, while the battlefield would be “everywhere.”

Russian strategic thinkers have also widened their conception of force.•


In his NYRB review of Daniel Dennett’s From Bacteria to Bach and Back: The Evolution of Minds, Thomas Nagel is largely laudatory even though he believes his fellow philosopher ultimately guilty of “maintaining a thesis at all costs,” writing that:

Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them.

Nagel draws an analogy between Dennett’s ideas and the Behaviorism of B.F. Skinner and other mid-century psychologists, a theory that never was truly satisfactory in explaining the human mind. Dennett’s belief that we’re more machine-like than we want to believe is probably accurate, though his assertion that all consciousness is illusory–if that’s what he’s arguing–seems off.

Dennett’s life work on consciousness and evolution has certainly crested at the right moment, as we’re beginning to wonder in earnest about AI and non-human consciousness, which seems possible at some point if not on the immediate horizon. In a Financial Times interview conducted by John Thornhill, Dennett speaks to the nature and future of robotics.

An excerpt:

AI experts tend to draw a sharp distinction between machine intelligence and human consciousness. Dennett is not so sure. Where many worry that robots are becoming too human, he argues humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sorta robots”.

“I’ve been arguing for years that, yes, in principle it’s possible for human consciousness to be realised in a machine. After all, that’s what we are,” he says. “We’re robots made of robots made of robots. We’re incredibly complex, trillions of moving parts. But they’re all non-miraculous robotic parts.” …

Dennett has long been a follower of the latest research in AI. The final chapter of his book focuses on the subject. There has been much talk recently about the dangers posed by the emergence of a superintelligence, when a computer might one day outstrip human intelligence and assume agency. Although Dennett accepts that such a superintelligence is logically possible, he argues that it is a “pernicious fantasy” that is distracting us from far more pressing technological problems. In particular, he worries about our “deeply embedded and generous” tendency to attribute far more understanding to intelligent systems than they possess. Giving digital assistants names and cutesy personas worsens the confusion.

“All we’re going to see in our own lifetimes are intelligent tools, not colleagues. Don’t think of them as colleagues, don’t try to make them colleagues and, above all, don’t kid yourself that they’re colleagues,” he says.

Dennett adds that if he could lay down the law he would insist that the users of such AI systems were licensed and bonded, forcing them to assume liability for their actions. Insurance companies would then ensure that manufacturers divulged all of their products’ known weaknesses, just as pharmaceutical companies reel off all their drugs’ suspected side-effects. “We want to ensure that anything we build is going to be a systemological wonderbox, not an agency. It’s not responsible. You can unplug it any time you want. And we should keep it that way,” he says.•


American schoolchildren are taught that Dutch settlers purchased Manhattan island for roughly $24 in costume jewelry. That isn’t exactly so, but even if it were, the Native people would have struck a better bargain than Internet Age denizens have, as we’ve traded content and privacy for a piffling amount of flattery, convenience and connectivity.

Data Capitalism has commodified us in myriad ways, and soon with the Internet of Things, with Alexa listening and toothbrushes and refrigerators “smartened up,” the process will be ambient, almost undetectable. “We are already becoming tiny chips inside a giant system that nobody really understands,” Yuval Noah Harari wrote last year, and we’ve only just begun the process. This is prelude.

It’s possible that, as we grow more aware of what’s happening, we could turn away from this Faustian bargain, as John Thornhill suggests in a Financial Times column, but that would take wisdom and collective will, and it’s not clear we’re in possession of those things.

An excerpt about the underlying importance of “smart” products:

The primary effect of these consumer tech products seems limited — but we will need to pay increasing attention to the secondary consequences of these connected devices. They are just the most visible manifestation of a fundamental transformation that is likely to shape our societies far more than Brexit, Donald Trump or squabbles over the South China Sea. It concerns who collects, owns and uses data.

The subject of data is so antiseptic that it seldom generates excitement. To make it sound sexy, some have described data as the “new oil,” fuelling our digital economies. In reality, it is likely to prove far more significant than that. Data are increasingly determining economic value, reshaping the practice of power and intruding into the innermost areas of our lives.

Some commentators have suggested that this transformation is so profound that we are moving from an era of financial capitalism into one of data capitalism. The Israeli historian Yuval Noah Harari even argues that Dataism, as he calls it, can be compared with the birth of a religion, given the claims of its most fervent disciples to provide universal solutions. 

The speed and scale at which this data revolution is unfolding is certainly striking.•


Some argue, as John Thornhill does in a new Financial Times column, that technology may not be the main impediment to the proliferation of driverless cars. I doubt that’s true. If you could magically make available today relatively safe and highly functioning autonomous vehicles, ones that operated on a level superior to humans, then hearts, minds and legislation would soon favor the transition. I do think driving as recreation and sport would continue, but much of commerce and transport would shift to our robot friends.

Earlier in the development of driverless, I wondered if Americans would hand over the wheel any sooner than they’d turn in their guns, but I’ve since been convinced we (largely) will. We may have a macro fear of robots, but we hand over control to them with shocking alacrity. A shift to driverless wouldn’t be much different.

An excerpt from Thornhill in which he lists the main challenges, technological and otherwise, facing the sector:

First, there is the instinctive human resistance to handing over control to a robot, especially given fears of cyber-hacking. Second, for many drivers cars are an extension of their identity, a mechanical symbol of independence, control and freedom. They will not abandon them lightly.

Third, robots will always be held to far higher safety standards than humans. They will inevitably cause accidents. They will also have to be programmed to make a calculation that could kill their passengers or bystanders to minimise overall loss of life. This will create a fascinating philosophical sub-school of algorithmic morality. “Many of us are afraid that one reckless act will cause an accident that causes a backlash and shuts down the industry for a decade,” says the Silicon Valley engineer. “That would be tragic if you could have saved tens of thousands of lives a year.”

Fourth, the deployment of autonomous vehicles could destroy millions of jobs. Their rapid introduction is certain to provoke resistance. There are 3.5m professional lorry drivers in the US.

Fifth, the insurance industry and legal community have to wrap their heads around some tricky liability issues. In what circumstances is the owner, car manufacturer or software developer responsible for damage?•


I’m always surprised economies, for all their failings, work as well as they do. Similarly, I’m continually shocked more people don’t commit murder. I suppose I’m a pessimist.

Life on Earth is complicated, and it likewise will be out there when we start traveling regularly in space, attempting to set up colonies and mine asteroids. There’ll be crushing mishaps and perhaps direct democracy and trillionaires. What a brave new world that will be.

In the Financial Times, John Thornhill writes of far-flung finances, thinking about the need for regulation when we fan out among the stars. He’s not hopeful we’ll do better on the moon or Mars. An excerpt:

To stimulate fresh thinking, Nasa challenged economists, including the Nobel Prize-winning Eric Maskin and Mariana Mazzucato, to examine the economic development of low earth orbit, or “commercial space”. Their suggestions were published this month.

The critical question is how the public sector best interacts with the private sector. In 2011, Nasa set up the Center for the Advancement of Science in Space to encourage public institutions and commercial enterprises to use the ISS as a platform for innovation. The economists have several good ideas for this. Comprehensive databases could be created to record space research. Smarter insurance could help entice thinly capitalised start-up companies. Biotech firms could be incentivised to exploit a microgravity environment.

But based on most of the contributions to Nasa, it looks as if the space economy will end up pretty much like the one on earth, where the cash-strapped public sector remains in thrall to the private sector. The worry is that the infrastructure costs will be socialised while the profits are privatised.

That would be a shame.•


Oxford philosopher Nick Bostrom believes “superintelligence”–machines dwarfing our intellect–is the leading existential threat of our era to humans. He’s either wrong and not alarmed enough by, say, climate change, or correct and warning us of the biggest peril we’ll ever face. Most likely, such a scenario will be a real challenge in the long run, though it’s probably not the most pressing one at the moment.

In John Thornhill’s Financial Times article about Bostrom, the writer pays some mind to those pushing back at what they feel is needless alarmism attending the academic’s work. An excerpt:

Some AI experts have accused Bostrom of alarmism, suggesting that we remain several breakthroughs short of ever making a machine that “thinks”, let alone surpasses human intelligence. A sceptical fellow academic at Oxford, who has worked with Bostrom but doesn’t want to be publicly critical of his work, says: “If I were ranking the existential threats facing us, then runaway ‘superintelligence’ would not even be in the top 10. It is a second half of the 21st century problem.”

But other leading scientists and tech entrepreneurs have echoed Bostrom’s concerns. Britain’s most famous scientist, Stephen Hawking, whose synthetic voice is facilitated by a basic form of AI, has been among the most strident. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

Elon Musk, the billionaire entrepreneur behind Tesla Motors and an active investor in AI research, tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

Although Bostrom has a reputation as an AI doomster, he starts our discussion by emphasising the extraordinary promise of machine intelligence, in both the short and long term. “I’m very excited about AI and I think it would be a tragedy if this kind of superintelligence were never developed.” He says his main aim, both modest and messianic, is to help ensure that this epochal transition goes smoothly, given that humankind only has one chance to get it right.

“So much is at stake that it’s really worth doing everything we can to maximise the chances of a good outcome,” he says.•


MOOCs feel mostly like misses for the moment but so did the phonograph and automobile originally. Outsize ambitions still abound, with Sebastian Thrun of Udacity having recently said, “If I could double the world’s GDP, it would be very gratifying to me.” Yes, that would be nice. In John Thornhill’s smart Financial Times piece about online education, the former Google driverless guru has a more sober quote: “It is not clear that the existing universities are the right places to create education.”

Higher education’s endless layers of administration, insane sticker prices and pauper professors have left an opening for MOOCs, but this nouveau learning industry will likely be only as successful as its products are good. Thornhill opens his piece about EdTech with a story about French education innovator Xavier Niel:

With no teachers, timetables, or exams, Ecole 42 is a strange kind of educational institution, open 24 hours a day, seven days a week. Students shuffle into this tech-enabled school whenever they want and work as hard as they need.

Is this the future of education?

Xavier Niel, the French internet and telecoms billionaire who founded the coding school for young adults in Paris in 2013, certainly thinks so. He chose the school’s number for a reason. As fans of The Hitchhiker’s Guide to the Galaxy know, 42 is the answer to the ultimate question of life, the universe and everything.

So sure is Mr. Niel that he has found the answer that he has committed himself to funding Ecole 42 for the next decade and is spending a further $100m on a new school in San Francisco. There are several ironies in a French entrepreneur teaching Silicon Valley geeks how to code.

Mr. Niel argues that smartly designed online courses are more effective than traditional classroom teaching methods. Students learn best by pursuing online projects by themselves and by interacting with each other. Peer-to-peer lending may be going through a rough patch, but peer-to-peer learning may be on the rise. “We are preparing people to learn together,” he says.•


The American military has thus far refused to consider using autonomous weapons systems, which is good, but is it our choice alone to make? If one world power (or a smaller, rogue nation aspiring to be one) were to deploy such machines, how would others resist? The technology is trending toward faster, cheaper and more out of control, so it’s not difficult to imagine such a scenario. I think in the long run these systems are inevitable, but hopefully there will be much more time to prepare for what they’ll mean.

In a Financial Times column, John Thornhill writes of fears of LAWS (Lethal Autonomous Weapons Systems), which could fall into the wrong hands, such as those of warlords or tyrants. Of course, it’s easy to make the argument that all hands are the wrong ones. The opening:

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology?

As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.

In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”•


Here are 50 ungated pieces of wonderful journalism from 2015, alphabetized by author name, which made me consider something new or reconsider old beliefs or just delighted me. (Some selections are from gated publications that allow a number of free articles per month.) If your excellent work isn’t on the list, that’s more my fault than yours.

  • “Who Runs the Streets of New Orleans?” (David Amsden, The New York Times Magazine) As private and public sector missions increasingly overlap, here’s an engaging look at the privatization of some policing in the French Quarter.
  • “In the Beginning” (Ross Andersen, Aeon) A bold and epic essay about the elusive search for the origins of the universe.
  • Ask Me Anything (Anonymous, Reddit) A 92-year-old German woman who was born into Nazism (and participated in it) sadly absolves herself of all blame while answering questions about that horrible time.
  • “Rethinking Extinction” (Stewart Brand, Aeon) The Whole Earth Catalog founder thinks the chance of climate-change catastrophe is overrated, arguing we should utilize biotech to repopulate dwindling species.
  • “Anchorman: The Legend of Don Lemon” (Taffy Brodesser-Akner, GQ) A deeply entertaining look into the perplexing facehole of Jeff Zucker’s most gormless word-sayer and, by extension, the larger cable-news zeitgeist.
  • “How Social Media Is Ruining Politics” (Nicholas Carr, Politico) A lament that our shiny new tools have provided provocative trolls far more credibility than a centralized media ever allowed for.
  • “Clans of the Cathode” (Tom Carson, The Baffler) One of our best culture critics looks at the meaning of various American sitcom families through the medium’s history.
  • “The Black Family in the Age of Mass Incarceration” (Ta-Nehisi Coates, The Atlantic) The author examines the tragedy of the African-American community being turned into a penal colony, explaining the origins of the catastrophic policy failure.
  • “Perfect Genetic Knowledge” (Dawn Field, Aeon) The essayist thinks about a future in which we’ve achieved “perfect knowledge” of whole-planet genetics.
  • “A Strangely Funny Russian Genius” (Ian Frazier, The New York Review of Books) Daniil Kharms was a very funny writer, if you appreciate slapstick that ends in a body count.
  • “Tomorrow’s Advance Man” (Tad Friend, The New Yorker) Profile of Silicon Valley strongman Marc Andreessen and his milieu, an enchanted land in which adults dream of riding unicorns.
  • “Build-a-Brain” (Michael Graziano, Aeon) The neuroscientist’s ambitious thought experiment about machine intelligence is a piece I thought about continuously throughout the year.
  • Ask Me Anything (Stephen Hawking, Reddit) Among other things, the physicist warns that the real threat of superintelligent machines isn’t malice but relentless competence.
  • “Engineering Humans for War” (Annie Jacobsen, The Atlantic) War is inhuman, it’s been said, and the Pentagon wants to make it more so by employing bleeding-edge biology and technology to create super soldiers.
  • “The Wrong Head” (Mike Jay, London Review of Books) A look at insanity in 1840s France, which demonstrates that mental illness is often expressed in terms of the era in which it’s experienced.
  • “Death Is Optional” (Daniel Kahneman and Yuval Noah Harari, Edge) Two of my favorite big thinkers discuss the road ahead, a highly automated tomorrow in which medicine, even mortality, may not be an egalitarian affair.
  • “Where the Bodies Are Buried” (Patrick Radden Keefe, The New Yorker) Ceasefires, even treaties, don’t completely conclude wars, as evidenced by this haunting revisitation of the heartbreaking IRA era.
  • “Porntopia” (Molly Lambert, Grantland) The annual Adult Video News Awards in Las Vegas, the Oscars of oral, allows the writer to look into a funhouse-mirror reflection of America.
  • “The Robots Are Coming” (John Lanchester, London Review of Books) A remarkably lucid explanation of how quickly AI may remake our lives and labor in the coming decades.
  • “Last Girl in Larchmont” (Emily Nussbaum, The New Yorker) The great TV critic provides a postmortem of Joan Rivers and her singular (and sometimes disquieting) brand of feminism.
  • “President Obama & Marilynne Robinson: A Conversation, Part 1 & Part 2” (Barack Obama and Marilynne Robinson, New York Review of Books) Two monumental Americans discuss the state of the novel and the state of the union.
  • Ask Me Anything (Elizabeth Parrish, Reddit) The CEO of BioViva announces she’s patient zero for the company’s experimental age-reversing gene therapies. Strangest thing I read all year.
  • “Why Alien Life Will Be Robotic” (Sir Martin Rees, Nautilus) The astronomer argues that ETs in our inhospitable universe have likely already transitioned into conscious machines.
  • Ask Me Anything (Anders Sandberg, Reddit) Heady conversation about existential risks, Transhumanism, economics, space travel and future technologies conducted by the Oxford researcher.
  • “Alien Rights” (Lizzie Wade, Aeon) Manifest Destiny will, sooner or later, become a space odyssey. What ethics should govern exploration of the final frontier?
  • “Peeling Back the Layers of a Born Salesman’s Life” (Michael Wilson, The New York Times) The paper’s gifted crime writer pens a posthumous profile of a protean con man, a Zelig on the make who crossed paths with Abbie Hoffman, Otto Preminger and Annie Leibovitz, among others.
  • “The Pop Star and the Prophet” (Sam York, BBC Magazine) Philosopher Jacques Attali, who predicted, back in the ’70s, the downfall of the music business, tells the writer he now foresees similar turbulence for manufacturing.


Anyone who’s studied Silicon Valley for about five minutes knows that community’s shocking success is a hybrid of public-private investment, not just some free-market dream realized. Before the Y Combinator, there’s often an X factor, namely a government incubator like DARPA which births and nurtures ideas until they can crawl into the arms of loving venture capitalists. The Internet, of course, is the most obvious example. Even the transistor itself sprang from Bell Labs, which was essentially a government-sanctioned monopoly.

The economist Mariana Mazzucato hasn’t been shy about shooting down the excesses of the sector’s mythologizing, which boasts that brilliant upstarts with startups simply think (ideate!) their way into billions. Not quite. Not only do these lone creators lack the funds to develop an Internet or a transistor; Mazzucato doesn’t believe they have the time or the stomach for such risks, either. The market demands corporations opt for safer short-term gain or the shareholders will revolt. (Look at the blowback Google’s received for its moonshot investments, perhaps one reason it reorganized itself into Alphabet this week.) The companies aren’t, then, caged lions held back by regulation, but, as Mazzucato sees it, usually kittens unable to roar on their own.

From John Thornhill at the Financial Times:

Even Silicon Valley’s much-fabled tech entrepreneurs are not as smart as they like to think. Although Mazzucato lavishes praise on the entrepreneurial genius of the likes of Steve Jobs and Elon Musk, she says their brilliance tells only part of the story. Many of the key technologies used by Apple were first developed by public-sector agencies. Most of the key technologies that do the clever stuff inside your iPhone — including its geo-positioning system, the Siri voice-recognition service and multi-touch screen — were the offspring of state-funded research. “Government has invested in basic research, it has invested in applied research, it has invested in concrete companies [such as Tesla] all the way downstream, doing what venture capital should be doing if it was really playing the role it says it plays,” she says. “It is an incredibly active, mission-oriented role.”

One of the original engines of Silicon Valley’s creativity, she argues, was the Defense Advanced Research Projects Agency (Darpa), founded by President Dwight Eisenhower in 1958 following the alarm caused by the Soviet Union’s launch of the Sputnik rocket. Darpa, run by the US Department of Defense, has since pumped billions of dollars into cutting-edge research and was instrumental in developing the internet. According to Mazzucato, the publicly funded National Institutes of Health has played a similar role in nurturing the US pharmaceuticals industry. The Advanced Research Projects Agency-Energy (Arpa-E), set up by President Barack Obama and run by the US Department of Energy, is designed to stimulate green technology.

Mazzucato points to the critical role played by government agencies in other economies, such as China, Brazil, Germany, Denmark, and Israel, where the state is not just acting as a market regulator, it is actively creating and shaping markets. For instance, the Yozma programme in Israel provided the funding and expertise to create the so-called “start-up nation”. “My whole point to business is, ‘Hello, if you want to make profits in the future, you had better understand where the profits are coming from’. This is a pro-business story. This is not about socialism,” she says.

Her arguments stray into more radical territory as we discuss how the fruits of this technological innovation should be distributed. If you accept that the state is part responsible for the success of many private sector enterprises, she says, should it not share in more of their economic gains?•

 
