

Not that long ago, it was considered bold, even foolhardy, to predict the arrival of a more technological future 25 years down the road. That was the gambit the Los Angeles Times made in 1988 when it published “L.A. 2013,” a feature that imagined the next-level life of a family of four and their robots.

We may not yet be at the point when “whole epochs will pass, cultures rise and fall, between a telephone call and a reply”–I mean, who talks on the phone anymore?–but it no longer requires a quarter of a century for the new to shock us. In the spirit of our age, Alissa Walker of Curbed LA works from the new report “Urban Mobility in the Digital Age” in imagining the city’s remade transportation system in just five years. Regardless of what Elon Musk promises, I’ll take the over on autonomous cars arriving within a handful of years, but the technology will be transformative when it does materialize, and that will probably happen sooner rather than later.

The opening:

It’s 2021, and you’re making your way home from work. You jump off the Expo line (which now travels from Santa Monica to Downtown in 20 minutes flat), and your smartwatch presents you with options for the final two miles to your apartment. You could hop on Metro’s bike share, but you decide on a tiny, self-driving bus that’s waiting nearby. As you board, it calculates a custom route for you and the handful of other passengers, then drops you off at your doorstep in a matter of minutes. You walk through your building’s old parking lot—converted into a vegetable garden a few years ago—and walk inside in time to put your daughter to bed.

That’s the vision for Los Angeles painted in Urban Mobility in the Digital Age, a new report that provides a roadmap for the city’s transportation future. The report, which was shared with Curbed LA and has been posted online, addresses the city’s plan to combine self-driving vehicles (buses included) with on-demand sharing services to create a suite of smarter, more efficient transit options.

But it’s not just the way that we commute that will change, according to the report. Simply being smarter about how Angelenos move from one place to another brings additional benefits: alleviating vehicular congestion, potentially eliminating traffic deaths, and tackling climate change—where transportation is now the fastest-growing contributor to greenhouse gases. And it will also impact the way the city looks, namely by reclaiming the streets and parking lots devoted to the driving and storing of cars that sit motionless 95 percent of the time.

The report is groundbreaking because it makes LA the first U.S. city to specifically address policies around self-driving cars.•



We should always err on the side of whistleblowers like Edward Snowden because they’ve traditionally served an important function in our democracy, but that doesn’t mean the former NSA employee has changed America for the better–or much at all.

In the wake of 9/11, most in the country wanted to feel safe and were A-OK with the government taking liberties (figuratively and literally). Big Brother became the favorite sibling. The White House position and policy have shifted somewhat since Snowden went rogue, but I believe from here on in we’re locked in a cat-and-mouse game among government, corporations and citizens, with surveillance and leaks a permanent part of the landscape. The technology we have–and the even more powerful tools we’ll have in the future–almost demands such an arrangement. We’re all increasingly inside a machine now, one that moves much faster than legislation. That’s the new abnormal.

The Financial Times set up an interview with Snowden conducted by Alan Rusbridger, former editor-in-chief of the Guardian, the publication that broke the story. The subject is unsurprisingly much more hopeful about the impact of his actions than I am. An excerpt:

Alan Rusbridger:

It’s now, what, three years since the revelations?

Edward Snowden:

It’s been more than three years. June 2013.

Alan Rusbridger:

Tell me first how the world has changed since then. What’s changed as a result of what you did, from your perspective? Not from your personal life, but the story you revealed.

Edward Snowden:

The main thing is that our culture has changed, right? There are many different ways of approaching this. One is we look at the structural changes, we look at the policy changes, we look at the fact that the day the Guardian published the story, for example, the entire establishment leaped out of their chairs and basically said ‘This is untrue, it’s not right, there’s nothing to see here’. You know, ‘Nobody’s listening to your phone calls’, as the president said very early on. Do you remember? I think he sort of spoke with the voice of the establishment in all of these different countries here, saying, ‘I think we’ve drawn the right balance’.

Then you move on to later in the same year when the very first court verdicts began to come forward and they found that these programmes were ‘unlawful, likely unconstitutional’ — that’s a direct quote — and ‘Orwellian in their scope’ — again a quote. And this trend continued in many different courts. The government realising that these programmes could not be legally sustained and would have to be amended if they were to keep any of these powers at all. And to avoid a precedent that they would consider damaging, which is that the Supreme Court basically locks the power of mass surveillance away from them forever, they need a pretty substantial pivot, whereby January of 2014 the president of the US said that, well, of course you could never condone what I did. He believes that this has made us stronger as a nation and that he was going to be recommending changes to a law of Congress, which then later, again this is Congress, they don’t do anything quickly, they actually did amend the law.

Now, they would not likely have made these changes to law on their own without the involvement of the Courts. But these are all three branches of government in the US completely changing their position. In March of 2013, the Supreme Court flushed the case, right, saying that this is a state secret, we can’t talk about it and you can’t prove that you were spied on. Then suddenly when everyone can prove that they had been spied on, we see that the law changed. So that’s sort of the policy side of looking at that. And people can look at the substance there and say, ‘This is significant’. Even though it didn’t solve the problem, it’s a start and, more importantly, it empowers people, it empowers the public; it shows that, for the first time in four years, we can actually start to impose more oversight on intelligence agencies, on spies, rather than giving them a free pass to do whatever, simply because we’re scared, which is understandable but clearly not ethical.

Then there’s the other way of looking at it, which is in terms of public awareness.•



Right now the intrusion of Digital Age surveillance is still (mostly) external to our bodies, though computers have shrunk small enough to slide into our pockets. If past is prologue, the future progression would move this hardware inside ourselves, the way pacemakers for the heart were originally exterior machines until they could fit in our chests. Even if no such mechanisms were necessary and we manipulated health, longevity and appearance through biological means, the thornier ethical questions would probably remain.

A month ago, I published a post about Eve Herold’s new book, Beyond Human, when its opening was excerpted in Vice. Here’s a piece from “Transhumanism Is Inevitable,” Ronald Bailey’s review of the title in the libertarian magazine Reason:

Herold thinks these technological revolutions will be a good thing, but that doesn’t mean she’s a Pollyanna. Throughout the book, she worries about how becoming ever more dependent on our technologies will affect us. She foresees a world populated by robots at our beck and call for nearly any task. Social robots will monitor our health, clean our houses, entertain us, and satisfy our sexual desires. Isolated users of perfectly subservient robots could, Herold cautions, “lose important social skills such as unselfishness and the respect for the rights of others.” She further asks, “Will we still need each other when robots become our nannies, friends, servants, and lovers?”

There is also the question of how centralized institutions, as opposed to empowered individuals, might use the new tech. Behind a lot of the coming enhancements you’ll find the U.S. military, which funds research to protect its warriors and make them more effective at fighting. As Herold reports, the Defense Advanced Research Projects Agency (DARPA) is funding research on a drug that would keep people awake and alert for a week. DARPA is also behind work on brain implants designed to alter emotions. While that technology could help people struggling with psychological problems, it might also be used to eliminate fear or guilt in soldiers. Manipulating soldiers’ emotions so they will more heedlessly follow orders is ethically problematic, to say the least.

Similar issues haunt Herold’s discussion of the technologies, such as neuro-enhancing drugs and implants, that may help us build better brains. Throughout history, the ultimate realm of privacy has been our unspoken thoughts. The proliferation of brain sensors and implants might open up our thoughts to inspection by our physicians, friends, and family—and also government officials and corporate marketers.

Yet Herold effectively rebuts bioconservative arguments against the pursuit and adoption of human enhancement.•


A big problem with data analysis is that when it goes really deep, it’s not so easy to know why it’s working–or whether it’s working at all. Algorithms can be skewed, consciously or not, to favor some and keep us in separate silos, and the findings of artificial neural networks can be mysterious even to machine-learning professionals. We already base so much on silicon crunching numbers, and we’re set to bet the foundations of our society on these operations, so that’s a huge issue. Another one: making neural nets more transparent may blunt their efficacy. Two pieces on the topic follow.

The opening of Aaron M. Bornstein’s Nautilus essay “Is Artificial Intelligence Permanently Inscrutable?”:

Dmitry Malioutov can’t say much about what he built.

As a research scientist at IBM, Malioutov spends part of his time building machine learning systems that solve difficult problems faced by IBM’s corporate clients. One such program was meant for a large insurance corporation. It was a challenging assignment, requiring a sophisticated algorithm. When it came time to describe the results to his client, though, there was a wrinkle. “We couldn’t explain the model to them because they didn’t have the training in machine learning.”

In fact, it may not have helped even if they were machine learning experts. That’s because the model was an artificial neural network, a program that takes in a given type of data—in this case, the insurance company’s customer records—and finds patterns in them. These networks have been in practical use for over half a century, but lately they’ve seen a resurgence, powering breakthroughs in everything from speech recognition and language translation to Go-playing robots and self-driving cars.

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.•

From Rana Foroohar’s Time article about mathematician and author Cathy O’Neil:

O’Neil sees plenty of parallels between the usage of Big Data today and the predatory lending practices of the subprime crisis. In both cases, the effects are hard to track, even for insiders. Like the dark financial arts employed in the run up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. “The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,” says O’Neil.

The effects are just as pernicious. Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.

In higher education, the use of algorithmic models that rank colleges has led to an educational arms race where schools offer more and more merit rather than need based aid to students who’ll make their numbers (thus rankings) look better. At the same time, for-profit universities can troll for data on economically or socially vulnerable would be students and find their “pain points,” as a recruiting manual for one for-profit university, Vatterott, describes it, in any number of online questionnaires or surveys they may have unwittingly filled out. The schools can then use this info to funnel ads to welfare mothers, recently divorced and out of work people, those who’ve been incarcerated or even those who’ve suffered injury or a death in the family.

Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since “they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.” Whereas the poor engage more with faceless educators and employers, “the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.”•



You go to war with the whole world and you lose. Just ask Germany.

The hit-and-run thuggery of ISIS differed from Al-Qaeda and other terrorist organizations in one important way: It actually put stakes in the ground, conquering cities and laying claim to land. It sought permanence in a material way, aspiring to become a state.

It didn’t work out. Once other nations processed what was happening and started pushing back, ISIS began what will likely be a permanent retreat. Having grown more desperate, the terrorist organization began striking abroad, offering a few wild haymakers before the end of a losing fight.

When ISIS as a fledgling nation is permanently disabled, will the land it ruled, if briefly, in Iraq and Syria become a breeding ground for new terror groups and sectarian violence? Given the recent and distant history of the region, what’s past may be prologue. In a well-written Wall Street Journal “Saturday Essay,” Yaroslav Trofimov argues that the trouble won’t end when ISIS is extinguished, with factions that fought together against it perhaps turning on one another. An excerpt:

It is easy to think that Islamic State is still on the march. It isn’t. Over the past year, the territory under its control—once roughly the size of the U.K.—has shrunk rapidly in both Iraq and Syria. Islamic State has lost the Iraqi cities of Ramadi and Fallujah, the ancient Syrian city of Palmyra and the northern Syrian countryside bordering on Turkey. Its militants in Libya were ousted in recent weeks from their headquarters in Sirte. In coming months, the group will face a battle that it is unlikely to win for its two most important remaining centers—Mosul in Iraq and Raqqa in Syria.

It may be tempting fate to ask the question, but it must be asked all the same: What happens once Islamic State falls? The future of the Middle East may well depend on who fills the void that it leaves behind both on the ground and, perhaps more important, in the imagination of jihadists around the world.

As we mark the 15th anniversary this weekend of the terrorist attacks of 9/11, one likely consequence of the demise of ISIS (as Islamic State in Iraq and Syria is often known) will be to revive its ideological rival, al Qaeda, which opposed Mr. Baghdadi’s ambitions from the start. Al Qaeda may yet unleash a fresh wave of terrorist attacks in the West and elsewhere—as may the remnants of Islamic State, eager to show that they still matter.

“Simply having ISIS go away doesn’t mean that the jihadist problem goes away,” said Daniel Benjamin of Dartmouth College, who served as the State Department’s counterterrorism coordinator during the Obama administration. “Eliminating the caliphate will be an achievement—but more likely, it will be just the end of the beginning rather than the beginning of the end.”•




When Silicon Valley stalwart Marc Andreessen directs mocking comments or tweets at those who fear the Second Machine Age could lead to mass unemployment, even societal upheaval, he usually depicts them as something akin to Luddites, pointing out that the Industrial Revolution allowed for the creation of more and better jobs. Of course, history doesn’t necessarily repeat itself. 

In a Bloomberg View column, Noah Smith says that while the statistics say a robot revolution hasn’t yet arrived and may or may not emerge, relying on the past to predict the future isn’t sound strategy. An excerpt:

Predicting whether machines will make the bulk of humans useless is beyond my capability. The future of technology is much too hard to predict. But I can say this: one of the main arguments often used to rule out this worrisome possibility is very shaky. If you think that history proves that humans can’t be replaced, think again.

I see this argument all the time. Because humans have never been replaced before, people say, it can’t happen in the future. Many cite the example of the Luddites, British textile workers in the early 19th century who protested against the introduction of technologies that could do their jobs more cheaply. In retrospect, the Luddites look foolish. As industrial technology improved, skilled workers were not impoverished — instead, they found ever-more-lucrative jobs that made use of new tools. As a result, “Luddite” is now a term of derision for those who doubt the power of technology to improve the world.

A more sophisticated version of this argument is offered by John Lewis of the Bank of England, in a recent blog post. Reviewing economic history, he shows what most people intuitively understand — new technology has complemented human labor rather than replacing it. Indeed, as Lewis points out, most macroeconomic models assume that the relationship between technology and humans is basically fixed.

That’s the problem, though — economic assumptions are right, until they’re not. The future isn’t always like the past. Sometimes it breaks in radical ways.•




If you like your human beings to come with fingers and toes, you may be disquieted by the Reddit Ask Me Anything conducted by Andrew Hessel, a futurist and a “biotechnology catalyst” at Autodesk. It’s undeniably one of the smartest and headiest AMAs I’ve ever read, with the researcher fielding questions about a variety of flaws and illnesses plaguing people that biotech may be able to address, even eliminate. Of course, depending on your perspective, humanness itself can be seen as a failing, something to be “cured.” A few exchanges follow.


Four questions, all on the same theme:

1) What is the probable timeframe for when we’ll see treatments that will slow senescence?

2) Do we have a realistic estimate, in years or decades (or bigger, fingers crossed!), on the life extension potential of such treatments?

3) Is it realistic that such treatments would also include senescence reversal (de-ageing)?

4) Is there any indication at present as to what kind of form these treatments will take, particularly with regards to their invasiveness?

Andrew Hessel:

1) We are already seeing some interesting results here — probably the most compelling I’ve seen is in programming individually senescent cells to die. More work needs to be done. 2) In humans, no. We are already long-lived. Experiments that lead to longer life can’t be rushed — the results come at the end! 3) TBD — but I can’t see why not 4) Again, TBD, but I think it will involve tech like viruses and nanoparticles that can target cells / tissue with precision.

Overall, trying to extend our bodies may be throwing good effort at a bad idea. In some ways, the important thing is to be able to extract and transfer experience and memory (data). We do this when we upgrade our phones, computers, etc.


Can Cas9/CRISPR edit any gene that controls physical appearance in an adult human? Say, for example, it’s the gene that controls the growth of a tail–will reactivating it actually cause a tail to grow in an already mature human?

Andrew Hessel:

It’s a powerful editing technology that could potentially allow changing appearance. The problem is editing a fully developed organism is new territory. Also, there’s the challenges of reprogramming millions or billions of cells! But it’s only a 4 year old technology, lots of room to explore and learn.


I’m an artist who’s curious about using democratized genetic-engineering techniques (i.e., CRISPR) to make new and aesthetically interesting plant life, like roses the size of sunflowers or lilies and irises in shapes and colors nobody has ever seen. Is this something that is doable by a non-scientist with the tools and understanding available today? I know there are people inserting phosphorescence into plant genes–I’d like to go one further and actually start designing flowers, or at least mucking around with the code to see what kinds of (hopefully) pretty things emerge. I’d love your thoughts on this… Thanks!

Andrew Hessel:

I think it’s totally reasonable to start thinking about this. CRISPR allows for edits of genomes and using this to explore size/shape/color etc of plants is fascinating. As genome engineering (including whole genome synthesis) tech becomes cheaper and faster, doing more extensive design work will be within reach. The costs need to drop dramatically though — unless you’re a very rich artist. :-) As for training, biodesign is already so complicated that you need software tools to help. The software tools are going to improve a lot in the future, allowing designers to focus more on what they want to make, rather than the low level details of how to make it. But we still have a ways to go on this front. We still don’t have great programming tools for single cells, let alone more complex organisms. But they WILL come.


So my question is, do you think there will be a “biological singularity,” similar to Ray Kurzweil’s “technological singularity?”

Will there be a time in the near future where the exponential changes in genetic engineering (synthetic biology, dna synthesis, genome sequencing, etc.) will have such a profound impact on human civilization that it is difficult to predict what the future will be like?

Andrew Hessel:

I think it’s already hard to figure out where the future is going. Seriously. Who would have predicted politics to play out this way this year? But yes, I think Kurzweil calls it right that the combination of accelerating computation, biotech, etc creates a technological future that is hard to imagine. This said, I don’t think civilization will change that quickly. Computers haven’t changed the fundamentals of life, just the details of how we go about our days. Biotech changes can be no less profound, but they take longer to code, test, and implement. Overall, though, I think we come out of this century with a lot more capabilities than we brought into it!•



Desperation sounds funny when expressed in words. A scream would probably be more coherent.

Nobody really knows how to make the newspapers and magazines of a bygone era profitable in this one, and the great utility they provided–the traditional press was called the fourth branch of government–is not so slowly slipping away. What’s replaced much of it online has been relatively thin gruel, with the important and unglamorous work of covering local politics unattractive in a viral, big-picture machine.

All I know is when Condé Nast is using IBM’s Watson to help advertisers “activate” the “right influencers” for their “brands,” we’re all in trouble.

From The Drum:

With top titles like Vogue, Vanity Fair, Glamour and GQ, Conde Nast’s partnership heralds a key step merging targeted influencer marketing and artificial intelligence in the fashion and lifestyle industry. The platform will be used to help advertiser clients improve how they connect with audiences over social media and gain measurable insights into how their campaigns resonate.

“Partnering with Influential to leverage Watson’s cognitive capabilities to identify the right influencers and activate them on the right campaigns gives our clients an advantage and increases our performance, which is paramount in today’s distributed content world,” said Matt Starker, general manager, digital strategy and initiatives at Condé Nast. “We engage our audiences in innovative ways, across all platforms, and this partnership is another step in that innovation.”

By analyzing unstructured data from an influencer’s social media feed and identifying key characteristics that resonate with a target demographic, the Influential platform uses IBM’s personality insights into, for example, a beauty brand that focuses on self-enhancement, imagination and trust. This analysis helps advertisers identify the right influencers by homing in on previously hard-to-measure metrics–like how they are perceived by their followers, and how well their specific personality fits the personality of the brand.•


In a Literary Review piece about Kyle Arnold’s new title, The Divine Madness Of Philip K. Dick, Mike Jay, who knows a thing or two about the delusions that bedevil us, writes about the insane inner world of the speed-typing, speed-taking visionary who lived during the latter stages of his life, quite appropriately, near the quasi-totalitarian theme park Disneyland, a land where mice talk and corporate propaganda is endlessly broadcast. Dick was a hypochondriac about the contents of his head, and it’s no surprise his life was littered with amphetamines, anorexia and anxiety, which drove his brilliance and abbreviated it.

The opening:

Across dozens of novels and well over a hundred short stories, Philip K Dick worried away at one theme above all others: the world is not as it seems. He worked through every imaginable scenario: consensus reality was variously a set of implanted memories, a drug-induced hallucination, a time slip, a covert military simulation, an illusion projected by mega-corporations or extraterrestrials, or a test set by God. His typical protagonist was conspired against, drugged, hypnotised, paranoid, schizophrenic – or, possibly, the only person in possession of the truth.

The preoccupation all too clearly reflected the author’s life. Dick was a chronic doubter, tormented, like René Descartes, by the suspicion that the world was the creation of an evil demon ‘who has directed his entire effort to misleading me’. But cogito ergo sum was not enough to rescue someone who in 1972, during one of his frequent bouts of persecution mania, called the police to confess to being an android. Dick took scepticism to a level that he made his own. It became his brand, and since his death it has been franchised across popular culture. He isn’t credited on Hollywood blockbusters such as The Matrix (in which reality is a simulation created by machines from the future) or The Truman Show (about a reality TV programme in which all but the protagonist are complicit), but their mind-bending plot twists are his in all but name.

As Kyle Arnold acknowledges early in his lucid and accessible study, it would be impossible to investigate the roots of Dick’s cosmic doubt more doggedly than he did himself. He was ‘his own best psychobiographer’…


Some sort of survival mechanism allows us to forget the full horror of a tragedy, and that’s a good thing. That fading of facts makes it possible for us to go on. But it’s dangerous to be completely amnesiac about disaster.

Case in point: In 2014, Barry Diller announced plans to build a lavish park off Manhattan at the pier where Titanic survivors came to shore. Dial back just two years before that to another waterlogged disaster, when Hurricane Sandy struck the city, and imagine such an island scheme even being suggested then. The wonder at that point was whether Manhattan was long for this world. Diller’s designs don’t sound much different from the captain of the supposedly unsinkable ship ordering a swimming pool built on the deck after the ship hit an iceberg.

In New York magazine, Andrew Rice provides an excellent profile of the scientist Klaus Jacob, who believes NYC, as we know it, has no future. The academic could be wrong, but if he isn’t, his words about the effects of Irene and Sandy are chilling: “God forbid what’s next.”

The opening:

Klaus Jacob, a German professor affiliated with Columbia University’s Lamont-Doherty Earth Observatory, is a geophysicist by profession and a doomsayer by disposition. I’ve gotten to know him over the past few years, as I’ve sought to understand the greatest threat to life in New York as we know it. Jacob has a white beard and a ponderous accent: Imagine if Werner Herzog happened to be a renowned expert on disaster risk. Jacob believes most people live in an irrational state of “risk denial,” and he takes delight in dispelling their blissful ignorance. “If you want to survive an earthquake, don’t buy a brownstone,” he once cautioned me, citing the catastrophic potential of a long-dormant fault line that runs under the city. When Mayor Bloomberg announced nine years ago an initiative to plant a million trees, Jacob thought, That’s nice — but what about tornadoes?

For the past 15 years or so, Jacob has been primarily preoccupied with a more existential danger: the rising sea. The latest scientific findings suggest that a child born today in this island metropolis may live to see the waters around it swell by six feet, as the previously hypothetical consequences of global warming take on an escalating — and unstoppable — force. “I have made it my mission,” Jacob says, “to think long term.” The life span of a city is measured in centuries, and New York, which is approaching its fifth, probably doesn’t have another five to go, at least in any presently recognizable form. Instead, Jacob has said, the city will become a “gradual Atlantis.”

The deluge will begin slowly, and irregularly, and so it will confound human perceptions of change. Areas that never had flash floods will start to experience them, in part because global warming will also increase precipitation. High tides will spill over old bulkheads when there is a full moon. People will start carrying galoshes to work. All the commercial skyscrapers, housing, cultural institutions that currently sit near the waterline will be forced to contend with routine inundation. And cataclysmic floods will become more common, because, to put it simply, if the baseline water level is higher, every storm surge will be that much stronger. Now, a surge of six feet has a one percent chance of happening each year — it’s what climatologists call a “100 year” storm. By 2050, if sea-level rise happens as rapidly as many scientists think it will, today’s hundred-year floods will become five times more likely, making mass destruction a once-a-generation occurrence. Like a stumbling boxer, the city will try to keep its guard up, but the sea will only gain strength.•
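The storm odds the excerpt cites can be sanity-checked with a few lines of arithmetic. This is only a back-of-envelope sketch under simplifying assumptions of my own (independent years, constant annual probability), not anything from the report itself:

```python
# Back-of-envelope check of the flood odds cited in the excerpt.
# Assumes each year is independent, with a constant annual probability.

def chance_within(years: int, annual_prob: float) -> float:
    """Probability of at least one event occurring in the given span of years."""
    return 1 - (1 - annual_prob) ** years

# Today's "100-year" storm: a 1 percent chance each year.
today = chance_within(30, 0.01)

# The 2050 scenario: five times more likely, i.e. 5 percent per year,
# which means a mean recurrence interval of 1 / 0.05 = 20 years --
# roughly the "once a generation" the article describes.
future = chance_within(30, 0.05)

print(f"Chance of at least one six-foot surge over 30 years, today: {today:.0%}")
print(f"Same chance under the 2050 scenario: {future:.0%}")
```

Run as written, the numbers come out to roughly a one-in-four chance over 30 years today versus better than three-in-four under the 2050 scenario, which is the gap the excerpt is gesturing at.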



Victor Frankenstein was right to name his horrid creation after himself because those who enable monsters are monsters. 

Everyone who’s had a hand in enabling the rise of Donald Trump, an American Mussolini, owns his disgraceful behavior, be it a Beltway insider like James Baker, a Silicon Valley emotional homunculus such as Peter Thiel, or Ivanka Trump and Jared Kushner. They don’t emerge clean after backing a condo salesman who’s remade himself as Bull Connor. His stains are theirs. That doesn’t mean they’ll get what they truly deserve in life, as few do, but it’s clear who they are now.

Why would someone who should know better behave this way? For power, either directly for themselves or for some misguided ideology. An excerpt from Hannah Seligson’s Highline piece “Is Ivanka for Real?”:

It’s also unlikely that Ivanka would hear many qualms about Donald’s tactics from her husband. According to news reports, Jared is thrilled about the prospect of making it to the White House or perhaps starting a media company with Donald after the election is over. He also seems to be unfazed by his father-in-law’s racially insensitive positions. Esquire reported that he told some Jewish friends who disliked Donald’s anti-Muslim rhetoric that they “don’t understand what America is or what American people think.” Somebody who has spent significant time with Ivanka and Jared said they genuinely seem to love each other and have a strong marriage. But he also observed how insular their world can be. Their birthday parties, he said, are assemblages of high-society and power types like Hugh Jackman and Eric Schmidt, not of close friends. Another person who went to Jared’s 35th birthday party at the Gramercy Park Hotel told Esquire that the median age of the attendees was close to 70.

I asked someone else who has known both Ivanka and Jared for years why they had thrown their lot in with Donald so whole-heartedly. “Power, power, power, power,” he speculated. “Jared’s got plenty of money, but the only way he can separate himself from his family is power. They’re a great match because that’s also what Ivanka is after.”

Ivanka and Jared appear to have made the calculation that, even with some bad press, the exposure provided by a presidential run will only make them more influential over time. “It’s in the Trump DNA to capitalize on every opportunity,” said someone who knows Ivanka both personally and professionally. “And Ivanka is taking this as an opportunity to build her brand with millions upon millions of people looking.” On the morning after her speech at the GOP Convention, her official brand account tweeted, “Shop Ivanka’s look from her #RNC speech” along with a link to Nordstrom, which, at the time, was selling her $158 rose-colored sheath dress. It sold out. The day before, she had posted a picture of Mike Pence and her family on her blog, declaring, “I couldn’t be more proud of what my father has accomplished!” The caption contained a link to the shoes she was wearing—light blue round-toe pumps from her line—that Lord & Taylor still has on clearance for $67.50.



Before robots take our jobs, a more mechanical form of human will handle many of them. In fact, it’s already happening.

The new connectedness and tools have allowed for certain types of employment to be shrunk if not disappeared. It’s true whether your collar is blue or white, whether you have a job or career, if you’re a taxi driver or the passenger being transported to a fancy office.

“Meatware”–a term which perfectly sums up a faceless type of human labor pool–minimizes formerly high-paying positions into tasks any rabbit can handle. It’s a race to the bottom, where there’s plenty of room, with the winners also being losers.

In Mark Harris’ insightful Backchannel article, the writer hired some Mechanical Turk-ers to explore the piecework phenomenon. The opening:

Harry K. sits at his desk in Vancouver, Canada, scanning sepia-tinted swirls, loops and blobs on his computer screen. Every second or so, he jabs at his mouse and adds a fluorescent dot to the image. After a minute, a new image pops up in front of him.

Harry is tagging images of cells removed from breast cancers. It’s a painstaking job but not a difficult one, he says: “It’s like playing Etch A Sketch or a video game where you color in certain dots.”

Harry found the gig on Crowdflower, a crowdworking platform. Usually that cell-tagging task would be the job of pathologists, who typically start their careers with annual salaries of around $200,000 — an hourly wage of about $80. Harry, on the other hand, earns just four cents for annotating a batch of five images, which takes him between two and eight minutes. His hourly wage is about 60 cents.

Granted, Harry can’t perform most of the tasks in a pathologist’s repertoire. But in 2016 — 11 years after the launch of the ur-platform, Amazon Mechanical Turk — crowdworking (sometimes also called crowdsourcing) is eating into increasingly high-skilled jobs. The engineers who are developing this model of labor have a bold ambition to atomize entire careers into micro-tasks that almost anyone, anywhere in the world, can carry out online. They’re banking on the idea that any technology that can make a complex process 100 times cheaper, as in Harry’s case, will spread like wildfire.•



It’s no mere coincidence painkillers have been the hot drug in America in this new century, because, wow, it’s hurt. 

Until recently, I had relatives living in the Oxy capital of NYC, and when I visited and walked around, it was a bit like encountering zombies, lost souls still hopeful enough to continue buying lottery tickets but unable to wish for more. That’s about as much as the Dream lives, not only there but in many stretches of the U.S. It’s been decades of decline for the former middle class, and for a lot of folks it feels like endgame. It’s not their imagination.

Big Pharma incentivized doctors to hand out fistfuls of opioid scripts, sure, but the loss of hope was the other toxic half of the equation. Hard drugs were once the province of the poor who were already at rock bottom and comfortably middle-class kids who could afford a (temporary) fall, but almost nobody can pay that price anymore, even as the nation grows wealthier in the macro. That’s led some to do the unthinkable, to embrace a Berlusconi who dreams of being a Mussolini, someone who wants to Make America White Again. That’s a lottery without a winning number.

The great David Simon, the Victor Hugo of Baltimore, just conducted a Reddit Ask Me Anything and addressed this topic, among others. A few exchanges follow.


I can genuinely say that The Wire directly inspired me to pursue the career path that I’m in today. I first watched the show while in college, and it informed me about many issues that I had previously been unaware of or apathetic to. Bubbles’ story arc connected with me so deeply that I took my first sociology course and began volunteering with homeless populations. Today I’m working as a substance abuse and mental health care coordinator in the field of community health, where I primarily work with lower income and homeless individuals.

The content you create has an impeccable ability to educate the public about real world issues through compelling storytelling that is absolutely unmatched. Thank you for the work that you do and inspiring me to pursue a career in a field that I previously wouldn’t have considered.

At this point what do you believe needs to happen to start making an impact in combating the growing opioid epidemic in our country?

David Simon:

I believe the abuse of narcotics — whether street drugs or pharmaceuticals — is the result of a fundamental existential crisis among working and middle-class Americans in the same way that it was once that for the underclass. We need to return to an economic model that values labor, and the human lives that comprise labor.


What’s your take on the Black Lives Matter vs. Blue Lives Matter situation?

David Simon:

Black lives matter. So do blue lives. But the context of the “black lives matter” credo is that unlike blue lives, or white lives — which have de facto mattered in our country for generations — African-Americans have been far too vulnerable to unnecessary and hyperbolic response by law enforcement. This is simply so, and is now evidenced by the smart phone revolution.


Where do you see print journalism heading in the next decade? Any examples of recent work that you find interesting?

David Simon:

I want and we need to see an on-line revenue stream for journalism established that ensures that professional reporters can earn a living covering the quotidian beats of institutionalized America. When stuff is funded, it’s good and fixed and every day. Citizen journalist is not a phrase I take seriously in any sense. I think Pro Publica and Mother Jones and a number of on-line elements show great chops; but the money still isn’t right. People need to pay and copyright has to matter again, or it can’t grow as it needs.


What can a common person do to stop the death of journalism?

David Simon:

Pay for it. Online. Pay a little bit each month. You did when they dumped it on the doorstep, and you can pay even less than that now to support the salaries of trained reporters and photographers and videographers.


What’s your bucket list project or subject that you’d like to tackle?

David Simon:

A history of the CIA from post-WWII to 9/11/2001. And a narrative of the American leftists who fought in Spain and paid early for our stated ideals. Also, a small feature film about David Maulsby, a rewrite man, and Jack, a gorilla at the Baltimore Zoo. I’ll say no more about that.•



In an Atlantic Q&A, Derek Thompson has a smart conversation with the Economist’s Ryan Avent, the author of the soon-to-be-published The Wealth of Humans, a book whose sly title suggests abundance may not arrive without a degree of menace. Avent is firmly in the McAfee-Brynjolfsson camp, believing the Digital Age will rival the Industrial one in its spurring of economic and societal disruption. An excerpt:

The Atlantic:

There is an ongoing debate about whether technological growth is accelerating, as economists like Erik Brynjolfsson and Andrew McAfee (the authors of The Second Machine Age) insist, or slowing down, as the national productivity numbers indicate. Where do you come down?

Ryan Avent:

I come down squarely in the Brynjolfsson and McAfee camp and strongly disagree with economists like Robert Gordon, who have said that growth is basically over. I think the digital revolution is probably going to be as important and transformative as the industrial revolution. The main reason is machine intelligence, a general-purpose technology that can be used anywhere, from driving cars to customer service, and it’s getting better very, very quickly. There’s no reason to think that improvement will slow down, whether or not Moore’s Law continues.

I think this transformative revolution will create an abundance of labor. It will create enormous growth in [the supply of workers and machines], automating a lot of industries and boosting productivity. When you have this glut of workers, it plays havoc with existing institutions.

I think we are headed for a really important era in economic history. The Industrial Revolution is a pretty good guide of what that will look like. There will have to be a societal negotiation for how to share the gains from growth. That process will be long and drawn out. It will involve intense ideological conflict, and history suggests that a lot will go wrong.•



We were a web before there was a Web. Things didn’t begin going viral in the Digital Age, and human systems existed long before the Industrial Revolution or even agriculture. None of this is new. What’s different about the era of machines is the brute efficiency of data due to heightened computing power being applied to increasingly ubiquitous connectedness. 

Some more dazzling thoughts from Yuval Noah Harari on “Dataism” can be read in Wired UK, which presents a passage from the historian’s forthcoming book, Homo Deus: A Brief History of Tomorrow. Just two examples: 1) “The entire human species is a single data-processing system,” and 2) “We often imagine that democracy and the free market won because they were ‘good.’ In truth, they won because they improved the global data-processing system.”

Harari writes that humans are viewed increasingly as an anachronism by Dataists, who prefer intelligence to the continuation of the species, which they deem an “outdated technology.” Like the Finnish philosopher Erkki Kurenniemi, they doubt the long-term preservation of “slime-based machines.” I don’t know how widespread this feeling really is, but I have read theorists in computing who feel it their duty to always opt for greater intelligence, even if it should come at the cost of humanity. I think the greater threat to our survival isn’t conscious decisions made at our expense but rather the natural progression of systems that don’t necessarily require us.

An excerpt:

Like capitalism, Dataism too began as a neutral scientific theory, but is now mutating into a religion that claims to determine right and wrong. The supreme value of this new religion is “information flow”. If life is the movement of information, and if we think that life is good, it follows that we should extend, deepen and spread the flow of information in the universe. According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.

This vision is reminiscent of some traditional religious visions. Thus Hindus believe that humans can and should merge into the universal soul of the cosmos – the atman. Christians believe that after death, saints are filled by the infinite grace of God, whereas sinners cut themselves off from His presence. Indeed, in Silicon Valley, the Dataist prophets consciously use traditional messianic language. For example, Ray Kurzweil’s book of prophecies is called The Singularity is Near, echoing John the Baptist’s cry: “the kingdom of heaven is near” (Matthew 3:2).

Dataists explain to those who still worship flesh-and-blood mortals that they are overly attached to outdated technology. Homo sapiens is an obsolete algorithm. After all, what’s the advantage of humans over chickens? Only that in humans information flows in much more complex patterns than in chickens. Humans absorb more data, and process it using better algorithms. (In day-to-day language, that means that humans allegedly have deeper emotions and superior intellectual abilities. But remember that, according to current biological dogma, emotions and intelligence are just algorithms.)

Well then, if we could create a data-processing system that absorbs even more data than a human being, and that processes it even more efficiently, wouldn’t that system be superior to a human in exactly the same way that a human is superior to a chicken?•



Thomas Friedman’s popular notion that nations don’t go to war if they share financial concerns (and a taste for McDonald’s french fries) failed to take something awfully important into account: Not everyone is rational and places material welfare above ideology. Some, in fact, are complete loons who want to blow those Golden Arches to kingdom come.

Thomas Nagel writes on a related topic for the London Review of Books, critiquing Richard English’s Does Terrorism Work? Immoral as it is, politically motivated violence certainly can be used effectively by powerful states (though it sometimes backfires), but the philosopher wonders if terror can secure victory for non-government groups (Al Qaeda, ISIS, etc.). He concludes such actions almost never succeed, except in rare cases where there are extenuating circumstances. Why then the continued improvisation of explosive devices? Nagel argues that delusion takes hold over groups that realize non-violent measures won’t triumph but don’t comprehend that neither will violent ones. An excerpt:

English makes it clear that one of the things these four groups share is hatred and the desire for revenge, which comes out in personal testimony if not always in their official statements of aims. He quotes Osama bin Laden: ‘Every Muslim, from the moment they realise the distinction in their hearts, hates Americans, hates Jews and hates Christians.’ Revenge for perceived injuries and humiliations is a powerful motive for violence, and if it is counted as a secondary aim of these movements, it defines a sense in which terrorism automatically ‘works’ whenever it kills or maims members of the target group. In that sense the destruction of the World Trade Center and Mountbatten’s assassination were sterling examples of terrorism working. But even though English includes revenge in his accounting, this is not what would ordinarily be meant by the question, ‘Does terrorism work?’ What we really want to know about are the political effects.

And here the record is dismal. What struck me on reading this book is how delusional these movements are, how little understanding they have of the balance of forces, the motives of their opponents and the political context in which they are operating. In this respect, it is excessively charitable to describe them as rational agents. True, they are employing violent means which they believe will induce their opponents to give up, but that belief is plainly irrational, and in any event false, as shown by the results.•



Seminal reading about NYC of the last five decades is “My Lost City,” Luc Sante’s brilliant 2003 New York Review of Books paean to an era not too long ago when hardly anyone here was a have-not, even if they were poor, with a trove of printed matter and records and furnishings to be had on many a curb to whoever was willing to haul it away. The riches poureth over and provided a different, and often deeper, kind of wealth. Okay, some people truly had it worse four decades or so ago. For instance, child prostitutes were a staple of Times Square. Relentless gentrification, however, wasn’t the only way to deal with that horror.

Sante has now republished “The Last Time I Saw Basquiat” in the NYRB, another piece about a time of greater creativity that’s been lost, though he’s hopeful in asserting that the struggle against wealth inequality for an affordable, working-class New York continues. I wish I felt the same. In Cohen-esque terms, the war to me seems over, the good guys having lost. 

An excerpt:

The last time I saw Jean I was going home from work, had just passed through the turnstile at the 57th Street BMT station. We spotted each other, he at the bottom of the stairs, me at the top. As he climbed I witnessed a little silent movie. He stopped briefly at the first landing, whipped out a marker and rapidly wrote something on the wall, then went up to the second landing, where two cops emerged from a recess and collared him. I kept going.

A month later he was famous and I never saw him again. We no longer traveled in the same circles. I was happy for him, but then it became obvious he was flaming out at an alarming pace. I heard stories of misery and excess, the compass needle flying around the dial, a crash looming. When he died I mourned, but it seemed inevitable, as well as a symptom of the times, the wretched Eighties. He was a casualty in a war—a war that, by the way, continues. Years later I needed money badly and undertook to sell the Basquiat productions I own, but got no takers, since they were too early, failed to display the classic Basquiat look. I’m glad it turned out that way.•


Mother Teresa, now officially a saint, was given an unmitigated flogging in the 2003 Slate piece “Mommie Dearest,” written by Christopher Hitchens, that godless heathen (I mean that as a compliment as well as a statement of fact). An excerpt:

MT was not a friend of the poor. She was a friend of poverty. She said that suffering was a gift from God. She spent her life opposing the only known cure for poverty, which is the empowerment of women and the emancipation of them from a livestock version of compulsory reproduction. And she was a friend to the worst of the rich, taking misappropriated money from the atrocious Duvalier family in Haiti (whose rule she praised in return) and from Charles Keating of the Lincoln Savings and Loan. Where did that money, and all the other donations, go? The primitive hospice in Calcutta was as run down when she died as it always had been—she preferred California clinics when she got sick herself—and her order always refused to publish any audit. But we have her own claim that she opened 500 convents in more than a hundred countries, all bearing the name of her own order. Excuse me, but this is modesty and humility?

The rich world has a poor conscience, and many people liked to alleviate their own unease by sending money to a woman who seemed like an activist for “the poorest of the poor.” People do not like to admit that they have been gulled or conned, so a vested interest in the myth was permitted to arise, and a lazy media never bothered to ask any follow-up questions. Many volunteers who went to Calcutta came back abruptly disillusioned by the stern ideology and poverty-loving practice of the ‘Missionaries of Charity,’ but they had no audience for their story. George Orwell’s admonition in his essay on Gandhi—that saints should always be presumed guilty until proved innocent—was drowned in a Niagara of soft-hearted, soft-headed, and uninquiring propaganda.•


In a great Politico piece, Kevin Baker sees Rudy Giuliani’s second honeymoon-ish “American Mayor” phase as a brief, mostly unearned aberration in a nearly three-decade campaign of race baiting and a distortion of facts that began in earnest with his defeat by David Dinkins in the race for Gracie Mansion in 1989. Those who see Giuliani’s deranged delegate speeches for Donald Trump as odd may think again, as the former created something of a template for the latter during his 1993 rise to the mayoralty, an ugly spectacle of lies and hate speech that served to divide the city. Baker is masterful at defying a collective (and often faulty) memory of NYC politics, recalling the past with great clarity and some glorious phrasing–he describes Daniel Moynihan as “New York’s great stuffed owl of a senator.” Perhaps most damning is the writer’s excoriation of Giuliani’s two terms in office, which were largely lackluster and incessantly petty. An excerpt:

Nobody remembers it this way now, but the Dinkins administration compiled New York’s best record on crime since World War II, adding 6,000 more cops and enjoying a record 36 straight months of drops in the crime rate. But for New Yorkers this was eclipsed by big headline events like the Crown Heights riot of 1991—a clash between African-Americans and Orthodox Jews that Giuliani would insist on calling a “pogrom,” implying that it was countenanced by Mayor Dinkins. The crime statistics had turned around, and quality of life was slowly but visibly improving in much of New York, but that’s not how people saw it at the time—in part thanks to Giuliani’s relentless, Trumpian campaign to tell them it was still a cesspool.

Even once-liberal elements of the press internalized Giuliani’s apocalyptic view of his own city. Richard Cohen, in an October 1993 column in the Washington Post the month before the election, scoffed that, “Aside from the deranged, there’s not a single Gothamite who thinks it has gotten better under Dinkins—no matter what his statistics say,” while the Times’ James McKinley concluded, “Mr. Dinkins will never be able to prove his policies have curbed crime.” John Taylor, in Time, conceded that New Yorkers might actually be safer, but that they felt less safe, because the crimes still going on—though he did not give a specific example—were Trumpishly hellish: “Entire families are executed in drug wars. Teenagers kill each other over sneakers. Robbers casually shoot victims even if they have surrendered wallets. The proliferation of carjackings means people are no longer safe even in their automobiles.”

With actual facts about the crime rate effectively banished from the debate, pundits could feel free to embrace the throwback notion pushed by Giuliani that America’s real urban problem was not so much poverty or racism, but black people demanding special treatment, much like their tribune in city hall. Black-scolding reached a sort of frenzy that April, when New York’s great stuffed owl of a senator, Daniel Patrick Moynihan, gave his famous, “Defining deviancy down” speech, in which he asked “what in the last 50 years in New York is now better than it was” back in 1943, and concluded that nothing was better, especially crime. Moynihan received almost universal adoration for these supposedly bold words, the media having failed to notice that crime was at record lows in 1943 because most of the city’s young men were off fighting something called World War II. Or that there was a deadly race riot in New York that year anyway, set off by a cop shooting a black soldier. Or that Harlem had been officially “off-limits” to visiting white servicemen, or that black people were effectively banned from all of the city’s best restaurants, hotels, colleges, hospitals, or jobs in 1943.

Whatever. The Giuliani campaign, and its attendant press corps, was as far past facts as the Trump campaign is now. The perception became the reality.•



Donald Trump, Father Coughlin reborn as Yucko the Clown, has in the past couple of weeks often tried to keep his Mussolini-esque tendencies under wraps, fitfully feigning concern for key minority voting blocs–the Birther as civil rights activist, President Arpaio as Mexican diplomat–before abruptly reverting to his rancidness as he did with his appalling immigration speech in Arizona. 

Some in the media have already given in to judging Trump merely on how his depraved comments play in the polls. From T.A. Frank at Vanity Fair:

“So how did Trump’s speech, delivered at a Phoenix rally, really go? No one can yet say. Ultimately, the only relevant measurement is whether it moved his numbers up or down relative to those of his opponent…”

Yes, when it comes to the horse race, numbers are the only relevant measurement, and the horse race is of paramount importance. But just because winning by appealing to the sun-less side of the American eclipse is the only concern of Trump, that doesn’t mean his psychotic proclamations don’t mean anything on their own. They do. They’re despicable, and there’s utility in pointing that out independent of their efficacy or lack thereof.

The Phoenix speech was not received merrily within the Republican National Committee, which was already at odds with the campaign. Two excerpts follow about the continuous internecine war. 

From Alexander Burns and Maggie Haberman at the New York Times:

The Republican National Committee had high hopes that Donald J. Trump would deliver a compassionate and measured speech about immigration on Wednesday, and prepared to lavish praise on the candidate on the party’s Twitter account.

So when Mr. Trump instead offered a fiery denunciation of migrant criminals and suggested deporting Hillary Clinton, Reince Priebus, the party chairman, signaled that aides should scrap the plan, and the committee made no statement at all.

The evening tore a painful new wound in Mr. Trump’s relationship with the Republican National Committee, imperiling his most important remaining political alliance.

Mr. Priebus and his organization have been steadfastly supportive of Mr. Trump, defending him in public and spending millions of dollars to aid him. But the collaboration between Mr. Trump’s campaign and Mr. Priebus’s committee has grown strained over the last month, according to six senior Republicans with detailed knowledge of both groups, some of whom asked to speak anonymously for fear of exacerbating tensions.•

From Alex Isenstadt at Politico:

Late last week, with Labor Day and the final stretch of the 2016 campaign approaching, Trump’s son-in-law, Jared Kushner, met with Republican National Committee brass — including chief of staff Katie Walsh and political director Chris Carr — in New York City. Kushner, who has in many respects assumed the role of campaign manager, asked a series of direct questions to the GOP officials — all surrounding the troubles the party was having in deploying field staffers, opening up swing-state headquarters, and establishing field offices in battlegrounds that will decide the election.

Those present for the meeting, and those briefed on it, insisted there were no fireworks, no drag-out fights. But they said Kushner’s questions reflected a growing realization within Trump’s team that for all the party’s talk about implementing a major swing-state deployment plan, it hasn’t yet materialized.

For weeks, Republican officials and operatives have groused about a dearth of campaign infrastructure in battlegrounds across the country — a state of affairs that could have an impact on GOP candidates up and down the ballot. But like many aspects of the Trump campaign, the deployment plan has been wracked by confusion, false starts and a lack of quick decision-making. On Aug. 18, Paul Manafort, Trump’s former campaign chairman, came to Trump’s Alexandria, Virginia, headquarters for a day of meetings. He left ready to finalize a series of decisions.

But the next morning, Manafort, under withering scrutiny surrounding his work overseas, abruptly quit. His departure created a chain reaction, delaying the talks for days on end.•



What perplexed me about Gawker during the last few years of existence and throughout its holy-shit Hulk Hogan trial was that the principals on the inside of the company seemed tone-deaf at best and oblivious at worst. That allowed an emotional homunculus like Peter Thiel to use a short stack from his billions to drive the media company into bankruptcy.

In Matthew Garrahan’s Financial Times interview with Nick Denton, the former owner discusses why Thiel and others in Silicon Valley were so angered by the darts Gawker threw at them, stressing that insulation from outside criticism can be vital when building a corporation. Perhaps the same is true of those running an independent media empire?

An excerpt:

The appeal is likely to take at least a year to get to court, which means Denton and Thiel will not be burying the hatchet soon. And yet they have much in common. They are of similar age: Denton turned 50 last month, while Thiel will be 49 in October. They are both gay, tech-obsessed European émigrés (Thiel is from Germany; Denton from the UK) and they are both libertarians.

There the similarities end, Denton suggests. “Thiel’s idea of freedom is that you can create a society that is insulated from mainstream society … and imagine your own world in which none of the old rules apply.” He is alluding to Thiel’s interest in seasteading — the largely theoretical creation of autonomous societies beyond the reach of meddling national governments. “My idea of free society always had more of an anarcho-syndicalist bent,” he says. “If I was in Barcelona during the Spanish civil war [an anarcho-syndicalist] is probably what I would have been.”

Still, he says he understands the desire to operate beyond the restrictions of normal society, saying that such thinking is common in start-up culture. He points to Uber, the ride-sharing app, to underline the point. When its founders set out to launch a product that would up-end the personal transportation industry, they had to protect their vision from external doubters or naysayers. “You need to be insulated from the critics if you’re going to persuade people to join you, believe in you, invest in you.” Great companies are often based on a single idea, he continues. “And if someone questions that idea, it can undermine the support within the organisation for that idea.”

This, he says, explains Thiel’s animosity towards Gawker. Valleywag, a Denton-owned tech site that was eventually folded into Gawker.com, used to cover Silicon Valley with a critical eye and was a constant thorn in the side of its community of companies and investors — including Thiel.•



The robots may be coming for our jobs, but they’re not coming for our species, not yet.

Anyone worried about AI driving humans extinct in the short term is buying into sci-fi hype far too much, and those quipping that we’ll simply unplug the machines if they get too smart are underselling the more distant dangers. But in the near term, Weak AI (e.g., automation) is a far greater peril to society than Strong AI (e.g., conscious machines). It could move us into a post-scarcity tomorrow, or it could do great damage if it’s managed incorrectly. What happens if too many jobs are lost all at once? Will there be enough of a transition period to allow us to pivot?

In a Technology Review piece, Will Knight writes of a Stanford study on AI that predicts certain key disruptive technologies will not have cut a particularly wide swath by 2030. Of course, even this research, which takes a relatively conservative view of the future, suggests we start discussing social safety nets for those on the short end of what may become an even more imbalanced digital divide.

The opening:

The odds that artificial intelligence will enslave or eliminate humankind within the next decade or so are thankfully slim. So concludes a major report from Stanford University on the social and economic implications of artificial intelligence.

At the same time, however, the report concludes that AI looks certain to upend huge aspects of everyday life, from employment and education to transportation and entertainment. More than 20 leaders in the fields of AI, computer science, and robotics coauthored the report. The analysis is significant because the public alarm over the impact of AI threatens to shape public policy and corporate decisions.

It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030, but cautions that remaining technical obstacles will limit such technologies to certain niches. It also warns that the social and ethical implications of advances in AI, such as the potential for unemployment in certain areas and likely erosions of privacy driven by new forms of surveillance and data mining, will need to be open to discussion and debate.•



Not an original idea: Driverless cars are perfected in the near future and join traffic, and some disruptive souls, perhaps us, decide to purchase an autonomous taxi and set it to work. We charge less than any competitor, using our slim profits for maintenance and, eventually, to buy a second taxi. Those two turn into an ever-growing fleet. We subtract our original investment (and ourselves) from the equation and let this benevolent monster grow, ownerless, allowing it to automatically schedule its own repairs and purchases. Why would anyone need Uber or Lyft in such a scenario? Those outfits would be valueless.

In a very good Vanity Fair “Hive” piece, Nick Bilton doesn’t extrapolate Uber’s existential risk quite this far, but he writes wisely of the technology that may make rideshare companies a shooting star, enjoying a lifespan as brief as the Compact Disc’s, though minus the outrageous profits that format produced.

The opening:

Seven years ago, just before Uber opened for business, the company was valued at exactly zero dollars. Today, it is worth around $68 billion. But it is not inconceivable that Uber, as mighty as it currently appears, could one day return to its modest origins, worth nothing. Uber, in fact, knows this better than almost anyone. As Travis Kalanick, Uber’s chief executive, candidly articulated in an interview with Business Insider, ride-sharing companies are particularly vulnerable to an impending technology that is going to change our society in unimaginable ways: the driverless car. “The world is going to go self-driving and autonomous,” he unequivocally told Biz Carson. He continued: “So if that’s happening, what would happen if we weren’t a part of that future? If we weren’t part of the autonomy thing? Then the future passes us by, basically, in a very expeditious and efficient way.”

Kalanick wasn’t just being dramatic. He was being brutally honest. To understand how Uber and its competitors, such as Lyft and Juno, could be rendered useless by automation—leveled in the same way that they themselves leveled the taxi industry—you need to fast-forward a few years to a hypothetical version of the future that might seem surreal at the moment. But, I can assure you, it may well resemble how we will live very, very soon.•


A CBP Border Patrol Agent investigates a potential landing area for illegal immigrants along the Rio Grande River in Texas

Surveillance is a murky thing, almost always attended by self-censorship, quietly encouraging citizens to abridge their communication because maybe, perhaps, someone is watching or listening. It’s a chilling of civil rights that happens in a creeping manner. Nothing can be trusted, not even the mundane, not even your own judgement. That’s the goal, really, of such a system—that everyone should feel endlessly observed.

In a Texas Monthly piece, Sasha Von Oldershausen, a border reporter in West Texas, finds similarities between her stretch of America, which feverishly focuses on security from intruders, and her time spent living under the theocracy in Iran. An excerpt:

Surveillance is key to the CBP’s strategy at the border, but you don’t have to look to the skies for constant reminders that they’re there. Internal checkpoints located up to 100 miles from the border give Border Patrol agents the legal authority to search any person’s vehicle without a warrant. It’s enough to instill a feeling of guilt even in the most exemplary of citizens. For those commuting daily on roads fitted with these checkpoints, the search becomes rote: the need to prove one’s right to abide is an implicit part of life.

Despite the visible cues, it’s still hard to figure just how all-seeing the CBP’s eyes are. For one, understanding the “realities” of border security varies based on who you talk to.

Esteban Ornelas—a Mexican citizen who was charged with illegal entry into the United States in 2012 and deported shortly thereafter—swears that he was caught because a friend he was traveling through the backcountry with sent a text message to his family. “They traced the signal,” he told me in his hometown of Boquillas.

When I consulted CBP spokesperson Brooks and senior Border Patrol agent Stephen Crump about what Ornelas had told me, they looked at each other and laughed. “That’s pretty awesome,” Crump said. “Note to self: develop that technology.”

I immediately felt foolish to have asked. But when I asked Pauling that same question, his reply was much more austere: “I can’t answer that,” he said, and left it at that.•




Some argue, as John Thornhill does in a new Financial Times column, that technology may not be the main impediment to the proliferation of driverless cars. I doubt that’s true. If you could magically make available today relatively safe and highly functioning autonomous vehicles, ones that operated on a level superior to humans, then hearts, minds and legislation would soon favor the transition. I do think driving as recreation and sport would continue, but much of commerce and transport would shift to our robot friends.

Earlier in the development of driverless technology, I wondered if Americans would hand over the wheel any sooner than they’d turn in their guns, but I’ve since been convinced we (largely) will. We may have a macro fear of robots, but we hand over control to them with shocking alacrity. A shift to driverless cars wouldn’t be much different.

An excerpt from Thornhill in which he lists the main challenges, technological and otherwise, facing the sector:

First, there is the instinctive human resistance to handing over control to a robot, especially given fears of cyber-hacking. Second, for many drivers cars are an extension of their identity, a mechanical symbol of independence, control and freedom. They will not abandon them lightly.

Third, robots will always be held to far higher safety standards than humans. They will inevitably cause accidents. They will also have to be programmed to make a calculation that could kill their passengers or bystanders to minimise overall loss of life. This will create a fascinating philosophical sub-school of algorithmic morality. “Many of us are afraid that one reckless act will cause an accident that causes a backlash and shuts down the industry for a decade,” says the Silicon Valley engineer. “That would be tragic if you could have saved tens of thousands of lives a year.”

Fourth, the deployment of autonomous vehicles could destroy millions of jobs. Their rapid introduction is certain to provoke resistance. There are 3.5m professional lorry drivers in the US.

Fifth, the insurance industry and legal community have to wrap their heads around some tricky liability issues. In what circumstances is the owner, car manufacturer or software developer responsible for damage?•

