There’s a decent chance that we become the “monsters” we fear.

Superintelligent machines that become our conquerors are often said to be on the horizon, but I think they’re further off than that. When we do get close enough to see them, they may resemble us. They will be another version of ourselves, a new “human” resulting from a souped-up evolution of our own design. We will be the end of us. Not soon, but someday.

In Stephen Hsu’s excellent Nautilus essay about a co-evolution of carbon and silicon, he envisions a “future with a diversity of both human and machine intelligences.” An excerpt:

By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.

The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.

These two threads—smarter people and smarter machines—will inevitably intersect. Just as machines will be much smarter in 2050, we can expect that the humans who design, build, and program them will also be smarter.•


From the February 14, 1933 Brooklyn Daily Eagle:


Parallel to Alex Gibney’s Steve Jobs doc is Danny Boyle’s fictional take on the subject, the mercifully Kutcher-less exploration of the motivations of the Apple founder and his peers in Silicon Valley. In a Guardian piece by Catherine Shoard, the director discusses his movie at Telluride in somewhat though not overwhelmingly hyperbolic terms. The opening:

The director Danny Boyle has called for more films to be made about the creators of influential new technology. Speaking at the Telluride film festival, where his Aaron Sorkin-scripted biopic of Apple co-founder Steve Jobs is winning largely rave reviews, Boyle said that those in the movie industry had a responsibility to examine the import of people such as Jobs and Mark Zuckerberg, the Facebook creator who was the subject of Sorkin’s 2010 hit, The Social Network.

“These films have to be made,” he said. “Benign as they may seem, they have created forces that are more powerful than governments and banks. And they don’t seem motivated by money. I find that extraordinary. It’s a paradigm shift we seem blissfully unaware of. They’re not interested in money but in data. Our data.”

The film is largely an interiors piece, unfolding in real time in the 40 minutes before three key Apple product launches: the Mac in 1984, the NeXT box in 1988, once Jobs has split from Apple, and the iMac in 1998, when he’s back in business with the company. Yet despite the offer of tax breaks from countries such as England and Hungary, Boyle was insistent that the film not be shot far from Silicon Valley. “San Francisco is the Bethlehem of the second industrial revolution,” he said. “It’s where the extraordinary forces emerged that now rule our lives.”•


The endless fetishization of food is mind-numbing, but Alice Driver’s story at Vice “Munchies” takes a smart, offbeat approach to the topic, wondering about the future of nutrition and how we’re going to feed a growing population without further imperiling the environment. She does so while in Mexico, trailing outré chef Andrew Zimmern, who fears he will be viewed as the “fat white guy [who] goes around world eats fermented dolphin anus, comes home.” Zimmern thinks Soylent will eventually be the meal of the poor–or maybe something else we can’t even yet visualize. I would think lab-grown food will play a significant role.

An excerpt:

I was skeptical of the argument that Soylent was simply a McDonald’s alternative, but I found Zimmern’s second point—that Soylent would be the food of the future for the poor—more compelling. He explained, “I’m at this strange intersection where I’m talking to all these different people about it. You can’t tell me when you’re turning crickets into cricket flour to put in a protein bar and masking it with ground up cranberries and nuts—you can’t tell me that that’s eating crickets or grasshoppers. It’s not. You’re eating a ground-up natural protein source. I would think that solving hunger problems in poverty-stricken areas, it’s probably better to give people a healthy nutri-shake or something once a day. What drives a lot of investigation of alternative foods is hunger and poverty. Ten years ago I told everybody, ‘Yes, it’s going to be bugs. It’s going to be crickets.’ Today, I think it’s going to be something else that we just don’t know yet because you’re talking about 50 years from now or 20 or ten years from now—who knows what we’re going to have invented by then?”

I tried to imagine the world’s poor subsisting off of Soylent, but I couldn’t help feel that there was something perverse about that solution to world hunger.

Meanwhile, in Oaxaca, Zimmern focused on learning about traditional pre-Hispanic cuisine, in which insects played a prominent role.•


 

10 search-engine keyphrases bringing traffic to Afflictor this week:

  1. brian eno and david graeber conversation
  2. which nations are the best at robotics?
  3. who will dominate AI?
  4. uploading consciousness into a computer
  5. mazie gordon-phillips phillips bowery character
  6. u.s. government mind control experiments
  7. adults impersonating high school students
  8. the growth of solar energy in the next 20 years
  9. humans having digital twins doppelgangers
  10. elon musk explaining mars ambitions
This week, the new union between Presidents Obama and Rouhani, so close to fruition, was in danger of being denied at the last moment.

 

  • Steven Levy has a great discussion with Jerry Kaplan about next-level AI.
  • Citi’s “Disruptive Innovations” report suggests basic income.
  • Some think fears of near-term technological unemployment are overstated.

Sometimes a truth is hidden for so long that the reveal becomes anticlimax. For many years, Americans would have given anything to know the identity of Watergate’s Deep Throat, but how many could today readily name him as W. Mark Felt, who seemed to mysteriously disappear back into the shadows immediately after acknowledging in 2005 his key role in the Woodward-Bernstein reportage?

Similarly more interesting in his nebulous state was “D.B. Cooper,” who in 1971 hijacked a plane, collected a ransom and parachuted into parts unknown. Excited 2011 headlines named Lynn Doyle Cooper as most likely being the daring criminal, but by then it was almost beside the point; the man had become myth.

This original 1971 Walter Cronkite report about the D.B. Cooper hijacking, heist and escape contains interviews with many members of the shaken flight crew.

Before Mailer and Breslin tried to relocate to New York City’s Gracie Mansion, William F. Buckley made his own quixotic run for the mayor’s office for the Conservative Party. In these 1965 clips on NBC’s Meet the Press, Buckley discusses his candidacy, which, as the New York Times wrote in 2008, “drew much of its support from aggrieved white ethnic voters who were angry over crime, urban unrest and liberal policies on poverty and welfare.”

Gossip really bothers me on a visceral level, but I have to acknowledge its utility. Before news organs with something to lose will touch a story, whispers carry the day. Most of it’s petty and unnecessary, but occasionally it can be an insurgency. Sometimes gossip, the original viral information, is the fastest route to justice.

In 1973, gossipmonger Rona Barrett and Sigmund Freud’s polymath grandson, Sir Clement Freud, got into a dust-up on a program Jack Paar hosted years after he abandoned the Tonight Show.

When Newt Gingrich and Karl Rove and their ilk appealed to white Americans based on “family values” and other such clean-sounding exclusionary terms, they cultivated a voting bloc on issues they didn’t sincerely care about but found useful. The GOP power brokers were merely playing with the dreams of the blindly faithful to help themselves consolidate power. 

But the dreams were not dashed, just deferred. The Trump candidacy, with its copious anger and name-calling, dispenses with the coded language of bigotry for the real deal, and it’s aimed as much at the Republicans who let these folks down as at the Democrats who they believe have upended their prosperity. The GOP’s bedrock, from the Bush family to Fox News, is at long last meeting the piledriver of its own design. The Tea Party was the first wave of the ungodly energy unloosed. Trump is the next phase, the anarchic spirit visited upon the most important national political campaign. The controls have been commandeered, the mutiny complete.

From the Economist:

On one domestic issue, to be fair, he has staked out a clear, bold position. Alas, it is an odious one. He wants to build a wall on the Mexican border and somehow make Mexico pay for it. He would deport all 11m immigrants currently thought to be in America illegally. Apart from the misery this would cause, it would also cost $285 billion, by one estimate—roughly $900 in new taxes for every man, woman and child left in Mr Trump’s America. This is necessary, he argues, because Mexican illegal immigrants are “bringing drugs. They’re bringing crime. They’re rapists.” Not only would he round them all up; he would also round up and expel their children who were born on American soil and are therefore American citizens. That this would be illegal does not bother him.

His approach to foreign affairs is equally crude. He would crush Islamic State and send American troops to “take the oil”. He would “Make America great again”, both militarily and economically, by being a better negotiator than all the “dummies” who represent the country today. Leave aside, for a moment, the vanity of a man who thinks that geopolitics is no harder than selling property. Ignore his constant reminders that he wrote The Art of the Deal, which he falsely claims is “the number-one-selling business book of all time.” Instead, pay attention to the paranoia of his worldview. “[E]very single country that does business with us” is ripping America off, he says. “The money [China] took out of the United States is the greatest theft in the history of our country.” He is referring to the fact that Americans sometimes buy Chinese products. He blames currency manipulation by Beijing, and would slap tariffs on many imported goods. He would also, in some unspecified way, rethink how America protects allies such as South Korea and Japan, because “if we step back they will protect themselves very well. Remember when Japan used to beat China routinely in wars?”

Towering populism

Mr Trump’s secret sauce has two spices. First, he has a genius for self-promotion, unmoored from reality (“I play to people’s fantasies. I call it truthful hyperbole,” he once said). Second, he says things that no politician would, so people think he is not a politician. Sticklers for politeness might object when he calls someone a “fat pig” or suggests that a challenging female interviewer has “blood coming out of her wherever”. His supporters, however, think his boorishness is a sign of authenticity—of a leader who can channel the rage of those who feel betrayed by the elite or left behind by social change. It turns out that there are tens of millions of such people in America.•


There is a fascinating premise underpinning Steven Levy’s Backchannel interview with Jerry Kaplan, provocatively titled “Can You Rape a Robot?”: AI won’t need to become conscious for us to treat it as such, for the new machines to require a very evolved sense of morality. Kaplan, the author of Humans Need Not Apply, believes that autonomous machines will be granted agency if they can merely mirror our behaviors. Simulacra at an advanced level will be enough. The author thinks AI can vastly improve the world, but only if we’re careful to make morality part of the programming.

An exchange:

Steven Levy:

Well by the end of your book, you’re pretty much saying we will have robot overlords — call them “mechanical minders.”

Jerry Kaplan:

It is plausible that certain things can [happen]… the consequences are very real. Allowing robots to own assets has severe consequences and I stand by that and I will back it up. Do I have the thing about your daughter marrying a robot in there?

Steven Levy:

No.

Jerry Kaplan:

That’s a different book. [Kaplan has a sequel ready.] I’m out in the far future here, but it’s plausible that people will have a different attitude about these things because it’s very difficult to not have an emotional reaction to these things. As they become more a part of our lives people may very well start to inappropriately imbue them with certain points of view.•


It’s not certain that this time will be different, that automation will lead to technological unemployment on a large scale, but all the ingredients are in place. Such a shift would make us richer in the aggregate, but how do we extend the new wealth beyond the owners of capital? 3-D printers may become ubiquitous and make manufacturing much less expensive, leading to lower prices and abundance. But the basic needs of food, shelter, etc. will still have to be met for those no longer employable in the new arrangement.

Sure, it’s possible that as-yet-unimagined fields will bloom in which humans won’t be “redundancies,” but what if they don’t, or if there aren’t enough of them? What then?

In an FT Alphaville post, Izabella Kaminska writes of Citi’s latest “Disruptive Innovations” report, which suggests, among other remedies, a universal basic income. An excerpt:

Could this time be different, in that where previous manifestations of “robot angst” created new and usually better jobs and sectors to replace those lost, this time there is no automatism for better job creation once existing jobs become redundant?

If that’s the case, Citi says there may indeed be some feedback between weak aggregate demand and growing polarisation of productivity across workers and firms. And this inevitably leads to larger inequalities in income and wealth.

So what’s to be done?

According to Citi a list of potentially desirable policy measures includes:

a) improve and adapt education and training to better align workers’ skills with the demands of firms and technologies,
b) reduce barriers to reallocating resources, including by reducing barriers to labour mobility and simplifying bankruptcy procedures,
c) increase openness to trade and FDI to facilitate knowledge transfers,
d) increase support for entrepreneurship,
e) improve access to credit for restructuring and retraining, and
f) use the tax-transfer mechanism (e.g. through a guaranteed minimum income for all, or an ambitious negative income tax, public funding of health care and long-term care etc.) to support those left behind by technological advances.

Note with particular attention that last policy recommendation: a basic income for one and all to help society adjust to the new hyper technological environment, in a way that encourages competition and productivity in laggard firms, and dilutes the power of the winner-takes-all corporates.•


File this February 19, 1928 Brooklyn Daily Eagle article about Palestine under “Bad Predictions.” In addition to bemoaning that Palestinian profiteers were turning the land into a dusty tourist trap and a squalid one at that, it also openly scoffed at the notion that Jewish settlers, still a target of casual anti-Semitism, could ever be a power in the region. The new settlement of Hapharalm, or Israel, was singled out as particularly “laughable.”

More than anything, Steve Jobs was a salesman, maybe the greatest one ever, with a taste for auto-hagiography. Sure, that’s not the total picture. While he had absolutely nothing to do with the creation of the Apple I and Apple II, he did ultimately (twice) become the company’s Nudge-in-Chief, hectoring his teams to perfection the way Ahab urged his crew toward the great white whale.

I can’t wait to see Alex Gibney’s new doc, Steve Jobs: The Man in the Machine, which wonders why the late Apple founder was mourned deeply in office parks as well as Zuccotti Park. In an L.A. Weekly piece, Amy Nicholson sees Gibney’s latest as almost a sequel to his last work, Going Clear, the Cult of Mac being analogous in some ways to Scientology. An excerpt: 

Both Scientology and Apple were founded by now-dead gurus who commanded devotion. Both are corporations that claim to stand for something purer than greed. Neither pays fair taxes. And neither functions openly, speaks freely or tolerates critics.

Where the two films differ is us. Dismantle Scientology, and audiences will cheer. Chink away at the cult of Apple, and we all feel accused. I imagine that people will slink out of Steve Jobs keeping their iPhones guiltily stashed. When they make it a safe distance from the theater, they’ll glide their smartphones in front of their faces, swipe the black monoliths awake and disappear into the dream machines of their own desires: where they want to visit, what they want to hear and who they want to reach. As MIT professor Sherry Turkle describes it, the iPhone that was meant to connect the globe instead made us “alone together.” In the future, will historians wondering how society fractured look to Jobs’ Apple as the original sin?

We love our smartphones. In the eight years since the iPhone 1, they’ve become necessities — almost a human right. Though they’re made of circuits and wires, our attachment to these external brains is personal. They keep us company, and in turn we fondle them, sleep with them, flip out when they break. Which is why we have this documentary about their creator and not docs about the inventors of the subway, the shower, the fridge. Gibney’s film asks “Why did Jobs’ death make us mourn?”•



Not all knowledge can be reduced to pure information–not yet, anyway.

Machines may eventually rise to knowledge, or perhaps humans will be reduced to mere information. The first outcome poses challenges; the second is the triumph of a new sort of fascism.

In a NYRB piece that argues specifically against MOOCs and more broadly against humans being replaced by machines or encouraged to be more machine-like, David Bromwich is convinced that virtual education is a scary step toward the mechanization of people. 

I’m not so dour about MOOCs, especially since not everyone has the privilege of a high-quality classroom situation. Their offerings seem to me an extension of the mission of public libraries: make the tools of knowledge available to everyone. The presence of both online education and physical colleges simultaneously is the best-case scenario; having one without the other would be far worse. Bromwich’s fear, a realistic one, is that traditional higher education will be seriously disrupted by the new order.

From Bromwich:

American society is still on the near side of robotification. People who can’t conjure up the relevant sympathy in the presence of other people are still felt to need various kinds of remedial help: they are autistic or sociopathic, it may be said—those are two of a range of clinical terms. Less clinically we may say that such people lack a certain affective range. However efficiently they perform their tasks, we don’t yet think well of those who in their everyday lives maximize efficiency and minimize considerate, responsive, and unrehearsed interaction, whether they neglect such things from physiological incapacity or a prudential fear of squandering their energy on emotions that are not formally necessary.

This prejudice continues to be widely shared. But the consensus is visibly weaker than it was a decade ago. As people are replaced by machines—in Britain, they call such people “redundant”—the survivors who remain in prosperous employment are being asked to become more machinelike. This fits with the idea that all the valuable human skills and varieties of knowledge are things that can be assimilated in a machinelike way. We can know the quantity of information involved, and we can program it to be poured into the receiving persons as a kind of “input” that eventually yields the desired “product.” Even in this short summary, however, I have introduced an assumption that you may want to stand back and question. Is it really the case that all knowledge is a form of information? Are there some kinds of learning or mental activity that are not connected with, or properly describable as, knowledge?•

 


Not all fast-casual dining will likely be automated, nor will restaurants with human staff soon be an overwhelming minority. The near future will not resemble the way a few shoes are still made by hand while almost all of them are manufactured by machines. I don’t think that happens so quickly or absolutely.

But not all (or almost all) of these jobs have to disappear for the sector’s workers to be devastated. In most places, anything out of sight in the kitchen that can be robotized will be, and some visible positions will as well. Of course, some restaurants and hotels and other corners of the hospitality industry will go all in and completely disappear the human element.

I’m not suggesting we dash robot heads with rocks, but we probably need to have some political solutions at hand, should, say, popular dining and the trucking and taxi industries no longer be there to employ tens of millions of Americans. A Plan B would be handy then.

One of the trailblazers in disappearing visible workers is the new digital automat known as Eatsa, the San Francisco cafe I blogged about a couple of days ago. In a smart Atlantic piece, Megan Garber looks at the underlying meaning of this nouveau restaurant beyond its threat of technological unemployment, how it’s selling not just meals but social withdrawal. An excerpt:

The core premise here, though, is that at Eatsa, you will interact with no human save the one(s) you are intentionally dining with. The efficiencies are maximized; the serendipities are minimized. You are, as it were, bowl-ing alone.

That in itself, is noteworthy, no matter how Eatsa does as a business—another branch is slated to open in Los Angeles later this year. If fast food’s core value was speed, and fast casual’s core value was speed-plus-freshness, Eatsa’s is speed-plus-freshness-plus-a lack of human interaction. It’s attempting an automat-renaissance during the age of Amazon and Uber, during a time when the efficiency of solitude has come to be seen, to a large extent, as the ultimate luxury good. Which is to say that it has a very good chance of success.•


Industrial robots are built to be great (perfect, hopefully) at limited, repetitive tasks. But in Deep Learning experiments, the machines aren’t programmed for chores but rather taught to master them from experience. Since every situation in life can’t be anticipated and pre-coded, truly versatile AI needs to autonomously conquer obstacles as they arise. In these trials, the journey has as much meaning as the destination–more, really.

Of course, not everyone would agree that humans are operating from such a blank slate, that we don’t already have some template for many behaviors woven into our neurons–a collective unconscious of some sort. Even if that’s so, I’d think there’ll soon be a way for robots to transfer such knowledge across generations.

One current Deep Learning project: Berkeley’s Brett robot, designed to be like a small child, though a growing boy. The name stands for “Berkeley Robot for the Elimination of Tedious Tasks,” and you might be tempted to ask how many of them it would take to screw in a light bulb, but it’s already far beyond the joke stage. As usual with this tricky field, it may take longer than we’d like for the emergence of such highly functional machines, but perhaps not as long as we’d expect.

Jack Clark of Bloomberg visited the motherless “child” at Berkeley and writes of it and some of the other current bright, young things. An excerpt from his report:

What makes Brett’s brain tick is a combination of two technologies that have each become fundamental to the AI field: deep learning and reinforcement learning. Deep learning helps the robot perceive the world and its mechanical limbs using a technology called a neural network. Reinforcement learning trains the robot to improve its approach to tasks through repeated attempts. Both techniques have been used for many years; the former powers Google and other companies’ image and speech recognition systems, and the latter is used in many factory robots. While combinations of the two have been tried in software before, the two areas have never been fused so tightly into a single robot, according to AI researchers familiar with the Berkeley project. “That’s been the holy grail of robotics,” says Carlos Guestrin, the chief executive officer at AI startup Dato and a professor of machine learning at the University of Washington.

After years of AI and robotics research, Berkeley aims to devise a system with the intelligence and flexibility of Rosie from The Jetsons. The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems—and a roomful of toys—to a robot. Since then, the team has published a series of papers that outline a software approach to let any robot learn new tasks faster than traditional industrial machines while being able to develop the sorts of broad knowhow for solving problems that we associate with people. These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.

 



John Lanchester, who wrote one of my favorite articles of the year with “The Robots Are Coming” in the London Review of Books, returns to that same publication to think about more tinkerers and their machines, namely the Wright brothers and Elon Musk.

The occasion is a dual review of David McCullough’s new work about the former and Ashlee Vance’s about the latter. As the piece notes, the pioneering Wrights were ignored, disbelieved and mocked during their first couple of successful flights, the press too skeptical to accept what was as clear as the sky if only they would open their eyes.

Puzzlingly, Lanchester writes as though SpaceX founder Elon Musk were less than a household name, which is curious since the Iron Man avatar is one of the most famous people on Earth, receiving the type of wide acclaim before coming anywhere close to Mars that was denied the Wrights even after they successfully took flight at Kitty Hawk. Just strange.

Otherwise it’s a very well-written piece, and one that astutely points out that tinkerers today who want to do more than merely create apps often need a planeload of cash, something the Wrights didn’t require. Perhaps 3-D printers will change that?

A passage in which Lanchester compares the siblings to their spiritual descendant:

When David McCullough’s book came out, it went straight to the top of the US bestseller list, taking up a position right next to Ashlee Vance’s biography of Elon Musk. At which point you may well be asking, who he? The answer is that Musk is the South African-born entrepreneur who runs three of the most interesting companies in America, in the fields of clean energy and interplanetary exploration: SolarCity (solar batteries), Tesla (electric cars), and SpaceX (commercial spaceflight). It’s the third of these companies which is the maddest and most entertaining. Where most corporate mission statements are so numbing they’d be useful as a form of medical anaesthesia, SpaceX’s is ‘creating the technology needed to establish life on Mars’. ‘I would like to die thinking that humanity has a bright future,’ Musk explained to Vance. ‘“If we can solve sustainable energy and be well on our way to becoming a multiplanetary species with a self-sustaining civilisation on another planet – to cope with a worst-case scenario happening and extinguishing human consciousness – then,” and here he paused for a moment, “I think that would be really good.”’

There are a number of suggestive parallels between Musk and the Wrights, beyond the obvious ones to do with an interest in flight. The bishop had very high standards and set no limits on the intellectual curiosity he encouraged in his children; Musk’s father had the same standards and the same insistence on no limits, but was (is) a tortured and difficult presence, ‘good at making life miserable’, in Musk’s words: ‘He can take any situation no matter how good it is and make it bad.’ The Wrights were poorish, the Musks affluentish, but both grew up with an emphasis on learning things first-hand. ‘It is remarkable how many different things you can get to explode,’ Musk says about his childhood experiments. ‘I’m lucky I have all my fingers.’ One very odd thing is a parallel to do with bullies: Musk was set on and beaten half to death by a gang of thugs at his school in Johannesburg; Wilbur Wright was attacked so badly at the age of 18 – beaten with a hockey stick – that he took years to recover from his injuries and missed a college education as a result. His assailant, Oliver Crook Haugh, went on to become a notorious serial killer. Something about these very bright young men set off the bullies’ hatred for difference.

The Wrights took calculated risks. Musk does the same.•


From the June 2, 1854 Brooklyn Daily Eagle:

 

 



If Donald Trump grew a small, square mustache above his lip, would his poll numbers increase yet again? For a candidate running almost purely on attention, can any shock really be deleterious?

Howard Dean was the first Internet candidate and Barack Obama the initial one to ride those new rules to success. But things have already markedly changed: That was a time of bulky machines on your lap, and the new political reality rests lightly in your pocket. A smartphone’s messages are brief and light on details, and its buzzing is more important than anything it delivers.

The diffusion of media was supposed to make it impossible for a likable incompetent like George W. Bush to rise. How could such a person survive the scrutiny of millions of “citizen journalists” like us? If anything, it’s made it easier, even for someone who’s unlikable and incompetent. For a celeb with a Reality TV willingness to be ALL CAPS all the time, facts get lost in the noise, at least for a while.

That doesn’t mean Donald Trump, an adult baby with an attention span that falls somewhere far south of 15 months, will be our next President, but it does indicate that someone ridiculously unqualified and hugely bigoted gets to be on the national stage and inform our political discourse. The same way Jenny McCarthy used her platform to play doctor and spearhead the anti-vaccination movement, Trump gets to be a make-believe Commander-in-Chief for a time.

Unsurprisingly, Nicholas Carr has written the best piece on the dubious democracy the new tools have delivered, a Politico Magazine article that analyzes election season in a time that favors a provocative troll, a “snapchat personality,” as he terms it. The opening:

Our political discourse is shrinking to fit our smartphone screens. The latest evidence came on Monday night, when Barack Obama turned himself into the country’s Instagrammer-in-Chief. While en route to Alaska to promote his climate agenda, the president took a photograph of a mountain range from a window on Air Force One and posted the shot on the popular picture-sharing network. “Hey everyone, it’s Barack,” the caption read. “I’ll be spending the next few days touring this beautiful state and meeting with Alaskans about what’s going on in their lives. Looking forward to sharing it with you.” The photo quickly racked up thousands of likes.

Ever since the so-called Facebook election of 2008, Obama has been a pacesetter in using social media to connect with the public. But he has nothing on this year’s field of candidates. Ted Cruz live-streams his appearances on Periscope. Marco Rubio broadcasts “Snapchat Stories” at stops along the trail. Hillary Clinton and Jeb Bush spar over student debt on Twitter. Rand Paul and Lindsey Graham produce goofy YouTube videos. Even grumpy old Bernie Sanders has attracted nearly two million likers on Facebook, leading the New York Times to dub him “a king of social media.”

And then there’s Donald Trump. If Sanders is a king, Trump is a god. A natural-born troll, adept at issuing inflammatory bulletins at opportune moments, he’s the first candidate optimized for the Google News algorithm.•


There’s good news about life on Earth after climate change, but first the bad news: Death, massive amounts of death.

As Lizzie Wade states in her smart Wired article, we’ll likely be around to see the disaster we’ve created, but we don’t have a great shot at waiting out the recovery. That will take eons. The positive side doesn’t involve us, but rather the creatures that may thrive and replenish the landscape after we’re gone. But first they’ll have to survive us. Godspeed to them. An excerpt:

The flip side of mass extinction, however, is rapid evolution. And if you’re willing to take the long view—like, the million-year long view—there’s a ray of hope to be found in today’s rare species. The Amazon, in particular, is packed with plant species that pop up few and far between and don’t even come close to playing a dominant role in the forest. But they might have treasure buried in their genes.

Rare species—especially those that are only distantly related to today’s common ones—“have all kind of traits that we don’t even know about,” says [evolutionary geneticist Christopher] Dick. Perhaps one will prove to thrive in drought, and another will effortlessly resist new pests that decimate other trees. “These are the species that have all the possibilities for becoming the next sets of dominant, important species after the climate has changed,” Dick says.

That’s why humans can’t cut them all down first, he argues. If rainforests are going to have a fighting chance of recovering their biodiversity and ecological complexity, those rare species and their priceless genes need to be ready and able to step into the spotlight. It might be too late to save the world humanity knows and loves. But it can still do its best to make sure the new one is just as good—someday.•


A digitized Automat with no visible workers roughly describes Eatsa, a San Francisco fast-casual eatery for tomorrow that exists today. Tamara Palmer of Vice visited the restaurant and found it “much more reminiscent of an Apple store than a fast food franchise.” Its design may be too cool to work everywhere in America, but I bet some variation of it will. Sooner or later, labor in the sector will be noticeably dinged by technological unemployment. The opening:

People often muse on a future controlled by machines, but that is already well in motion here in the Bay Area, where hotels are employing robot butlers, Google and Tesla are putting driverless vehicles on the road, and apps that live every aspect of your life for you continue to proliferate. The rush to put an end to human contact is at a fever pitch around these parts, where a monied tech elite has the deep pockets to support increasingly absurd services.

Right on trend, this week marks the debut of Eatsa, a quick-service quinoa bowl “unit” (as one owner called it) billing itself as San Francisco’s premier “automated cafe.”

I attended a media preview lunch at Eatsa last week to test out the concept before the doors officially opened. Pushing a button to summon an Uber ride to my door, I wondered how good automated food might be.

I realized it doesn’t really matter, because as California inches towards a $15 per hour minimum wage, that’s the direction we’re headed in, starting with a people-free fast food world.•


I’m mixing my 20th-century sci-fi authors, but like Billy Pilgrim naked in a Tralfamadore zoo, we may be kept as pets by intelligent machines. That’s what the Philip K. Dick android, which can learn new words in real time, promises its NOVA interlocutors.

Or perhaps they’ll eliminate us. Or maybe by the time they exist, we will be very different. We might become those conscious machines we so fear. We might be them. Nobody knows.

My first Virtual Reality experience was during the 1990s while working in a non-profit media place that had a clunky VR helmet for visitors to experience. One guest was rock icon Lou Reed, who sat in a chair and pulled the device over his head. He paused a moment, and then said to the woman who was assisting, “What happens now? Does someone pull on my cock?”

Perhaps because it didn’t come with free tug jobs or maybe because the technology was still lacking, Virtual Reality was a bomb two decades ago. Those who’ve tested the latest models are awed by what years of development and greater computing power have wrought. The tool certainly could be a tremendous boon to education, but you could say the same of gaming, and that’s never been leveraged correctly.

The opening of “Grand Illusions,” an Economist report:

YOUR correspondent stands, in a pleasingly impossible way, in orbit. The Earth is spread out beneath. A turn of the head reveals the blackness of deep space behind and above. In front is a table full of toys and brightly coloured building blocks, all of which are resolutely refusing to float away—for, despite his being in orbit, gravity’s pull does not seem to have vanished. A step towards the table brings that piece of furniture closer. A disembodied head appears, and a pair of hands offers a toy ray-gun. “Go on, shoot me with it,” says the head, encouragingly. Squeezing the trigger produces a flash of light, and the head is suddenly a fraction of its former size, speaking in a comic Mickey-Mouse voice (despite the lack of air in low-Earth orbit) as the planet rotates majestically below.

It is, of course, an illusion, generated by a virtual-reality (VR) company called Oculus. The non-virtual reality is a journalist wearing a goofy-looking headset and clutching a pair of controllers in a black, soundproofed room at a video-gaming trade fair in Germany. But from the inside, it is strikingly convincing. The virtual world surrounds the user. A turn of the head shifts the view exactly as it should. Move the controllers and, in the simulation, a pair of virtual arms and hands moves with them. The disembodied head belongs to an Oculus employee in another room, who is sharing the same computer-generated environment. The blocks on the table obey the laws of physics, and can be stacked up and knocked down just like their real-world counterparts. The effect, in the words of one VR enthusiast, is “like sticking your head into a wormhole that leads to some entirely different place.”

Matrix algebra

The idea of virtual reality—of building a convincing computer-generated world to replace the boring old real one—has fuelled science fiction’s novels and movies since the 1950s. In the 1990s, as computers became commonplace, several big firms tried to build headsets as a first attempt to realise the idea. They failed. The feeble computers of the time could not produce a convincing experience. Users suffered from nausea and headaches, and the kit was expensive and bulky. Although VR found applications in a few bits of engineering and science, the consumer version was little more than a passing fad in the world’s video-game arcades. But now a string of companies are betting that information technology, both hardware and software, has advanced enough to have another go. They are convinced that their new, improved virtual reality will shake up everything from video-gaming to social media, and from films to education.•


Vladimir Bekhterev had a great brain, but he lacked diplomacy.

Joseph Stalin probably was a “paranoiac with a short, dry hand,” but when the Russian neurologist reportedly spoke that diagnosis after examining the Soviet leader, the doctor died mysteriously within days. Many thought he’d been poisoned to avenge the slight. Or maybe it was just a coincidence. A cloud of paranoia envelops all under an autocratic regime, whether we’re talking about Stalin in the 20th century or Vladimir Putin today: Some deaths are very suspect, so all of them become that way. At any rate, the scientist’s gray matter became an exhibit in his own collection of genius brains. An article in the December 27, 1927 Brooklyn Daily Eagle recorded the unusual series of events.


Terrible products that fail miserably delight us not only because of the time-tested humor of a spectacular pratfall, but because it’s satisfying to feel now and then that we’re not just a pack of Pavlovian dogs prepared to lap up whatever is fed us, especially if it’s a Colgate Ready Meal and a Crystal Pepsi.

In a really smart Financial Times column, Tim Harford takes a counterintuitive look at how companies can avoid launching surefire duds. The usual method has been to find out which products representative consumers want, but he writes of an alternative strategy: Discover what consumers with horrible taste embrace and then bury those products deep in a New Mexico desert alongside Atari’s E.T. video games. Of course, it does say something that companies can’t just identify what’s awful. Why do almost all businesses become echo chambers?

An excerpt:

If savvy influential consumers can help predict a product’s success, might it not be that there are consumers whose clammy embrace spells death for a product? It’s a counter-intuitive idea at first but, on further reflection, there’s a touch of genius about it.

Let’s say that some chap — let’s call him “Herb Inger” — simply adored Clairol’s Touch of Yogurt shampoo. He couldn’t get enough of Frito-Lay’s lemonade (nothing says “thirst-quenching” like salty potato chips, after all). He snapped up Bic’s range of disposable underpants. Knowing this, you get hold of Herb and you let him try out your new product, a zesty Cayenne Pepper eyewash. He loves it. Now you know all you need to know. The product is doomed, and you can quietly kill it while it is still small enough to drown in your bathtub.

A cute idea in theory — does it work in practice? Apparently so. Management professors Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker have studied people, such as Herb, whom they call “Harbingers of Failure.” (Their paper by that name is forthcoming in the Journal of Marketing Research.) They used a data set from a chain of more than 100 convenience stores. The data covered more than 100,000 customers with loyalty cards, more than 10 million transactions and nearly 10,000 new products. Forty per cent of those products were no longer stocked after three years, and were defined as “flops.”•


In a newly revised edition of Federico Fellini’s 1980 book, Making a Film, there’s a fresh translation of “A Spectator’s Autobiography,” the wonderful essay by Italo Calvino that begins the volume. It’s been adapted for publication by the NYRB.

In the piece, Calvino notes that the unpunctual habits of Italian moviegoers in the 1930s portended the ultimate widespread fracturing of the traditional narrative structure, an artifice intended to satisfy, if fleetingly, our deep craving for order, to deliver us a simple solution to the complex puzzle of life and its jagged pieces. 

An excerpt:

Italian spectators barbarously made entering after the film already started a widespread habit, and it still applies today. We can say that back then we already anticipated the most sophisticated of modern narrative techniques, interrupting the temporal thread of the story and transforming it into a puzzle to put back together piece by piece or to accept in the form of a fragmentary body. To console us further, I’ll say that attending the beginning of the film after knowing the ending provided additional satisfaction: discovering not the unraveling of mysteries and dramas, but their genesis; and a vague sense of foresight with respect to the characters. Vague: just like soothsayers’ visions must be, because the reconstruction of the broken plot wasn’t always easy, especially if it was a detective movie, where identifying the murderer first and the crime afterward left an even darker area of mystery in between. What’s more, sometimes a part was still missing between the beginning and the end, because suddenly while checking my watch I’d realize I was running late; if I wanted to avoid my family’s wrath I had to leave before the scene that was playing when I entered came back on. Therefore lots of films ended up with holes in the middle, and still today, more than thirty years later—what am I saying?—almost forty, when I happen to see one of those films from back then—on television, for example—I recognize the moment in which I entered the theater, the scenes that I’d watched without understanding them, and I recover the lost pieces, I put the puzzle back together as if I’d left it incomplete the day before.•

  • See also:

Fellini feuds with Oriana Fallaci. (1963)

