Donald Trump, the Kim Jong-un of casino bankruptcy, had largely played footsie with Ted Cruz even while excoriating all other GOP hopefuls. That changed recently, however, when the two most detestable of all the candidates began throwing haymakers at one another. Amusing to hear Trump play the likability card with Cruz, since the former Apprentice host has tried to connect not only with members of the 4-H club but also with those in the 3-K club.
The most telling line of the entire GOP race was spoken before Trump and Cruz were torn asunder, when they still resisted attacking one another. The aspiring potentate said this of his Texan-Canadian competitor: “Everything I say, he agrees with me. No matter what I say.” It was a tacit acknowledgement by Trump that he knew he was spouting crazy shit because he thought it would resonate with nativist voters he needed to float his cockamamie campaign.
Largely it has, at least with a sizable-enough swath of a frayed and divided party. It’s been the Reality TV version of the political season, full of disgraceful taunts and pathetic posturing, and the question is if the show gets cancelled in Iowa and New Hampshire or if the season is sadly extended.
Returning to America after a trip, I encountered a chatty immigration officer. “You guys should have finished off the crusades when you had the chance,” he said as he handed back my British passport. In themselves such encounters mean little. But I have had many similar ones recently — and the plural of anecdote is data, as they say.
Within the next three weeks, we shall find out if the rise of Donald Trump is silly season froth that comes before voting, or whether we are in the midst of a dramatic upheaval in US politics. My head is agnostic. But my gut tells me things are changing for the worse. Either way, the time for speculation is nearly over.
The rest of the world is almost as obsessed with America’s political health as the US itself. Every time I have been abroad in the past few months, people ask the same question: “Could Donald Trump be president?” The answer is probably not. But it comes with a strict health warning. Most seasoned observers have been wrong about US politics in the past year and show few signs of lifting their batting average.
In spite of that, the consensus holds that Mr Trump will not be the Republican nominee. Should he become so, he would lose the presidential election. If, by some miracle, he won it, he would make a disastrous president. The next question is: “What is fuelling Mr Trump’s popularity?” (And for better informed foreigners, that of Ted Cruz too.) This is the issue that matters most.•
Has there ever been an era when enthusiasts have gotten so far ahead of themselves in terms of scientific and technological possibilities? I was reading an article the other day, clearly written by an intelligent person, who proclaimed that by 2050 we would see the “end of death,” that we will have left mere biological life behind. I’m not saying that such a transition is impossible, but it won’t be happening during our lifetimes. We are, ultimately, toast.
Likewise, I have no doubt we can eventually colonize space if we don’t do ourselves in first. We should certainly be sending human-less probes and 3D printers to Mars and elsewhere, but it’s probably a good idea to stay realistic about what we can accomplish in each era. In a rush to save ourselves, we may lose sight of the proper path.
The idea that humans will eventually travel to and inhabit other parts of our galaxy was well expressed by the early Russian rocket scientist Konstantin Tsiolkovsky, who wrote, “Earth is humanity’s cradle, but you’re not meant to stay in your cradle forever.” Since then the idea has been a staple of science fiction, and thus become part of a consensus image of humanity’s future. Going to the stars is often regarded as humanity’s destiny, even a measure of its success as a species. But in the century since this vision was proposed, things we have learned about the universe and ourselves combine to suggest that moving out into the galaxy may not be humanity’s destiny after all.
The problem that tends to underlie all the other problems with the idea is the sheer size of the universe, which was not known when people first imagined we would go to the stars. Tau Ceti, one of the closest stars to us at around 12 light-years away, is 100 billion times farther from Earth than our moon. A quantitative difference that large turns into a qualitative difference; we can’t simply send people over such immense distances in a spaceship, because a spaceship is too impoverished an environment to support humans for the time it would take, which is on the order of centuries. Instead of a spaceship, we would have to create some kind of space-traveling ark, big enough to support a community of humans and other plants and animals in a fully recycling ecological system.•
“Greed is good,” proclaimed fictional robber baron Gordon Gekko in 1987, echoing a speech from a year earlier by the very real Ivan Boesky, who by the time Wall Street opened had traded the Four Seasons for the Graybar Hotel, his desires having pried him from the penthouse. The point is well-taken, however, when applied correctly: Unhealthy desires can be useful. You don’t get people to risk life and limb–emigrating to the “New” World or participating in the dangerous Manifest Destiny–unless there’s a potential for a better life, and, often, a bigger bank account.
I’ve posted previously about my queasiness over recent U.S. regulation which unilaterally allows its corporations to lay claim to bodies in space, but perhaps the quest to go for the gold out there has a silver lining. While it’s gross for those already fabulously wealthy to be wondering who will use asteroid mining to become the first trillionaire, Grayson Cary considers in a smart Aeon essay that perhaps avarice is a necessary evil if we are to colonize space and safeguard our species against single-planet calamity. As the writer states, past multinational treaties could in theory inhibit unfettered speculation, but probably won’t. Private, public, U.S., China, etc.–it’s going to be a land rush that sorts itself out as we go, and go we will. As Cary writes, “There comes a point at which Earthbound opinions hardly matter.”
Over the 2015 Thanksgiving holiday – which, in the spirit of appropriation, seems appropriate – President Barack Obama signed into law the Spurring Private Aerospace Competitiveness and Entrepreneurship (SPACE) Act. It had emerged from House and Senate negotiations with surprisingly robust protections for US asteroid miners. In May, the House had gone only so far as to say that ‘[a]ny asteroid resources obtained in outer space are the property of the entity that obtained them’. In the Senate, commercial space legislation had moved forward without an answer to the question of property. In the strange crucible of the committee process, the bill ended up broader, bolder and more patriotic than either parent.
‘A United States citizen,’ Congress resolved, ‘engaged in commercial recovery of an asteroid resource or a space resource under this chapter shall be entitled to any asteroid resource or space resource obtained, including to possess, own, transport, use and sell the asteroid resource or space resource obtained.’ It’s a turning point, maybe a decisive one, in a remarkable debate over the administration of celestial bodies. It’s an approach with fierce critics – writing for Jacobin magazine in 2015, Nick Levine called it a vision for ‘trickle-down astronomics’ – and the stakes, if you squint, are awfully high. A small step for 535 lawmakers could amount to one giant leap for humankind.
If you hew to the right frame of mind, decisions about space policy have enormous consequences for the future of human welfare. Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, offered a stark version of that view in a paper called ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’ (2003). By one estimate, he wrote, ‘the potential for approximately 10^38 human lives is lost every century that colonisation of our local supercluster is delayed; or, equivalently, about 10^29 potential human lives per second’. Suppose you accept that perspective, or for any other reason feel an urgent need to get humanity exploring space. How might a species hurry things up?
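Bostrom’s per-second figure is just a unit conversion from his per-century estimate; a quick back-of-the-envelope sketch (using the paper’s rounded 10^38 figure) confirms the order of magnitude:

```python
# Convert Bostrom's "potential lives lost per century" into a per-second rate.
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600  # ~3.16e9 seconds

lives_lost_per_century = 1e38  # Bostrom's rounded estimate
lives_lost_per_second = lives_lost_per_century / SECONDS_PER_CENTURY

print(f"{lives_lost_per_second:.1e}")  # ~3.2e28, i.e. on the order of 10^29
```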
For a vocal chorus of pro-space, pro-market experts, the answer starts with property: to boldly go and buy and sell. ‘The only way to interest investors in building space settlements,’ writes the non-profit Space Settlement Institute on its website, ‘is to make doing so very profitable.’ In other words: show me the money.•
Confirmation bias is a dangerous thing, so the Harvard economist Roland Fryer likes to stick to data, which can, of course, lead to some inconvenient truths. How about being an African-American scholar in the time of Ferguson who’s convinced that police in the U.S. are no more likely to shoot a black person than a white one?
Fryer’s argument, which he relates to John McDermott of the Financial Times, is that the numbers say officers harass and manhandle African-Americans in a disproportionate way, but actual lethal violence is proportionate across racial groups. The more minor but incessant acts of persecution persuade black folks that they are also being shot far more often.
Well, I haven’t studied the numbers, but if this is true it should make us incredibly vigilant about the kind of racial profiling and serial intimidation that divides us. The so-called quality-of-life approach to policing has provided too much wiggle room for some to be targeted. Even Fryer himself acknowledges that he had guns pulled on him by police six or seven times during his youth.
At a quiet table in the cavernous Hawksmoor Seven Dials, a branch of the high-end restaurant chain in central London, where the decor is brown and the meat is red, Fryer tells me how he spent two days last year on the beat shadowing cops in Camden, New Jersey. (On his first day on patrol a woman overdosed in front of him and died.) What Fryer wanted to figure out was whether the killings of Michael Brown and Eric Garner — two African-Americans whose deaths led to widespread protests — were part of an observable pattern of discrimination, as activist groups such as Black Lives Matter have suggested. After his week on patrol, he collected more than 6m pieces of data from forces such as New York City’s on cases of blacks, whites and Latinos being victims of police violence.
The graph he passes between the salt and pepper displays his provisional findings. The horizontal axis is a scale of the severity of the violence, from shoving on the left all the way to shootings on the right. The curve starts high, suggesting strong differences in minor incidents, but descends to zero as the cases become more violent. In other words, once contextual factors were taken into account, blacks were no more likely to be shot by police. All of which raises the question: why the outcry in 2014 in Ferguson, Missouri, where Brown was shot?
“That’s the data,” Fryer says. “Now one hypothesis for why Ferguson happened — not the shooting but the outcry — was not because people were making statistical inference, not from whether Michael Brown was guilty or innocent but because they fucking hate the police.” He continues: “The reason they hate the police is because if you spent years having hands put on you and [being] pushed to the ground and handcuffed without proper cause, and then you hear about a [police] shooting in your town, how could you believe it was anything but discrimination?”•
In a Techcrunch piece, Vivek Wadhwa identifies 2016 as a technological inflection point, naming six fields which he believes will see significant progress, promising the next 12 months “will be the beginning of an even bigger revolution, one that will change the way we live, let us visit new worlds, and lead us into a jobless future.”
I’m not sure that, for most of the areas he mentions, this year will be any more important than 2015 or 2017. Consider the example of space exploration. Perhaps in 2016 private companies or governments will accomplish something more impressive than the Falcon 9 landing, or maybe not. Even if they do, it will be part of an incremental process rather than a radical breakthrough. Life on Mars will get nearer every year.
Wadhwa’s best bet, I think, is in the area of driverless cars, which will likely move much closer to fruition based on tests done this year. The writer is more measured with robotics, believing the industrial kind is on the cusp of major advances, but personal assistants still have a ways to go. An excerpt:
The 2015 DARPA Robotics Challenge required robots to navigate over an eight-task course simulating a disaster zone. It was almost comical to see them moving at the speed of molasses, freezing up, and falling over. Forget folding laundry and serving humans; these robots could hardly walk. As well, although we heard some three years ago that Foxconn would replace a million workers with robots in its Chinese factories, it never did so.
The breakthroughs may, however, be at hand. To begin with, a new generation of robots is being introduced by companies such as Switzerland’s ABB, Denmark’s Universal Robots, and Boston’s Rethink Robotics—robots dextrous enough to thread a needle and sensitive enough to work alongside humans. They can assemble circuits and pack boxes. We are at the cusp of the industrial-robot revolution.
Household robots are another matter. Household tasks may seem mundane, but they are incredibly difficult for machines to perform. Cleaning a room and folding laundry necessitate software algorithms that are more complex than those to land a man on the moon. But there have been many breakthroughs of late, largely driven by A.I., enabling robots to learn certain tasks by themselves and teach each other what they have learnt. And with the open source robotic operating system, ROS, thousands of developers worldwide are getting close to perfecting the algorithms.
Don’t be surprised when robots start showing up in supermarkets and malls—and in our homes. Remember Rosie, the robotic housekeeper from the TV series The Jetsons? I am expecting version 1 to begin shipping in the early 2020s.•
Nothing’s so useful in politics as boogeymen. Fixing an actual large-scale problem is hard, sometimes impossible, so attention is often diverted to a relatively minuscule one. There’s an added bonus: Frightened people are paralyzed, easy to manipulate.
During the second half of the 1960s, when the American social fabric began to fray in a cultural revolution that no one could contain, motorcycle gangs became useful stooges as symbols of barbarians at the gates. In 1966, when a shocking report of a California crime made the Hell’s Angels Public Enemy No. 1, Hunter S. Thompson elucidated the disproportionate attention the unholy rollers were receiving in an article in the Nation. Of course, the following year he fed the myth himself with a book about his travels–and travails–with the club. An excerpt:
The California climate is perfect for motorcycles, as well as surfboards, swimming pools and convertibles. Most of the cyclists are harmless weekend types, members of the American Motorcycle Association, and no more dangerous than skiers or skin divers. But a few belong to what the others call “outlaw clubs,” and these are the ones who–especially on weekends and holidays–are likely to turn up almost anywhere in the state, looking for action. Despite everything the psychiatrists and Freudian casuists have to say about them, they are tough, mean and potentially as dangerous as a pack of wild boar. When push comes to shove, any leather fetishes or inadequacy feelings that may be involved are entirely beside the point, as anyone who has ever tangled with these boys will sadly testify. When you get in an argument with a group of outlaw motorcyclists, you can generally count your chances of emerging unmaimed by the number of heavy-handed allies you can muster in the time it takes to smash a beer bottle. In this league, sportsmanship is for old liberals and young fools. “I smashed his face,” one of them said to me of a man he’d never seen until the swinging started. “He got wise. He called me a punk. He must have been stupid.”
The most notorious of these outlaw groups is the Hell’s Angels, supposedly headquartered in San Bernardino, just east of Los Angeles, and with branches all over the state. As a result of the infamous “Labor Day gang rape,” the Attorney General of California has recently issued an official report on the Hell’s Angels. According to the report, they are easily identified:
The emblem of the Hell’s Angels, termed “colors,” consists of an embroidered patch of a winged skull wearing a motorcycle helmet. Just below the wing of the emblem are the letters “MC.” Over this is a band bearing the words “Hell’s Angels.” Below the emblem is another patch bearing the local chapter name, which is usually an abbreviation for the city or locality. These patches are sewn on the back of a usually sleeveless denim jacket. In addition, members have been observed wearing various types of Luftwaffe insignia and reproductions of German iron crosses.* (*Purely for decorative and shock effect. The Hell’s Angels are apolitical and no more racist than other ignorant young thugs.) Many affect beards and their hair is usually long and unkempt. Some wear a single earring in a pierced ear lobe. Frequently they have been observed to wear metal belts made of a length of polished motorcycle drive chain which can be unhooked and used as a flexible bludgeon… Probably the most universal common denominator in identification of Hell’s Angels is generally their filthy condition. Investigating officers consistently report these people, both club members and their female associates, seem badly in need of a bath. Fingerprints are a very effective means of identification because a high percentage of Hell’s Angels have criminal records.
In addition to the patches on the back of Hell’s Angel’s jackets, the “One Percenters” wear a patch reading “1%-er.” Another badge worn by some members bears the number “13.” It is reported to represent the 13th letter of the alphabet, “M,” which in turn stands for marijuana and indicates the wearer thereof is a user of the drug.
The Attorney General’s report was colorful, interesting, heavily biased and consistently alarming–just the sort of thing, in fact, to make a clanging good article for a national news magazine. Which it did; in both barrels. Newsweek led with a left hook titled “The Wild Ones,” Time crossed right, inevitably titled “The Wilder Ones.” The Hell’s Angels, cursing the implications of this new attack, retreated to the bar of the DePau Hotel near the San Francisco waterfront and planned a weekend beach party. I showed them the articles. Hell’s Angels do not normally read the news magazines. “I’d go nuts if I read that stuff all the time,” said one. “It’s all bullshit.”
Newsweek was relatively circumspect. It offered local color, flashy quotes and “evidence” carefully attributed to the official report but unaccountably said the report accused the Hell’s Angels of homosexuality, whereas the report said just the opposite. Time leaped into the fray with a flurry of blood, booze and semen-flecked wordage that amounted, in the end, to a classic of supercharged hokum: “Drug-induced stupors… no act is too degrading… swap girls, drugs and motorcycles with equal abandon… stealing forays… then ride off again to seek some new nadir in sordid behavior…”
Where does all this leave the Hell’s Angels and the thousands of shuddering Californians (according to Time) who are worried sick about them? Are these outlaws really going to be busted, routed and cooled, as the news magazines implied? Are California highways any safer as a result of this published uproar? Can honest merchants once again walk the streets in peace? The answer is that nothing has changed except that a few people calling themselves the Hell’s Angels have a new sense of identity and importance.
After two weeks of intensive dealings with the Hell’s Angels phenomenon, both in print and in person, I’m convinced the net result of the general howl and publicity has been to obscure and avoid the real issues by invoking a savage conspiracy of bogeymen and conning the public into thinking all will be “business as usual” once this fearsome snake is scotched, as it surely will be by hard and ready minions of the Establishment.•
Northwestern economist Robert Gordon may be too bearish on the transformative powers of the Internet, but he does make a good case that the technological innovations of a century ago dwarf the impact of the information revolution.
A well-written and sadly un-bylined Economist review of the academic’s new book, The Rise and Fall of American Growth, looks at how the wheels came off the U.S. locomotive in the 1970s, courtesy of the rise of global competition and OPEC along with increasing inequality on the homefront. Gordon is dour about the prospects of a new American century, believing technologists are offering thin gruel and that Moore’s Law is running aground. The reviewer thinks the economist is ultimately too dismissive of Silicon Valley.
The technological revolutions of the late 19th century transformed the world. The life that Americans led before that is unrecognisable. Their idea of speed was defined by horses. The rhythm of their days was dictated by the rise and fall of the sun. The most basic daily tasks—getting water for a bath or washing clothes—were back-breaking chores. As Mr Gordon shows, a succession of revolutions transformed every aspect of life. The invention of electricity brought light in the evenings. The invention of the telephone killed distance. The invention of what General Electric called “electric servants” liberated women from domestic slavery. The speed of change was also remarkable. In the 30 years from 1870 to 1900 railway companies added 20 miles of track each day. By the turn of the century, Sears Roebuck, a mail-order company that was founded in 1893, was fulfilling 100,000 orders a day from a catalogue of 1,162 pages. The price of cars plummeted by 63% between 1912 and 1930, while the proportion of American households that had access to a car increased from just over 2% to 89.8%.
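The scale of those figures is easy to verify; a minimal sketch (using only the numbers quoted in the passage above, and treating “just over 2%” as 2%) totals the track laid at that pace and the implied growth in car ownership:

```python
# Railways: 20 miles of new track per day, sustained from 1870 to 1900.
miles_per_day = 20
total_track = miles_per_day * 365 * 30
print(total_track)  # 219000 -- over 200,000 miles of new track in 30 years

# Cars: household access rose from just over 2% to 89.8% between 1912 and 1930.
growth_factor = 89.8 / 2.0
print(round(growth_factor))  # 45 -- roughly a 45-fold increase in 18 years
```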
America quickly pulled ahead of the rest of the world in almost every new technology—a locomotive to Europe’s snail, as Andrew Carnegie put it. In 1900 Americans had four times as many telephones per person as the British, six times as many as the Germans and 20 times as many as the French. Almost one-sixth of the world’s railway traffic passed through a single American city, Chicago. Thirty years later Americans owned more than 78% of the world’s motor cars. It took the French until 1948 to have the same access to cars and electricity that America had in 1912.
The Great Depression did a little to slow America’s momentum. But the private sector continued to innovate. By some measures, the 1930s were the most productive decade in terms of the numbers of inventions and patents granted relative to the size of the economy. Franklin Roosevelt’s government invested in productive capacity with the Tennessee Valley Authority and the Hoover Dam.
The second world war demonstrated the astonishing power of America’s production machine. After 1945 America consolidated its global pre-eminence by constructing a new global order, with the Marshall Plan and the Bretton Woods institutions, and by pouring money into higher education. The 1950s and 1960s were a golden age of prosperity in which even people with no more than a high-school education could enjoy a steady job, a house in the suburbs and a safe retirement.
But Mr Gordon’s tone grows gloomy when he turns to the 1970s.•
It’s not a done deal that technological unemployment will be widespread, that the “lights-out” factory will become the norm, but it’s possible enough that we should worry about such a scary situation now.
I doubt the answers will lie in somehow reining in technology. Not to overly anthropomorphize robots, but they have a “life” of their own. If humans and machines can both do the same job, the work will ultimately become the domain of AI. The solutions, if needed, will have to emerge from policy. Not the kind that artificially limits machines, but the type that provides security derived from social safety nets.
In an In These Times article, David Moberg writes that “much will depend on whether we humans leave robotization to the free market or whether we take deliberate steps to shape our future relationships with robots.” I disagree with his suggestion that perhaps we can design robots to merely augment human production. That’s implausible and at best an intermediate step, but the author writes intelligently on the topic.
If we’re on the brink of a period of robotic upheaval, labor organizing will be more crucial than ever. Workers will need unions with the power to negotiate the needs of the displaced.
Another aspect of the disruption could be an exacerbation of economic inequality. MIT economist David Autor argues that the advent of computing in the late 1970s helped drive our current stratification. As demand increased for abstract labor (college-educated workers using computers) and decreased for manual, routine labor (service workers with few skilled tasks), he says, the pay for different occupations consequently became more polarized, fueling the rise of inequality.
But Lawrence Mishel and his Economic Policy Institute colleagues, along with Dean Baker, argue that this model of polarization misses important nuances of contemporary labor markets and ignores the primary driver of inequality: public policy, not robots. They point to a range of U.S. policies, including encouragement of financial sector growth and suppression of the minimum wage, as contributing to burgeoning inequality.
No matter who is right, it’s indisputable that public policy, in addition to unions, can play a powerful role in curbing the ill effects of technological disruption.
For every action, a reaction: Small drones, in addition to all the good they can do, can be used for illicit surveillance and delivering explosives and smuggling, among other nefarious deeds, so Michigan researchers created a concept prototype of an anti-drone tool called “robotic falconry,” which nets the interloping technology and commandeers it to a safe place. What will the countermeasure be when spy drones can fit on the head of a pin? There’ll be a market, so something will emerge.
In January 2015, a Washington, DC, hobbyist accidentally flew his DJI Phantom quadcopter drone over the White House fence and crashed it on the lawn.
Two years earlier, a prankster sent his drone toward German chancellor Angela Merkel during a campaign rally.
Small drones have also proven to be effective tools of mischief that doesn’t make the national news, from spying to smuggling to hacking. So when Mo Rastgaar was watching World Cup soccer and heard about snipers protecting the crowd, he doubted that they’d fully understood a drone’s potential.
“I thought, ‘If the threat is a drone, you really don’t want to shoot it down—it might contain explosives and blow up. What you want to do is catch it and get it out of there.’”
So Rastgaar, an associate professor of mechanical engineering at Michigan Technological University, began work on a drone catcher, which could pursue and capture rogue drones that might threaten military installations, air traffic, sporting events—even the White House.•
Some people actually believe that those participating in the Gig Economy, that Libertarian wet dream, are mostly entrepreneurial souls gladly Ubering others just until they secure seed money for their startup. That’s preposterous.
Piecework employment isn’t good at all for Labor unless basic income is uncoupled from work, which isn’t the arrangement most citizens find themselves in. And if wages remain flat and too many people are reduced to rabbits with tasks but no benefits, we’re in a collective quandary.
Andrew Callaway has penned a Policy Alternatives article about his perplexing experiences in the so-called Sharing Economy. The writer ultimately doesn’t feel that such an arrangement is bad for everyone, but that most will not prosper within its new rules. The opening:
If you spend enough time in San Francisco, you’ll notice sharing economy workers everywhere. While you’re waiting to get some food, look for the most frantic person in the lineup and you can bet they’re working with an app. Some of them are colour-coded: workers in orange T-shirts are with Caviar, a food delivery app; those in green represent Instacart, an app for delivering groceries. The blue jackets riding Razor scooters are with Luxe—if you’re still driving yourself around this city, these app workers will park your car.
In the Bay Area, there are thousands of such people running through the aisles, fidgeting in line and racing against the clock. They spend most of their time in cars, where it can be harder to spot them. Oftentimes they’re double-parked in the bike lane, picking up a burrito from inside an adjacent restaurant or waiting for a passenger to come down from the apartment on top. If you look closely, you’ll see a placard in the window that says Uber or a glowing pink moustache indicating they drive around Lyft’s passengers. Last summer, I was one of them.
Oh, Canada! I’m writing you from Berkeley, California to warn you about this thing called “the sharing economy.” Since no one is really sharing anything, many of us prefer the term “the exploitation economy,” but due to its prevalence many in the Bay Area simply think of it as “the economy.”•
I recall reading and loving Michael Idov’s “The Movie Set That Ate Itself,” his strange 2011 GQ journalistic walkabout in which he reported from the insane Ukrainian film set of certifiable auteur Ilya Khrzhanovsky. Several unforeseen WTF professional and geopolitical moments later, he found himself one of Russia’s top screenwriters, crafting successful TV shows and films during the chill of the Second Cold War, perhaps an astute social commentator or maybe an unwitting government stooge.
Idov’s written a piece about his unexpected life changes for the New York Times Magazine, which is the first excellent longform article I’ve read this new year (sorry, Sean Penn). A passage about how the magazine editor began to branch out from the news biz to show biz, which offered greater freedom from the Kremlin’s intentionally fuzzy censorship rules:
Russia and the United States had exchanged the first salvos in the new cold war. Congress passed the Magnitsky Act, barring certain apparatchiks from entering the United States. In an asymmetric response, the Duma barred all Americans from adopting Russian children — a sudden jolt of direct discrimination, as my wife and I had been considering exactly that.
At work, too, not a week seemed to pass without a new law designed to curb free speech. Hastily adopted legislation basically made it illegal to offend any social group — though as wielded by the authorities, the new laws primarily seemed to protect the strong from the weak. Impugning the Soviet Union’s conduct in World War II was illegal. Disrespecting Russia’s ‘‘territorial integrity’’ was illegal. Mentioning drugs or suicide in a way that could be construed as ‘‘instructional’’ was illegal, and prosecutors could use an agency called Roskomnadzor to shut down any website for so much as an unruly user comment. A vile anti-gay law banned speech that ‘‘creates false equivalence between traditional and nontraditional lifestyle.’’ (This in a country whose pop stars’ wardrobes suggest that Russia’s biggest natural resource is rhinestones.) I had to fight Condé Nast’s in-house counsel for the right to publish a positive review of the Liberace biopic Behind the Candelabra; he objected to the use of the word ‘‘love’’ to describe a same-sex relationship.
The genius of all these laws was in their purposeful inconsistency, which ensured that almost anyone could be silenced at any time; they were designed to be implemented capriciously, to weed out undesirables. Editing a magazine became hazardous to your health — mental and otherwise. GQ’s political columnist, Andrew Ryvkin, was beaten up on the street by two pro-Putin writers of some renown, Sergei Minaev and Eduard Bagirov. I myself ended up slapping a Tatler editor on the steps of the Bolshoi Theater after he wrote anti-Semitic diatribes about me. This was shaping up to be the most surreal year of my life.
One night, I called Ryvkin with a spur-of-the-moment idea: ‘‘Let’s write Louie, but about me in Moscow.’’ Ryvkin had a similar background to mine (he spent his formative years in Boston) and similar comedic sensibilities; we both worshiped 30 Rock and Louis C.K. Three weeks, a few joints and several pizzas later, we had a pilot. The main character, a neurotic, blocked, broke Brooklyn novelist, comes to Moscow to promote his book, gets Jew-baited on live TV by a glib Russian oligarch and reconnects with his childhood friend Roman, now an out-of-control photographer modeled on Terry Richardson. The friends spend most of the episode crafting an appropriate response to the slur and finally head over to the oligarch’s club to beat him up. When they get there, however, the offender offers the novelist a plum job in Moscow, forcing him to sell out on the spot.
The script was a mishmash of autobiography and anger, filled with profanity, drug use, gay jokes, Nazi jokes and weird structural hiccups. I was venting every frustration of my day job. In a good measure of how little I cared about the pilot’s suitability for Russian TV, I named its protagonist Matt Rushkin, ‘‘Rashka’’ being an émigré’s derogatory term for Russia itself.•
That wonderful Wallace Shawn gathered all his guilt into an indigestible lump to write, in 1996, The Designated Mourner, about intellect under siege as society goes up in flames. Not as good as Aunt Dan and Lemon or Marie and Bruce, but interesting stuff in the run-up to the new millennium. In retrospect, Shawn seemed to have misfired a bit. It wasn’t the top that was vanishing but the middle.
Another thing we’ve lost besides the middle in our new normal is memory, that decidedly un-pliant thing. Even events from a few years ago seem like ancient history. Perhaps more than designated mourners, what we need now are designated reminders, people who can point out that the world didn’t begin with downloads.
One of the most colorful of current reminders is Matt Novak, founder of Paleofuture. After moving that site to Gizmodo, Novak penned “Oregon Was Founded As a Racist Utopia,” a post that seems very timely right now. Not that Oregonians are responsible for the Bundy brigade of anti-government interlopers, but it does speak to the history of regional resistance to authority. The opening:
When Oregon was granted statehood in 1859, it was the only state in the Union admitted with a constitution that forbade black people from living, working, or owning property there. It was illegal for black people even to move to the state until 1926. Oregon’s founding is part of the forgotten history of racism in the American west.
Waddles Coffee Shop in Portland, Oregon was a popular restaurant in the 1950s for both locals and travelers alike. The drive-in catered to America’s postwar obsession with car culture, allowing people to get coffee and a slice of pie without even leaving their vehicle. But if you happened to be black, the owners of Waddles implored you to keep on driving. The restaurant had a sign outside with a very clear message: “White Trade Only — Please.”
It’s the kind of scene from the 1950s that’s so hard for many Americans to imagine happening outside of the Jim Crow South. How could a progressive, northern city like Portland have allowed a restaurant to exclude non-white patrons? This had to be an anomaly, right? In reality it was far too common in Oregon, a state that was explicitly founded as a kind of white utopia.•
Life has always been, in some sense, a tale of two cities, divided between those who have and those who have not–or at least have much less. Even granting that, however, we’re living in a wildly unequal world. In a Factor-Tech piece, Lucy Ingham analyzes the conditions that have made it possible for the 1% to own most of the assets. She traces concentrated wealth in the U.S. back to Ronald Reagan’s economic policies (tax cuts for the rich, deregulation, etc.) and a less sexy salvo, a change in law allowing companies to buy back large amounts of their own stock. The writer thinks financialization, more than automation, is the problem, and that one result of a growing underclass has been a rising police state. An excerpt:
Inequality has always existed, and there is an argument to say it’s an inherent part of human society. However, the level of inequality is now far beyond what we perceive it to be, and that’s a big problem.
“The American consciousness about inequality is frozen in a previous era,” says [Les] Leopold, citing the US results of an international poll about the perceived gap between entry level workers’ and CEOs’ pay as an example.
In the poll, people from all walks of life and political affiliation were asked to state what they thought the average gap was between the lowest and highest earners at a typical company.
“By and large, no matter what their age or background or political affiliation was, it sort of came out to about 40:1 – for every one dollar to the entry-level worker, 40 to the CEO,” says Leopold. “That’s kind of what it was in 1970.”
The reality, according to The Labor Institute’s data about the top 100 CEOs, is 829:1, making the inequality gap around 20 times larger than people perceived it to be. In 2016 the Institute believes it will be worse still, projecting 859:1.
Yet when asked in the poll what the ratio should be, participants consistently said it should be even lower than the imagined rate of 40:1.
“Strong Republicans in this survey think it ought to be 12:1, strong Democrats say 5:1, the average is about 8:1,” adds Leopold.
So how have we not noticed that the reality is so very far from our perceptions?•
Sean Penn’s screen performances often depend on quantity as much as quality–not the best acting, but the most acting–so it’s no surprise that his attempt at gonzo journalism, a Rolling Stone feature about his facacta jungle interview of Joaquín Archivaldo Guzmán Loera (or “El Chapo,” as he’s known to his business associates), is logorrheic. The short Q&A embedded within the long article is deeply unsatisfying, and the piece as a whole is a mess, though not one without interest. It’s more fascinating for allowing a close-up of the actor-director’s staccato brain droppings and the technological logistics of securing a clandestine meeting with Mexico’s most-wanted man than for any insight into the cartel kingpin. It takes Penn only two paragraphs to describe his very own Oscar Zeta Acosta this way: “Espinoza is the owl who flies among falcons.” Bless his editor.
Penn, unsurprisingly, has deep sympathy for El Chapo despite his beheadings of those he wanted to eliminate and his murders of priests who refused extortion demands, arguing that American drug users are complicit in these crimes. In that case, Penn’s nose should be arrested for multiple homicide. It’s galling that the lightweight inquisition allows the subject to downplay his horrific violence, and it’s an odd way to protest the U.S. War on Drugs, which is an undoubtedly stupid thing. An excerpt:
It’s been about two hours of flight, when we descend from above the lush peaks toward a sea-level field. The pilot, using his encrypted cellphone, talks to the ground. I sense that the military is beefing up operations in its search area. Our original landing zone has suddenly been deemed insecure. After quite a bit of chatter from ground to air, and some unnervingly low altitude circling, we find an alternate dirt patch where two SUVs wait in the shade of an adjacent tree line, and land. The flight had been just bumpy enough that each of us had taken a few swigs off a bottle of Honor tequila, a new brand that Kate is marketing. I step from plane to earth, ever so slightly sobering my bearings, and move toward the beckoning waves of waiting drivers. I throw my satchel into the open back of one of the SUVs, and lumber over to the tree line to take a piss. Dick in hand, I do consider it among my body parts vulnerable to the knives of irrational narco types, and take a fond last look, before tucking it back into my pants.
Espinoza had recently undergone back surgery. He stretched, readjusted his surgical corset, exposing it. It dawns on me that one of our greeters might mistake the corset for a device that contains a wire, a chip, a tracker. With all their eyes on him, Espinoza methodically adjusts the Velcro toward his belly, slowly looks up, sharing his trademark smile with the suspicious eyes around him. Then, “Cirugia de espalda [back surgery],” he says. Situation defused.
We embark into the dense, mountainous jungle in a two-truck convoy, crossing through river after river for seven long hours. Espinoza and El Alto, with a driver in the front vehicle, myself and Kate with Alonzo and Alfredo in the rear. At times the jungle opens up to farmland, then closes again into forest. As the elevation begins to climb, road signage announces approaching townships. And then, as it seems we are at the entrance of Oz, the highest peak visibly within reach, we arrive at a military checkpoint. Two uniformed government soldiers, weapons at the ready, approach our vehicle. Alfredo lowers his passenger window; the soldiers back away, looking embarrassed, and wave us through. Wow. So it is, the power of a Guzman face. And the corruption of an institution. Did this mean we were nearing the man?
It was still several hours into the jungle before any sign we were getting closer. Then, strangers appear as if from nowhere, onto the dirt track, checking in with our drivers and exchanging hand radios. We move on. Small villages materialize from the jungle; protective peasant eyes relax at the wave of a familiar driver. Cellphones are of no use here, so I imagine there are radio repeaters on topographical high points facilitating their internal communications.
We’d left Los Angeles at 7 a.m. By 9 p.m. on the dash clock we arrive at a clearing where several SUVs are parked. A small crew of men hover. On a knoll above, I see a few weathered bungalows. I get out of the truck, search the faces of the crew for approval that I may walk to the trunk to secure my bag. Nods follow. I move. And, when I do…there he is. Right beside the truck. The world’s most famous fugitive: El Chapo.•
Tom Chatfield, an uncommonly thoughtful commentator on the technological world we’ve built for ourselves, is interviewed by Nigel Warburton of Aeon about staying human in a machine age. In the seven-minute piece, Chatfield notes that games in the Digital Age have become more meaningful than work in many instances, because the former builds skills in players while the latter looks to replace the messy human component.
A much more exciting model of human-machine interaction, Chatfield offers, is one in which we maximize what people and AI are each good at. That would be great, and it’s doable in the short run if we choose to approach the situation that way, but I do believe that ultimately whatever tasks both humans and machines can do will be ceded almost entirely to silicon. A freestyle-chess approach to production will have a short shelf life in most applications. We may be left to figure out brand-new areas in which we can thrive and to define why we exist.
At any rate, smart stuff about automated systems. Watch here.
If you’d asked me what Charles Koch eats for lunch, I would have guessed pulled pork or jerk chicken. The billionaire industrialist and right-wing benefactor opted for the former when he sat down to dine and talk with Stephen Foley of the Financial Times for an interesting interview. Funny that Koch now regrets many of the policies he’s spent elephantine sums supporting in the new century. (Of course, it’s not the first time he’s voiced opinions at odds with the think-tanks, projects and politicians he bankrolls.) Something tells me he’ll be regretting the beliefs he currently supports in another decade.
An excerpt about the current slate of GOP 2016 hopefuls:
I ask about the rhetorical turn the race has taken when it comes to dealing with Islamist terror, and about Trump’s assertion that the US could require all Muslims in the country to register with the government.
“Well, then you destroy our free society,” Koch says of the idea. “Who is it that said, ‘If you want to defend your liberty, the first thing you’ve got to do is defend the liberty of people you like the least’?”
He then expounds on the war on terror. “We have been doing this for a dozen years. We invaded Afghanistan. We invaded Iraq. Has that made us safer? Has that made the world safer? It seems like we’re more worried about it now than we were then, so we need to examine these strategies.”
It’s a view that also contrasts with that of another Republican frontrunner; Ted Cruz’s plan to carpet-bomb Isis strongholds is anathema to Koch. “I’ve studied revolutionaries a lot,” he says. “Mao said that the people are the sea in which the revolutionary swims. Not that we don’t need to defend ourselves and have better intelligence and all that, but how do we create an unfriendly sea for the terrorists in the Muslim communities? We haven’t done a good job of that.” With about 1.6bn Muslims worldwide “in country after country. What,” he asks, “are we going to do: go bomb each one of them?”
These particular views could almost have come from the mouth of Bernie Sanders, the socialist challenger to Hillary Clinton for the Democratic nomination and a regular basher of the Kochs.•
It’s usually better to worry too soon than too late about an ethical quandary, but the National Institutes of Health is thinking far in advance when it expresses concern about scientists attempting to grow human organs in lab animals. It’s not that the NIH believes such experiments are bad for the creatures–that would be understandable–but the agency wants to halt the research because it fears injecting human cells into other species may invest them with a human level of understanding. It’s really difficult to believe that’s happening anytime soon.
In an MIT Technology Review report, Antonio Regalado writes that numerous American labs are pushing forward on this front despite threats of funding being pulled. An excerpt:
The experiments rely on a cutting-edge fusion of technologies, including recent breakthroughs in stem-cell biology and gene-editing techniques. By modifying genes, scientists can now easily change the DNA in pig or sheep embryos so that they are genetically incapable of forming a specific tissue. Then, by adding stem cells from a person, they hope the human cells will take over the job of forming the missing organ, which could then be harvested from the animal for use in a transplant operation.
“We can make an animal without a heart. We have engineered pigs that lack skeletal muscles and blood vessels,” says Daniel Garry, a cardiologist who leads a chimera project at the University of Minnesota. While such pigs aren’t viable, they can develop properly if a few cells are added from a normal pig embryo. Garry says he’s already melded two pigs in this way and recently won a $1.4 million grant from the U.S. Army, which funds some biomedical research, to try to grow human hearts in swine.
The worry is that the animals might turn out to be a little too human for comfort, say ending up with human reproductive cells, patches of people hair, or just higher intelligence. “We are not near the island of Dr. Moreau, but science moves fast,” NIH ethicist David Resnik said during the agency’s November meeting. “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.”•
To many Americans, the sons of Bundy and their militia mates evoke one question: why? Why would a group of well-fed, so-called patriots think their own government, for whatever flaws it possesses, is the devil? It seems madness, alien to any rational interpretation. That could be because the terroristic behavior isn’t driven by facts but by faith, and one given to particularly violent tendencies. You don’t need religion to do something rash and scary, of course, but it can be a very potent ingredient in a toxic mix.
There has always been faith-fuelled madness in the country, as best demonstrated in Gilbert Seldes’ book The Stammering Century, and Jon Krakauer believes the Oregon occupation is powered by the same spiritual madness that abetted the murders committed by Dan and Ron Lafferty, which he investigated in Under the Banner of Heaven.
In a Medium article, the author shares two passages of his 2003 book particularly pertinent to current events. An excerpt:
After Dan Lafferty read The Peace Maker in the early 1980s and resolved to start living the principle of plural marriage, he announced to his wife, Matilda, that he intended to wed her oldest daughter — his stepdaughter. At the last minute, however, he abandoned that plan and instead married a Romanian immigrant named Ann Randak, who took care of the horses on one of Robert Redford’s ranches up Spanish Fork Canyon, in the mountains east of the Dream Mine. Ann and Dan met when he borrowed a horse from her to ride in a local parade. She wasn’t LDS, says Dan, “but she was open to new experiences. Becoming my plural wife was her idea.” Ann, he adds, “was a lovely girl. I called her my gypsy bride.”
Living according to the strictures laid down in The Peace Maker felt good to Dan — it felt right, as though this really were the way God intended men and women to live. Inspired, Dan sought out other texts published by a well-known fundamentalist and Dream Mine backer, Ogden Kraut, about Mormonism as it was practiced in the early years of the church.
It didn’t take him long to discover that polygamy wasn’t the only divine principle the modern LDS Church had abandoned in its eagerness to be accepted by American society. Dan learned that in the 19th century, both Joseph Smith and Brigham Young had preached about the righteousness of a sacred doctrine known as “blood atonement:” Certain grievous acts committed against Mormons, as Brigham explained it, could only be rectified if the “sinners have their blood spilt upon the ground.” And Dan learned that Joseph had taught that the laws of God take precedence over the laws of men.
Legal theory was a subject of particular interest to Dan. His curiosity had first been aroused when he was training to be a chiropractor in California, following a run-in he had with state and county authorities. At the time, he supported his family primarily by running a small sandwich business out of their home. Dan, Matilda, and the oldest kids would get out of bed before dawn every morning in order to make and wrap stacks of “all natural” vegetarian sandwiches, which Dan would then sell to other chiropractic students during the lunch hour.
“It was a very profitable little hustle,” Dan says proudly. “Or it was until the Board of Health closed me down for not following regulations. They claimed I needed a license, and that I wasn’t paying the required taxes.” Just before he was put out of business, Matilda had given birth to a baby boy. Money was tight. Losing their main source of income was problematic. It also proved to be a pivotal event in Dan’s passage to fundamentalism.
“After they shut me down,” Dan recalls, “I didn’t know quite what to do. It didn’t seem right to me, that the government would penalize me just for being ambitious and trying to support my family — that they would actually force me to go on welfare instead of simply letting me run my little business. It seemed so stupid — the worst kind of government intrusion. In The Book of Mormon, Moroni talks about how all of us have an obligation to make sure we have a good and just government, and when I read that, it really got me going. It made me realize that I needed to start getting involved in political issues. And I saw that when it comes right down to it, you can’t really separate political issues from religious issues. They’re all tied up together.”•
Hanna Reitsch would have been a feminist hero, if it weren’t for the Nazism.
As with the equally talented Leni Riefenstahl, politics made her story the thorniest thing. Reitsch was a pioneering, early-20th-century test pilot, an aviatrix as she was called in that era, but her gifts and great daring were used in the service of the Nazi Party beginning in the 1930s. Her importance in the scheme of things was such that she visited Hitler in his bunker in 1945.
Although her reputation was always sullied–and, of course, it should have been–Reitsch nonetheless enjoyed considerable standing, becoming a champion glider pilot and even being invited to the White House as a guest during the Kennedy Administration.
In 1976, three years before her death, Reitsch was interviewed about her aerial exploits.
Even though he remains one of the pantheon filmmakers, Fritz Lang had mixed feelings about the medium. Talkies initially left him cold, and later on he found the Hollywood studio system a discombobulating compromise.
In 1972, Lang was interviewed by two reporters, Lloyd Chesley and Michael Gould, confiding that he had tired of directing movies by the advent of talking pictures and had decided to reinvent himself as a chemist. A disreputable money man dragged him back into the business and gave him the creative freedom to make the chilling classic M. An excerpt from the interview:
Your themes changed from epic to intimate when you began making sound films.
I got tired from the big films. I didn’t want to make films anymore. I wanted to become a chemist. About this time an independent man—not of very good reputation—wanted me to make a film and I said ‘No, I don’t want to make films anymore.’ And he came and came and came, and finally I said ‘Look, I will make a film, but you will have nothing to say for it. You don’t know what it will be, you have no right to cut it, you only can give the money.’ He said ‘Fine, understood.’ And so I made M.
We started to write the script and I talked with my wife, Thea von Harbou, and I said ‘What is the most insidious crime?’ We came to the fact of anonymous poison letters. And then one day I said I had another idea—long before this mass murderer, [Peter] Kurten, in the Rhineland. And if I wouldn’t have the agreement for no one to tell me anything, I would never, never have made M. Nobody knew Peter Lorre.•
In 1975, Lang and William Friedkin, two directors transfixed by extreme evil, engaged in conversation.
John Cale sometimes seems exhausted talking about The Velvet Underground, and who could blame him? An unlikely rock star to begin with, the Welsh musician was a classically trained violinist with strong avant-garde leanings who arrived in New York City just as its rock and art scenes were exploding into one another, collaborating almost immediately with the volatile Lou Reed and soon enough with the vampiric Andy Warhol. Cale lasted two albums with the band but has never escaped its reputation. How could he?
“In Chicago, I was singing lead because Lou had hepatitis, no one knew the difference. We turned our faces to the wall and turned up very loud. Paul Morrissey (later the director of Trash) and Danny Williams had different visions of what the light show should be like and one night I looked up to see them fighting, hitting each other in the middle of a song. Danny Williams just disappeared. They found his clothes by the side of a river, with his car nearby … the whole thing. He used to carry this strobe around with him all the time and no one could figure out why till we found out he kept his amphetamine in it.”
“We worked the Masonic Hall in Columbus Ohio. A huge place filled with people drinking and talking. We tuned up for about ten minutes, tuning, fa-da-da, up, da-da-da, down. There’s a tape of it. Played a whole set to no applause, just silences.”
“In San Francisco, we played the Fillmore and no one liked us much. We put the guitars against the amps, turned up, played percussion and then split. Bill Graham came into the dressing room and said, ‘You owe me 20 more minutes’. I’d dropped a cymbal on Lou’s head and he was bleeding. ‘Is he hurt?’ Graham said, ‘We’re not insured.'”
“Severn Darden brought this young chick up to meet me there and he introduced her as one of my ardent admirers. This was a long time ago and I didn’t know about such things, so I said, ‘Pleased to meet you,’ and walked off. Two days later in L.A., here comes Severn again with this girl. I say hello again and leave. We’re all staying at the castle in L.A., and things are very hazy, if you know what I mean. Well, this girl is there too. I smile but I still don’t understand. About two in the morning the door of my room opens and she walks in naked and gets into bed. Went on for five nights. I don’t think I even got her address.”
The Velvets suffered from all kinds of strange troubles. They spent three years on the road away from New York City, their home, playing Houston, Boston, small towns in Pennsylvania, anywhere that would pay them scale.
“We needed someone like Andy,” John says. “He was a genius for getting publicity. Once we were in Providence to play at the Rhode Island School of Design and they sent a TV newsman to talk to us. Andy did the interview lying on the ground with his head propped up on one arm. There were some studded balls with lights shining on them and when the interviewer asked him why he was on the ground, Andy said, “So I can see the stars better.” The interview ended with the TV guy lying flat on his back saying, “Yeah, I see what you mean.”•
A 21-year-old John Cale, in the year he arrived in NYC and the one before he met Reed, appearing on I’ve Got a Secret.
Every political season has its boogeymen, those frightening figures raised to scare up votes, and this particularly vitriolic period in the U.S. has Muslims, Mexicans and Chinese manufacturers. The latter pair are supposedly responsible for the decline of American manufacturing, and by extension, the middle class.
Outsourcing the making of American products to other countries may have been something of a problem over the last three decades, but workers today face a different kind of challenge: potentially widespread automation that goes far beyond the factory floor. It’s nothing new, but the current pace of robotics progress is unprecedented.
Technological unemployment has been paid scant attention by 2016 hopefuls, most of whom are promising a return to postwar American manufacturing, which is most definitely not going to happen. In a New York Times editorial, Emma Roller argues that trying to turn back the digital clock is “not an attainable, or even a desirable, goal.” An excerpt:
Republicans aren’t the only ones obsessing over reclaiming these factory jobs. Last month, Hillary Clinton mentioned factory closings when she released her own plan to restore manufacturing jobs through a network of tax credits and federal funding for research. Senator Bernie Sanders, meanwhile, in criticizing the Trans-Pacific Partnership, has argued that such international trade deals are to blame for the loss of manufacturing jobs in this country.
The problem with this sort of rhetoric is that a lot of the manufacturing jobs the United States lost over the past 50 years didn’t go overseas; they simply disappeared with the advent of new technology.
James Sherk, a research fellow in labor economics at the Heritage Foundation, said the trend in machines taking over factory work that was previously done by humans has been going on since the 1950s. But for presidential candidates, it’s a lot easier to blame other countries rather than robots.
“It’s those basically rote, repetitive tasks where you’re fixing the same thing,” he said. “It’s very hard to imagine any of those positions coming back. Basically, a robot is a lot more affordable than a human employee.”
The skills needed to work on a factory floor today are quite different than they were 20, 10 or even five years ago. Don’t blame stingy companies or over-regulation by the government; blame the rapid progress of technology.•
At some point this century, and probably sooner rather than later, sensors will live inside pretty much all manufactured objects, moving every last thing into an interconnected data-gathering and -crunching system. Part of the mission will be to make individual lives and entire cities more efficient, constantly upgrading, but much of it will be about consumerism, creating and selling “products that respond to their owner’s tastes,” as Quentin Hardy of the New York Times notes in his really smart article about technologist Adam Bosworth’s attempt to bring about a “data singularity.” All the world will be a “smart object,” privacy will be compromised to an unprecedented degree, and there’ll be no way to opt out. The blessing will be mixed. An excerpt:
Imagine if almost everything — streets, car bumpers, doors, hydroelectric dams — had a tiny sensor. That is already happening through so-called Internet-of-Things projects run by big companies like General Electric and IBM.
All those devices and sensors would also wirelessly connect to far-off data centers, where millions of computer servers manage and learn from all that information.
Those servers would then send back commands to help whatever the sensors are connected to operate more effectively: A home automatically turns up the heat ahead of cold weather moving in, or streetlights behave differently when traffic gets bad. Or imagine an insurance company instantly resolving who has to pay for what an instant after a fender-bender because it has been automatically fed information about the accident.
Think of it as one, enormous process in which machines gather information, learn and change based on what they learn. All in seconds.
“I’m interested in affecting five billion people,” said Mr. Bosworth, a former star at Microsoft and Google who now makes interactive software at Salesforce.com, an online software company that runs sales for thousands of corporations. “We’re headed into one of those historic discontinuities where society changes.”•
Donald Trump doesn’t want to force menstruating women to wear burqas, but what else can he do? I mean, he’s a businessman.
It’s amusing to listen to the hideous hotelier try to torpedo Bernie Sanders with cheap insults the way he does his fellow Republicans, the Vermont Senator impervious to taunts like “wacko” because of his gravitas, common sense and sheer likability. It’s difficult to envision Sanders faring well in Southern primaries, but perhaps he’s awakening a positive populist energy that won’t readily go away, just as Trump has awakened an enduring hatred. As the Occupy movement framed the 2012 election season, maybe Sanders will do the same now. Is he part of an elongated prelude to something significant? From Simon Head, author of The New Ruthless Economy:
In 2003 I wrote in my The New Ruthless Economy that one of the great imponderables of the twenty-first century was how long it would take for the deteriorating economic circumstances of most Americans to become a dominant political issue. It has taken over ten years but it is now happening, and its most dramatic manifestation to date is the rise of Bernie Sanders. While many political commentators seem to have concluded that Hillary Clinton is the presumptive Democratic nominee, polls taken as recently as the third week of December show Sanders to be ahead by more than ten points in New Hampshire and within single-figure striking distance of her in Iowa, the other early primary state.
Though he continues to receive far less attention in the national media than Hillary Clinton or Donald Trump, Sanders is posing a powerful challenge not only to the Democratic establishment aligned with Hillary Clinton, but also to the school of thought that assumes the Democrats need an establishment candidate like Clinton to run a viable campaign for president. Why this should be happening right now is a mystery for historians to unravel. It could be the delayed effect of the Great Recession of 2007–2008, or of economists Thomas Piketty and Emmanuel Saez’s unmasking of the vast concentration of wealth among the top 1 percent and the 0.1 percent of Americans, or just the cumulative effect of years of disappointment for many American workers.
Such mass progressive awakenings have happened before. I remember taking part in antiwar demonstrations on the East and West coasts in the Fall and Winter of 1967–1968. I noticed that significant numbers of solid middle-class citizens were joining in, sometimes with strollers, children, and dogs in tow. I felt at the time that this was the writing on the wall for Lyndon Johnson, as indeed it turned out to be. We may yet see such a shift away from Hillary Clinton, despite her strong performance in the recent debates and her recent recovery in the polls.
If it happens, it will be due in large part to Sanders’s unusual, if not unique, political identity.•
The Atlantic put together a predictably smart piece (“Can the Planet Be Saved?”) which asks scientists and thinkers what they feel most despairing and most hopeful about at year’s end. The first entry, by University of Arizona Law and Public Policy Professor Robert Glennon, speaks to a challenge made stark by the California droughts that worsened in 2015: water security. Our concept of H2O is baffling, as it’s priced cheap (and often wasted frivolously) yet, along with oxygen, is the dearest thing we have. An excerpt:
Reason for despair: I despair that we don’t consider water to be scarce or valuable. A century of lax water laws and regulations has spoiled most Americans. We turn on the tap and out comes as much water as we want for less than we pay for cable television or cellphone service. When most Americans think of water, they think of it as similar to air—as infinite and inexhaustible. In reality, it’s both finite and exhaustible.
Because we don’t respect water as remarkable, we use needless quantities for frivolous purposes, such as growing grass in the desert. And because we don’t pay the real cost of water (only the cost of the infrastructure to provide it), we remove the incentive to conserve. Perhaps most important, our innovation economy has encouraged engineers and inventors to create water-saving technologies that extend our supply; but the price of water is so low that few of them have viable business plans.•
A BBC thought experiment proposes we carpet the Sahara with solar panels, asking experts in different fields about the feasibility of such a project. Seems like a no-brainer as a way to produce clean energy, but Helen Anne Curry, a lecturer in the Department of History and Philosophy of Science at Cambridge University, points out that innovating our way out of pollution may cause new issues unless we also curb consumption. Skeptical soul that I am, I don’t see our hunger for energy being sated anytime soon. Better technology may be our only chance. An excerpt:
Helen Anne Curry: Technology alone is rarely the answer
“I am interested in exploring the persistent optimism that surrounds new technologies, even after multiple failures.
“The technological fix is appealing; it’s exciting to think we can solve problems without fundamentally having to change the way we live, the way we get to work every day or the number of cheap flights we take.
“But you can’t just take one point in the system and say ‘that’s solved’; there is much more that extends outwards.
“Think of the work that was done to solve local air pollution in the mid-twentieth century, which was to build super-tall smokestacks.
“But they don’t eliminate the pollution from the air. They just throw it up much higher in the atmosphere, so in fact it circulates further. One of the subsequent problems of building these was they created acid rain in places that didn’t have this kind of concentrated industry.
“We can use our science and technology knowledge to bring other peoples of the world into the quality of life that the global north has enjoyed for far longer.
“Yet if you look back on 60 years of policy work and intervention, there’s a lot of ways in which we’ve failed. We haven’t been able to deliver the social, scientific and technological progress which we envisioned.
“I think the only reason to pursue [solar panels in the Sahara] would be if it were a stopgap measure in which the long-term goal would be to reduce consumption of energy and to change our lifestyles to be more sustainable, so that subsequent generations don’t have to deal with as many problems as we’re going to leave them.”•
Catastrophist philosopher Nick Bostrom believes machine superintelligence may be the greatest existential risk facing humankind, that it could, perhaps sooner rather than later, be the end of us if we’re not careful. There’s nothing theoretically impossible about that, though I seriously doubt the sooner part. First maybe McDonald’s will be fully automated, and then much, much, much later on we face a robot-inspired endgame. I actually think it’s more likely that such computer intelligence will help us engineer our own evolution into whatever it is we become in the long run, though miscalculation leading to a cascading disaster might become a plausible scenario at some point.
In a Washington Post piece, Joel Achenbach explores the so-called Artificial Intelligence threat and the professional worriers who analyze it and exhort us to shape the future. MIT computer scientist Daniela Rus is presented as a counterpoint to Bostrom, physicist Max Tegmark and other thinkers who fear an AI-inspired end is near. The opening:
The world’s spookiest philosopher is Nick Bostrom, a thin, soft-spoken Swede. Of all the people worried about runaway artificial intelligence, and Killer Robots, and the possibility of a technological doomsday, Bostrom conjures the most extreme scenarios. In his mind, human extinction could be just the beginning.
Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.
Then it goes interstellar.
“You could have a superintelligence whose only goal is to make as many paper clips as possible, and you get this bubble of paper clips spreading through the universe,” Bostrom calmly told an audience in Santa Fe, N.M., earlier this year.
He added, maintaining his tone of understatement, “I think that would be a low-value future.”
Bostrom’s underlying concerns about machine intelligence, unintended consequences and potentially malevolent computers have gone mainstream. You can’t attend a technology conference these days without someone bringing up the A.I. anxiety. It hovers over the tech conversation with the high-pitched whine of a 1950s-era Hollywood flying saucer.•