Nicholas Carr


Haven’t yet gotten my stinking paws on Sandy Pentland’s new book, Social Physics: How Social Networks Can Make Us Smarter, but the Nicholas Carr critique in Technology Review is instructive even if you possess no prior knowledge of the computer scientist’s vision for the future. 

A society all watched over by machines of loving grace seems implausible to me, whether we’re talking about totalitarian states or democratic ones. The former will use sensors and chips to monitor and manipulate behavior—and so will the latter, actually. And the distance from a nudge to a shove is shorter than we might think.

The supposed virtue of Big Data is that it can view behavior and administer justice without prejudice, except that it’s programmed by humans who possess exactly those preconceived notions. As Carr explains, a further failing is that skimming the surface of society for information to engineer the populace pays no mind to historical context, so it can be a feedback loop rather than a corrective. To be succinct: It lacks the depth of the past and an understanding of the very nature of being human.

An excerpt:

Ultimately, Pentland argues, looking at people’s interactions through a mathematical lens will free us of time-worn notions about class and class struggle. Political and economic classes, he contends, are “oversimplified stereotypes of a fluid and overlapping matrix of peer groups.” Peer groups, unlike classes, are defined by “shared norms” rather than just “standard features such as income” or “their relationship to the means of production.” Armed with exhaustive information about individuals’ habits and associations, civic planners will be able to trace the full flow of influences that shape personal behavior. Abandoning general categories like “rich” and “poor” or “haves” and “have-nots,” we’ll be able to understand people as individuals—even if those individuals are no more than the sums of all the peer pressures and other social influences that affect them.

Replacing politics with programming might sound appealing, particularly given Washington’s paralysis. But there are good reasons to be nervous about this sort of social engineering. Most obvious are the privacy concerns raised by collecting ever more intimate personal information. Pentland anticipates such criticisms by arguing for a “New Deal on Data” that gives people direct control over the information collected about them. It’s hard, though, to imagine Internet companies agreeing to give up ownership of the behavioral information that is crucial to their competitive advantage.

Even if we assume that the privacy issues can be resolved, the idea of what Pentland calls a “data-driven society” remains problematic. Social physics is a variation on the theory of behavioralism that found favor in McLuhan’s day, and it suffers from the same limitations that doomed its predecessor. Defining social relations as a pattern of stimulus and response makes the math easier, but it ignores the deep, structural sources of social ills. Pentland may be right that our behavior is determined largely by social norms and the influences of our peers, but what he fails to see is that those norms and influences are themselves shaped by history, politics, and economics, not to mention power and prejudice. People don’t have complete freedom in choosing their peer groups. Their choices are constrained by where they live, where they come from, how much money they have, and what they look like. A statistical model of society that ignores issues of class, that takes patterns of influence as givens rather than as historical contingencies, will tend to perpetuate existing social structures and dynamics. It will encourage us to optimize the status quo rather than challenge it.

Politics is messy because society is messy, not the other way around.•


The problem with America worrying about the existential risks of AI is that losing the race to AI is also an existential risk. If we invest correctly in the future (not just Artificial Intelligence but also solar and supercomputers) while providing enough infrastructure projects and social safety nets to keep afloat those displaced (hopefully temporarily) by our transition into the Digital Age, the country shouldn’t fall behind China or any other state. Of course, we’re so politically confused and toxic right now that such a scenario seems possible though not plausible. If China should win this arms race and Space Race rolled into one, the authoritarian nation will have the military heft and soft power to shape the world.

Daniel Kliman and Harry Krejsa worry about this dark potential in “Is China Leaping Past Us?,” a Politico piece about this Sputnik Moment 2.0:

Its companies are attempting to acquire U.S. firms in key advanced technology sectors like semiconductor development and manufacturing. Chinese corporations have also opened research centers in the United States to tap American talent, and made early-stage investments in American startups focused on cutting-edge technologies like artificial intelligence and robotics. A small Silicon Valley venture might find access to their intellectual property a minor price to pay for a game-changing capital infusion.

Failing to address China’s efforts to acquire U.S. technology will have far-reaching consequences. The Commission on the Theft of American Intellectual Property estimates that piracy, theft, and counterfeiting by China costs the U.S. economy between $225 billion and $600 billion a year, or up to 3 percent of the entire U.S. GDP. In the long term, the costs only grow more daunting. If scientific advances in quantum communications, artificial intelligence, biotechnology, energy, and battery technology increasingly move to China, so will the future industries – and jobs – that will accompany them. Moreover, future U.S. military advantage depends on America’s continued technological leadership. If China outpaces the United States in innovation, loss of America’s military edge in the Asia-Pacific, if not globally, could follow.•

No matter who is the victor, or if several nations are, the future we’re creating is a machine that will swallow up our privacy and attempt to quantify, surveil and commodify us ceaselessly. And no one will be able to hop over the sensors or hit an OFF switch. In a smart New York Times op-ed, “These Are Not the Robots We Were Promised,” Nicholas Carr believes our warm welcome of these nascent ambient technologies, as the robots become shapeless and ubiquitous, speaks to our narcissism, which is certainly so. But I think it may be more than that. Religion may have declined, but our fear of being alone on a spinning, jagged rock remains as strong as ever.

An excerpt:

Although they may not look like the robots we envisioned, smart speakers do have antecedents in our cultural fantasy life. The robot they most recall at the moment is HAL, the chattering eyeball in Stanley Kubrick’s sci-fi classic 2001: A Space Odyssey. But their current form — that of a stand-alone gadget — is not likely to be their ultimate form. They seem fated to shed their physical housing and turn into a sort of ambient digital companion. Alexa will come to resemble Samantha, the “artificially intelligent operating system” that beguiles the Joaquin Phoenix character in the movie “Her.” Through a network of speakers, microphones and sensors scattered around our homes, we’ll be able to converse with our solicitous A.I. assistants wherever and whenever we like.

Mark Zuckerberg, the Facebook C.E.O., spent much of last year programming a prototype of such a virtual agent. In a video released in December, he gave a demo of the system. Walking around his Silicon Valley home, he conducted a running dialogue with his omnipresent chatbot, calling on it to supply him with a clean T-shirt and toast bread for his breakfast, play movies and music, and entertain his infant daughter. Hooked up to cameras with facial-recognition software, the digitized Jeeves also acted as a sentry for the Zuckerberg compound, screening visitors and unlocking the gate.

Whether real or fictional, robots hold a mirror up to society. If Rosie and her kin embodied a 20th-century yearning for domestic order and familial bliss, smart speakers symbolize our own, more self-absorbed time.

It seems apt that as we come to live more of our lives virtually, through social networks and other simulations, our robots should take the form of disembodied avatars dedicated to keeping us comfortable in our media cocoons. Even as they spy on us, the devices offer sanctuary from the unruliness of reality, with all its frictions and strains.•


“The future is speeding at us, and it’s almost abusive how deeply cynical both sides are,” the Republican political consultant Rick Wilson recently said, speaking of the response from his party and the Democrats to manufacturing and automation. Specifically, he was referring to the Trump Administration’s promise of a return to glory for plants and mines, and to the Democrats’ belief that every worker formerly on the assembly line can be upskilled into a software engineer. I doubt most conservatives beyond Trump believe the former, and it’s dubious that the majority of Democrats believe the latter. Those ideas, however, have been prominent in the last year.

· · ·

The idea that robotics will displace many American workers is as true now as it has been for at least a century. As long as there have been machines, really, they’ve gradually taken over some work as new opportunities were created. The question is whether we’re on the verge of an AI boom that will speed this transition beyond our ability to manage it. Such rapid progress would mean we’re becoming wealthy in the aggregate, but distribution would likely be a huge problem. That’s why so many in the tech field have suggested a Universal Basic Income, something for everyone, not just a reverse tax credit to lift the less fortunate out of poverty. But while this work-less future is possible, it seems far from plausible.

· · ·

Currently there’s wide agreement on all sides that production numbers don’t show a radical expansion of technology displacing workers and boosting output. The only caveat is that advances are sometimes overpromised, then ridiculed, and then delivered in a massive way. Not so with cold fusion, but that certainly was the case with computers and the Internet.

In 1985, the lively New York Times reporter Erik Sandberg-Diment sarcastically eulogized the laptop, laughing at what Silicon Valley had believed could be the future. The opening:

WHATEVER happened to the laptop computer? Two years ago, on my flight to Las Vegas for Comdex, the annual microcomputer trade show, every second or third passenger pulled out a portable, ostensibly to work, but more likely to demonstrate an ability to keep up with the latest fad. Last year, only a couple of these computers could be seen on the fold-down trays. This year, every one of them had been replaced by the more traditional mixed drink or beer.

Was the laptop dream an illusion, then?•

Imagine his humbling just two decades later, when Feynman’s “Plenty of Room at the Bottom” vision was proven correct and the iPhone was introduced.

· · ·

“Robots will show up in China just in time,” Daniel Kahneman has said. In order to sustain its giant population, China will need robotics on a mass scale. Its neighbor Japan will probably require automation on an even grander scale despite a much smaller population. An ardently anti-immigrant country with a graying citizenry, Japan is among the states that could be asking the inverse question: What will happen if robots don’t take all the jobs?

· · ·

My best guess is that there will always be work to do in the future, but sometimes not enough. Not every job needs to disappear to destabilize society in a serious way, just enough of them. If entire industries vanish into the zeros and ones too quickly, the way video stores across America were decimated by Netflix’s 3,500 employees and endless algorithms (and, yes, I define algorithms as robots), whole sectors can be left in the dust. Many of the first positions to go will be lousy jobs (e.g., truck driver), but that doesn’t mean those already settled into such careers will have an easy time of it. AI may not be an avalanche that crushes us all, but it could be a continuous series of small earthquakes.

Two excerpts on opposite sides of the argument follow.

_______________________

An exchange about a potential AI revolution from a Reddit AMA by Life 3.0 author Max Tegmark:

Question:

Do you believe AI will take over the majority of “menial” jobs within the working world, and if so how will we as people adjust to support those who would have been employed within those positions?

Max Tegmark:

Not only menial jobs, but also many jobs that require lots of training for us humans, such as analyzing radiology images to determine whether patients have cancer. To safeguard your career, go for jobs that machines are bad at – involving people, unpredictability and creativity. Avoid careers about to get automated away, involving repetitive or structured actions in a predictable setting. Telemarketers, warehouse workers, cashiers, train operators, bakers or line cooks. Drivers of trucks, buses, taxis and Uber/Lyft cars are likely to follow soon. There are many more professions (including paralegals, credit analysts, loan officers, bookkeepers and tax accountants) that, although they aren’t on the endangered list for full extinction, are getting most of their tasks automated and therefore demand much fewer humans. I give more detailed job advice in Chapter 3 of my new book. If machines become able to do all our jobs in a few decades, that doesn’t have to spell doom and gloom as is commonly assumed. It could give everyone who wanted it a life of leisure and play if we as a society share the vast new wealth produced by machines in a way such that nobody gets worse off. There’ll be plenty enough resources to do this, but whether there’s the political will is another matter, and currently I feel that things are moving in the opposite direction in the US and most western countries, with large groups of people getting steadily poorer in real terms – creating anger which helps explain the victories of Trump & Brexit.•

______________________

From Nicholas Carr’s latest Rough Type rebuttal to the idea that the robots are coming for us:

“You can see the robot age everywhere but in the labor statistics,” I wrote a few months ago, channeling Robert Solow. The popular and often alarming predictions of a looming unemployment crisis, one that would stem from rapid advances in robotics, artificial intelligence, and other computer automation technologies, have become increasingly hard to square with the economy’s rebound to near full employment. If computers were going to devastate jobs on a broad scale, one would think there’d be signs of it by now. We have, after all, been seeing remarkable gains in computing and software for many decades, while the broadband internet has been working its putative magic for more than twenty years. And it’s not like a shortage of corporate cash is curtailing investment in technology. Profits have been robust and capital cheap.

Still, even as jobs rebounded from the depths of the Great Recession, overall wage growth has appeared sluggish, at times stagnant. It has seemed possible that the weakness in wages might be the canary in the automated coal mine, an early indication of a coming surge in technological unemployment. If humans are increasingly competing for jobs against automatons, of both the hardware and software variety, that might explain workers’ inability to reap wage gains from a tightening labor market — and it might presage a broad shift of work from people to machines. At some point, if automation continued to put downward pressure on pay, workers would simply give up trying to compete with technology. The robots would win.

But even here, there’s growing reason to doubt the conventional wisdom.•


Loved the long centerpiece of Garry Kasparov’s Deep Thinking, in which perhaps the greatest chess champion of all recreates his epic 1996 and 1997 matches with Deep Blue, the IBM program that ultimately toppled him–and by extension, us. The barrage of machinations employed by the computer company is fascinating and worthy of Cold War spooks, which makes this section read like an espionage thriller married to insightful sportswriting.

The rest of the book is an interesting meditation on the intelligent machines that increasingly surround us, consume us, though Kasparov’s argument that we should stop worrying and learn to love the “bomb” doesn’t completely convince because he gives short shrift to the many potential pitfalls.

A few random thoughts.

· · ·

While Steven Levy’s cover line, “The Brain’s Last Stand,” was a great way to sell his Newsweek article previewing the second match, it was also a simplification of a complex point. There’s no one instant when intelligent machines absolutely surpass us; no Turing Test or ego-deflating checkmate, no Watson win or Singularity moment can do the trick. It’s a gradual process. Apollo 11’s success, IBM’s victory and Deep Learning’s mysterious prowess are all part of an eerie landscape in which there is no Main Street. The landmarks are scattered and continually being built.

Kasparov makes this point himself in depicting the titanic contests as great theater and personally taxing but almost completely beside the point. He knew that even if he triumphed in ’97, the machines would soon far surpass their carbon-based competitors. Kasparov might have staved off IBM long enough to avoid being the one to “earn” the John Henry tag, but soon enough the number one player in the world, whoever that may have been, was going down.

· · ·

Early in the volume, on page 47, Kasparov tries to relate to people whose livelihoods, and sometimes communities, have been devastated by technological innovation (in tandem with globalization), arguing that “few people in the world know better than I do what it’s like to have your life’s work threatened by a machine.” Hmm, it would seem that a brilliant, world-famous, fairly well-off guy in his thirties would be fine even if he was shoved aside by AI, but maybe he was truly terrified like those who hope the plant in Ohio doesn’t kick them to the curb.

Later on that very same page, however, Kasparov writes: “Nor did I believe the apocalyptic predictions about what might happen if I lost a match to a machine. I was always optimistic about the future of chess in a digital age.” Whew, crisis averted!

The author sees a progression in which for a period of indeterminate length humans and machines collaborate on many forms of work until our silicon sisters take full control of these processes and we move on to other more important matters. That’s probably correct, but it’s not so likely to unfold as neatly off the page, especially since industries can rise and fall far more quickly during a technological boom.

Think how rapidly CDs went from the most successful format in music-business history to being almost worthless when the sounds disappeared into the 0s and 1s. Consider that Blockbuster and Polaroid and Fotomat went under during just the first wave of the Internet. Even if the aggregate job numbers don’t end up diminished, discomfiting displacement may become a permanent feature of life, as we’re all engaged in a never-ending game of musical chairs. That can’t be healthy for a society. From an economic standpoint, you wouldn’t want to be a nation that misses out on the Digital Age, but things could, and probably will, get messy. Some will be seated comfortably and many will fall to the floor.

· · ·

I’m working from memory, but I think Kasparov dedicates about three pages to the thorny problems of surveillance, privacy, hacking, etc. That’s not nearly adequate. As physical objects from cars to refrigerators to personal assistants are computerized and seamlessly integrated into our lives, these issues will become enormous. Actually, they already are. The author believes these complications to be fixable bugs.

The main problem with his reasoning is that it assumes there must be reasonable answers to vexing Digital Era questions. That’s not necessarily so. Perhaps there’s no taming the anarchy of a “smart” world that’s super-connected. Certainly Kasparov’s arch-nemesis Vladimir Putin wouldn’t have been able to influence the Brexit and U.S. Presidential votes without linked computers, those chaos agents. Mayhem may be baked so deeply into the new tools that the havoc is inseparable–and insuperable. It could even be that a highly technological society, a deeply connected and heavily sensored one, ultimately destroys itself. I don’t believe that scenario plausible, but it’s irresponsible not to consider it possible.

Those challenges are just the ones we’re aware of. Nobody knew a century ago that the internal combustion engine would soon create an existential threat. Tomorrow’s tools will be far more powerful and so probably will be their unintended consequences.

· · ·

In one passage, Kasparov asserts that his 1989 quote, in which he predicted AI would become world champion before a woman did, wasn’t sexist. Well, perhaps that’s so, but the suspicion seems more understandable if you know the context the author has omitted. If anyone in 1989 suspected Kasparov was deeply sexist, that’s because in 1989 Kasparov was deeply sexist.

From a Playboy Interview that year:

Playboy:

How about women chess players?

Garry Kasparov:

Well, in the past, I have said that there is real chess and women’s chess. Some people don’t like to hear this, but chess does not fit women properly. It’s a fight, you know? A big fight. It’s not for women. Sorry. She’s helpless if she has men’s opposition. I think this is very simple logic. It’s the logic of a fighter, a professional fighter. Women are weaker fighters.

There is also the aspect of creativity in chess. You have to create new ideas. That’s quite difficult, too. Chess is the combination of sport, art and science. In all these fields, you can see men’s superiority. Just compare the sexes in literature, in music or in art. The result is, you know, obvious. Probably the answer is in the genes.

Playboy:

Do you realize that you’re expressing a sexist point of view, and that Western women will be enraged by it?

Garry Kasparov:

Yes, but I’m not concerned. I’m sure that women can do many things better than men in many fields. I think it’s wrong to want to be compared all the time, to want to be equal in everything. Men and women are different.•

Two daughters and two decades later, Kasparov was far more enlightened when questioned by the same magazine:

Playboy:

Why are there relatively few women chess players?

Garry Kasparov:

Tradition. How many women composers are there? Architects? Things are changing in this. We have Judit Polgar, who proved a woman can make the top 10, though she didn’t come even close to number one.•

Okay, sexism would have been a more apt word choice than tradition, but I don’t blame Kasparov for wanting to recoil from his earlier misogyny, seeing how he’s apparently grown past it. But disappearing this failing removes an important lesson: Humans, like intelligent machines, can learn and grow in surprising ways.

· · ·

In “A Brutal Intelligence: AI, Chess, and the Human Mind,” Nicholas Carr reviews Kasparov’s title for the Los Angeles Review of Books, making interesting observations about the limits of blunt-force computing and the very nature of chess. Carr notes that our type of thinking will likely be beyond the reach of computers into the long-term future but worries that “brutally efficient calculations” will become more valued than the inexpressible nuances of human thought. An excerpt:

The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. If in the 1950s and 1960s the emphasis in the phrase “artificial intelligence” fell heavily on the word “intelligence,” today it falls with even greater weight on the word “artificial.”

Particularly fruitful has been the deployment of search algorithms similar to those that powered Deep Blue. If a machine can search billions of options in a matter of milliseconds, ranking each according to how well it fulfills some specified goal, then it can outperform experts in a lot of problem-solving tasks without having to match their experience or insight. More recently, AI programmers have added another brute-force technique to their repertoire: machine learning. In simple terms, machine learning is a statistical method for discovering correlations in past events that can then be used to make predictions about future events. Rather than giving a computer a set of instructions to follow, a programmer feeds the computer many examples of a phenomenon and from those examples the machine deciphers relationships among variables. Whereas most software programs apply rules to data, machine-learning algorithms do the reverse: they distill rules from data, and then apply those rules to make judgments about new situations.

In modern translation software, for example, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn’t require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator’s work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require a vast number of examples to work reliably, which only became possible with the explosion of online data. Kasparov quotes an engineer from Google’s popular translation program: “When you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”

The pragmatic turn in AI research is producing many such breakthroughs, but this shift also highlights the limitations of artificial intelligence. Through brute-force data processing, computers can churn out answers to well-defined questions and forecast how complex events may play out, but they lack the understanding, imagination, and common sense to do what human minds do naturally: turn information into knowledge, think conceptually and metaphorically, and negotiate the world’s flux and uncertainty without a script. Machines remain machines.

That fact hasn’t blunted the public’s enthusiasm for AI fantasies.•
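
For the code-curious, the inversion Carr describes is easy to see in miniature. Below is my own toy sketch (invented data, scikit-learn’s decision tree), not anything drawn from Kasparov’s book or Carr’s review: instead of a programmer writing rules, the machine is fed labeled examples, distills rules from them, and then applies those rules to a case it has never seen.

    # Editor's toy sketch of "distilling rules from data" (hypothetical data).
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Made-up training examples: [hour_of_day, is_weekend] -> opened_app (1/0)
    examples = [[9, 0], [13, 0], [20, 0], [10, 1], [22, 1], [7, 0]]
    labels = [1, 1, 0, 1, 0, 0]

    # Fit: the algorithm infers decision rules no programmer ever wrote.
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(examples, labels)
    print(export_text(model, feature_names=["hour", "weekend"]))

    # Apply the distilled rules to judge a new, unseen situation.
    print(model.predict([[11, 1]]))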


Despite the robot apocalypse we’ve been promised, statistics don’t show an increase in productivity or decrease in employment. Many of the jobs recently created have been lesser ones, but even wages have shown some rise at times over the last year. Perhaps the decline of the American middle class over the last 50 years has been largely a political result rather than a technological one? It would be tough to convince people living in former manufacturing strongholds, but it may be so.

Three possible reasons the numbers don’t reveal a coming widespread technological unemployment:

  1. The numbers aren’t able to accurately capture the new automated economy. Doubtful.
  2. Automation may be overhyped for the moment the way computers or the Internet or smartphones originally were, but soon enough it will make a dent in society that will be felt deeply. Possible.
  3. The impact of automation will be gradual and manageable, improving society while not creating what Yuval Harari indelicately describes as a “useless class.” Possible.

In a Rough Type post, Nicholas Carr argues that machines may be depressing wages but that their impact has otherwise been overstated. An excerpt:

I’m convinced that computer automation is changing the way people work, often in profound ways, and I think it’s likely that automation is playing an important role in restraining wage growth by, among other things, deskilling certain occupations and reducing the bargaining power of workers. But the argument that computers are going to bring extreme unemployment in coming decades — an argument that was also popular in both the 1950s and the 1990s, it’s worth remembering — sounds increasingly dubious. It runs counter to the facts. Anyone making the argument today needs to provide a lucid and rational explanation of why, despite years of rapid advances in robotics, computer power, network connectivity, and artificial intelligence techniques, we have yet to see any sign of a broad loss of jobs in the economy.•


In America, quantity matters.

Peter Thiel’s billions have impressed all manner of serious people, economists and social scientists and politicos, dollar signs making them somehow ignore that this “genius” was sure that there were WMDs in Iraq and certain that an unqualified sociopath should lead the nation. He may be smart about business, but he’s stupid about life, and it’s the kind of stupid that can get people killed. Thiel’s a rich man and a poor one.

His fellow Silicon Valley billionaire Mark Zuckerberg is also sanctified for large numbers. Not only does he have oodles of money, but reportedly close to two billion people are active on Facebook, that quasi-nation, which is part surveillance state and also the world’s largest sweatshop. 

Despite years of dubious business moves, comments and “social experiments,” Zuckerberg may not be a bad person, but he also isn’t necessarily a wise one, despite what the shallowest of scoreboards may say. His recent religious revival and 50-state “listening tour” provoked speculation that he’d watched another (perhaps) billionaire celebrity snake his way into the Oval Office and decided he wanted to get into the game. He certainly would be better than Trump, but maybe we should actually elect someone who’s qualified?

But why settle for a petty bureaucrat’s position like President when you can lord over a multi-national empire?

In Zuckerberg’s recent 5,700-word position paper, “Building Global Community,” he asserts that his company must lead the way in building an Earth-sized social fabric, something that doesn’t take into consideration that a) many of us want no part of Facebook, b) many of the users possess bigoted and anti-social views, c) having for-profit companies play such a role carries huge potential for abuse, and d) there’s no substitute for good government or actual (rather than virtual) political movements. Moreover, social media is as much a bane to democracy as a boon–and that may be a hopeful reading of its effects–so such an initiative may be akin to treating a poisoning victim with more poison.

The opening of Nicholas Carr’s outstanding Rough Type post about the Facebook founder’s massive missive, which eviscerates its “self-serving fantasy about social relations”:

The word “community” appears, by my rough count, 98 times in Mark Zuckerberg’s latest message to the masses. In a post-fact world, truth is approached through repetition. The message that is transmitted most often is the fittest message, the message that wins. Verification becomes a matter of pattern recognition. It’s the epistemology of the meme, the sword by which Facebook lives and dies.

Today I want to focus on the most important question of all: are we building the world we all want?

It’s a good question, though I’m not sure there is any world that we all want, and if there is one, I’m not sure Mark Zuckerberg is the guy I’d appoint to define it. And yet, from his virtual pulpit, surrounded by his 86 million followers, the young Facebook CEO hesitates not a bit to speak for everyone, in the first person plural. There is no opt-out to his “we.” It’s the default setting and, in Zuckerberg’s totalizing utopian vision, the setting is hardwired, universal, and nonnegotiable.

Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community.  …

Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial.

The reason the idea — that community-building on a planetary scale is practicable, necessary, and altogether good — did not seem controversial in the beginning was that Zuckerberg, like Silicon Valley in general, operated in a technological bubble, outside of politics, outside of history. Now that history has broken through the bubble and upset the algorithms, history must be put back in its place. Technological determinism must again be made synonymous with historical determinism.•


I read Sean Penn’s “El Chapo Speaks” at the beginning of 2016, and spent the rest of the year trying to absorb as many great articles as I could to erase from my mind the awful reporting and prose. “Espinoza is the owl who flies among falcons,” wrote the actor-director-poetaster. Yes, Sean, okay, but go fuck yourself.

The following 50 articles made me feel pretty good again. In time, I myself once more began to fly among the falcons.

Congratulations to all the wonderful writers who made the list. My apologies for not reading more small journals and sites, but the time and money of any one person, myself included, are limited.


1) “Latina Hotel Workers Harness Force of Labor and of Politics in Las Vegas” and 2) “A Fighter’s Hour of Need” (Dan Barry, New York Times).

As good as any newspaper writer–or whatever you call such people now–Barry reports and composes like a dream. The first piece has as good a kicker as anyone could come up with–even if life subsequently kicked back in a shocking way–and the second is a heartbreaker about the immediate aftermath of a 2013 boxing match in which Magomed Abdusalamov suffered severe brain damage.

Even when Barry shares a byline, I still feel sure I can pick out his sentences, so flawless and inviting they are. One example of that would be…

3) “An Alt-Right Makeover Shrouds the Swastikas,” by Barry, Serge F. Kovaleski, Julie Turkewitz and Joseph Goldstein.

An angle used to dismiss the idea that the Make America Great White Again message resonated with a surprising, depressing number of citizens has been to point out that some Trump supporters also voted for Obama. That argument seems simplistic. Some bigots aren’t so far gone that they can’t vote for a person of a race they dislike if they feel it’s in their best interests financially or otherwise. That is to say, some racially prejudiced whites voted for President Obama. Trump appealed to them to find their worst selves. Many did.

Likewise the Trump campaign emboldened far worse elements, including white nationalists and separatists and anti-Semites. Thinking they’d been perhaps permanently marginalized, these hate groups are now updating their “brand,” hiding yesterday’s swastikas and burning crosses and other “bad optics,” and referring to themselves not as neo-Nazis but by more vaguely appealing monikers like “European-American advocates.” It’s the same monster wrapped in a different robe, the mainstreaming of malevolence, and they won’t again be easily relegated to the fringe regardless of Trump’s fate.

This group of NYT journalists explores a beast awakened and energized by Trump’s ugly campaign. It’s a great piece, though we should all probably stop calling these groups by their preferred KKK 2.0 alias of “alt-right.”

4) “No, Trump, We Can’t Just Get Along” (Charles Blow, New York Times)

In the hours after America elected, if barely, a Ku Klux Kardashian, most pundits and talk-show hosts encouraged all to support this demagogue, as if we could readily forget that he was a racist troll who demanded the first African-American President show his birth certificate, a deadbeat billionaire who didn’t pay taxes or many of his contracted workers, a draft-dodger who mocked our POWs while praising Putin, a sexual predator who boasted about his assaults, a xenophobe who blamed Mexicans and Muslims, a bigot who had a long history of targeting African-Americans with the zeal of a one-man lynching bee. In a most passionate and lucid shot across the bow, Blow said “no way,” penning an instant classic, speaking for many among the disenfranchised majority. 

5) “Lunch with the FT: Burning Man’s Larry Harvey” (Tim Bradshaw, Financial Times)

If self-appointed Libertarian overlord Grover Norquist, a Harvard graduate with a 13-year-old’s understanding of government and economics, ever had his policy preferences enacted fully, it would lead to worse lifestyles and shorter lifespans for the majority of Americans. In fact, we now get to see many of his idiotic ideas played out in real-life experiments. He’s so eager to Brownback the whole country that he’s convinced himself, despite being married to a Muslim woman, that there are conservative bona fides in Trump’s Mussolini-esque stylings and suspicious math.

In 2014, Norquist made his way to the government-less wonderland known as Burning Man, free finally from those bullying U.S. regulations, the absence of which allows Chinese business titans to breathe more freely, if not literally. Norquist’s belief that the short-term settlement in the Nevada desert is representative of what the nation could be every day is no less silly than considering Spring Break a template for successful marriage. He was quoted as saying: “Burning Man is a refutation of the argument that the state has a place in nature.” Holy fuck, who passed him the peyote?

In his interview piece, Bradshaw broke bread in San Francisco with Harvey, co-founder of Burning Man and its current “Chief Philosophic Officer,” who speaks fondly of rent control and the Bernie-led leftward shift of the Democratic Party. Norquist would not approve, even if Harvey is a contradictory character, insisting he has a “conservative sensibility” and lamenting the way many involved in social justice fixate on self-esteem.

6) “The World Wide Cage” and 7) “Humans Have Long Wished to Fly Like Birds: Maybe We Shall” (Nicholas Carr, Aeon)

One of the best critics of our technological society keeps getting better.

The former piece is the introduction to Carr’s essay collection Utopia Is Creepy. The writer argues (powerfully) that we’ve defined “progress as essentially technological,” even though the Digital Age quickly became corrupted by commercial interests, and the initial thrill of the Internet faded as it became “civilized” in the most derogatory, Twain-ish use of that word. To Carr, what’s gained (access to an avalanche of information) is overwhelmed by what’s lost (our grip on reality). The critic applies John Kenneth Galbraith’s term “innocent fraud” to the Silicon Valley marketing of techno-utopianism.

You could extrapolate this thinking to much of our contemporary culture: binge-watching endless content, Pokémon Go, Comic-Con, fake Reality TV shows, reality-altering cable news, etc. Carr suggests we use the tools of Silicon Valley while refusing the ethos. Perhaps that’s possible, but I doubt you can separate such things.

The latter is a passage about biotechnology which wonders if science will soon move too fast not only for legislation but for ethics as well. The “philosophy is dead” assertion that’s persistently batted around in scientific circles drives me bonkers because we dearly need consideration about our likely commandeering of evolution. Carr doesn’t make that argument but instead rightly wonders if ethics is likely to be more than a “sideshow” when garages aren’t used to just hatch computer hardware or search engines but greatly altered or even new life forms. The tools will be cheap, the “creativity” decentralized, the “products” attractive. As Freeman Dyson wrote nearly a decade ago: “These games will be messy and possibly dangerous.”

8) “Calum Chace: Ask Me Anything” (Chace, Reddit)

The writer, an all-around interesting thinker, conducted an AMA based on his book, The Economic Singularity, which envisions a future–and not such a far-flung one–when human labor is a thing of the past. It’s certainly possible since constantly improving technology could make fleets of cars driverless and factories workerless. In fact, there’s no reason why they can’t also be ownerless. 

What happens then? How do we reconcile a free-market society with an automated one? In the long run, it could be a great victory for humanity, but getting from here to there will be bumpy.

9) “England’s Post-Imperial Stress Disorder” (Andrew Brown, Boston Globe)

Not being intimately familiar with the nuances of the U.K.’s politics and culture, I’m wary of assigning support for Brexit to ugly nativist tendencies, but it does seem a self-harming act provoked by the growing pains of globalism. It’s not nearly as dumb a move as a President Trump, for instance, but some of the same forces are at play, particularly when it comes to the pro-Brexit, anti-immigration UKIP party.

It’s not shocking that Britain and the U.S. are trying to dodge the arrival of a new day and greater competition, a time when empires can’t merely strike back at will. We’re richer now, we have better things, but the distribution is very uneven and we feel poor inside. For some, maybe a surprising number, blame must be assigned to the “others.” As Randy Newman sang: “The end of an empire is messy at best.”

10) “My President Was Black” (Ta-Nehisi Coates, The Atlantic)

It wasn’t the color of President Obama’s suit that so bothered his critics but the color of his skin. Sure, Bill Clinton was impeached and John Kerry swiftboated, but there was something so deeply disqualifying about the antagonism that faced 44, something beyond mere partisanship, which boiled over into Birtherism, interruptions during the State of the Union, denial of his Christian faith and vicious insults hurled at his gorgeous wife.

The old adage that black people have to be twice as good at a job as white people proved to be mathematically refutable: The Obamas were a million times better, and it wasn’t nearly enough for their detractors. When Obama even mildly suggested that institutional racism still existed, something he rarely did, he was labeled a “jerk” by prominent Republicans. Worse yet, his most overtly bigoted tormentor will succeed him in the White House. 

That raises an obvious question: If the perfect son isn’t good enough, then what kind of chance do his siblings have?

In a towering essay, Coates reflects on Obama’s history and the “fitful spasmodic years” of his White House tenure, which had pluses and minuses but were a gravity-defying time of true accomplishment which will never happen the same way again. In addition to macro ideas about race and identity, Coates’ writing on the Justice Department under this Administration is of particular importance.

11) “The Problem With Obama’s Faith in White America” (Tressie McMillan Cottom, The Atlantic)

Hope is usually audacious but sometimes misplaced.

Without that feeling of expectation in a country founded on white supremacy that has never erased institutional racism, Barack Hussein Obama would certainly have never been elected President of the United States, not once, let alone twice. But his hope has also served as an escape hatch for white Americans who wanted to not only ignore the past but also the present. By stressing the best in us, Obama overlooked the worst of us, and that worst has never gone away.

It’s doubtful he behaved this way merely due to political opportunism: Obama seems a true believer in America and the ideals it espouses but has never lived up to. I love him and Michelle and think they’re wonderful people, but the nation has never been as good as they are, and even on a good day I’m unsure we even aspire to be. A painfully true Atlantic essay by Cottom meditates on these ideas.

12) “We’re Coming Close to the Point Where We Can Create People Who Are Superior to Others” (Hannah Devlin, The Guardian)

Devlin interviews novelist Kazuo Ishiguro, who wonders if liberal democracy will be doomed by a new type of wealth inequality, the biological kind, in which gene editing and other tools make enhancement and improved health available only to the haves. Ishiguro isn’t a fatalist on the topic, encouraging more public engagement.

Some believe exorbitantly priced technologies created for the moneyed few will rapidly decrease in price and make their way inside everyone’s pockets (and bodies and brains), the same distribution path blazed by consumer electronics. That’s possible but certainly not definite. Of course, as the chilling political winds of 2016 have demonstrated, liberal democracy may be too fragile to even survive to that point.

13) “The Privacy Wars Are About to Get a Whole Lot Worse” (Cory Doctorow, Locus Magazine)

Read the fine print. That’s always been good advice, but it’s never been taken seriously when it comes to the Internet, a fast-moving, seemingly ephemeral medium that doesn’t invite slowing down to contemplate. So companies attach a consent form about cookies to their sites and apps. No one reads it, yet clicking “agree” quietly strips away any legal recourse when your laptop or smartphone is plundered for all your personal info, giving surveillance capitalism a free pass.

In an excellent piece, Doctorow explains how this oversight, which has already had serious consequences, will snake its way into every corner of our lives once the Internet of Things turns every item into a computer, cars and lamps and soda machines and TV screens. “Notice and consent is an absurd legal fiction,” he writes, acknowledging that it persists despite its ridiculous premise and invasive nature.

14) “The Green Universe: A Vision” (Freeman Dyson, New York Review of Books)

I’ve probably enjoyed Dyson’s “pure speculation” writings as much as anything I’ve read during my life, particularly the Imagined Worlds lecture and his NYRB essays and reviews. In this piece, the physicist goes far beyond his decades-old vision of an “Astrochicken” (a spacecraft that’s partly biological), conjuring a baseball-sized, biotech Noah’s Ark that can “seed” the Universe with millions of species of life. “Sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth,” he writes. It’s a spectacular dream, though we may bury ourselves beneath water or ash long before it can come to fruition, especially with the threat of climate change.

15) “The Augmented Human Being: A Conversation With George Church” (Edge)

CRISPR’s surprising success has swept us into an age when it all seems possible: the manipulation of humans, animals and plants, even perhaps of extinct species. Which way forward?

The geneticist Church, who has long had visions of rejuvenated woolly mammoths and augmented humans, realizes some bristle at manipulation of the Homo sapiens germline because it calls into question all we are, but apart from metaphors, there are also very real practical concerns over the games getting messy and possibly dangerous. The good (diseases being edited out of existence, organs being tailored to transplantees, etc.) shouldn’t be dreams permanently deferred, but it is difficult to understand how bad applications will be contained. Of course, the negative will probably unfold regardless, so we owe it to ourselves to pursue the positive, if carefully. Church himself is on board with a cautious approach but not one that’s unduly so.

16) “The Empty Brain” (Robert Epstein, Aeon)

Since the 16th century, the human brain has often been compared to a machine of one sort or another, with it being likened to a computer today. The idea that the brain is a machine seems true, though the part about gray matter operating in a similar way to the gadgets that currently sit atop our laps or in our palms is likely false. 

In a wonderfully argumentative and provocative essay, psychologist Epstein says this reflexive labeling of human brains as information processors is a “story we tell to make sense of something we don’t actually understand.” He doesn’t think the brain is tabula rasa but asserts that it doesn’t store memories like an Apple would.

It’s a rich piece of writing full of ideas and examples, though I wish Epstein would have relied less on the word “never” (e.g., “we will never have to worry about a human mind going amok in cyberspace”), because while he’s almost certainly correct about the foreseeable future, given enough time no one knows how the machines in our heads and pockets will change.

17) “North Korea’s One-Percenters Savor Life in ‘Pyonghattan’” (Anna Fifield, The Washington Post)

Even in Kim Jong-un’s totalitarian state there are haves and have-nots who experience wildly different lifestyles. In the midst of the politically driven arrests and murders, military parades and nuclear threats, there exists a class of super rich kids familiar with squash courts, high-end shopping and fine dining. “Pyonghattan,” it’s called, this sphere of Western-ish consumerist living, which is, of course, just a drop in the bucket when compared to the irresponsible splurges of the Rodman-wrangling “Outstanding Leader.” Still weird, though.

18) “Being Leonard Cohen’s Rabbi” (Rabbi Mordecai Finley, Jewish Journal)

The poet of despair, who lived for a time in a monastery, spent some of his last decade discussing spirituality and more earthly matters with the Los Angeles-based rabbi, who explains how the Jewish tradition informed Cohen’s work. “We shared a common language, a common nightmare,” he writes. One remark the prophet of doom made to Finley hits especially hard with the demons awakened during this election season: “You won’t like what comes next after America.”

19) “Five Books Interview: Ellen Wayland-Smith Discusses Utopias” (Five Books)

In a smart Q&A, Wayland-Smith, author of Oneida, talks about a group of titles on the topic of Utopia. She surmises that attempts at such communities aren’t prevalent like they were in the 1840s or even the 1960s because most of us realize they don’t normally end well, whether we’re talking about the bitter financial and organizational failures of Fruitlands and Brook Farm or the utter madness of Jonestown. That’s true on a micro-community level, though I would argue that there have never been more people dreaming of large-scale Utopias–and corresponding dystopias–than there are right now. The visions have just grown significantly in scope.

In macro visions, Silicon Valley technologists speak today of an approaching post-scarcity society, an automated, quantified, work-free world in which all basic needs are met and drudgery has disappeared into a string of zeros and ones. These thoughts were once the talking points of those on the fringe, say, a teenage guru who believed he could levitate the Houston Astrodome, but now they (and Mars settlements, a-mortality and the computerization of every object) are on the tongues of the most important business people of our day, billionaires who hope to shape the Earth and beyond into a Shangri-La. 

Perhaps much good will come from these goals, and maybe a few disasters will be enabled as well. 

20) “Sam Altman’s Manifest Destiny” (Tad Friend, New Yorker)

Friend’s “Letter from California” articles in the New Yorker are probably the long-form journalism I most anticipate, because he’s so good at understanding distinct milieus and those who make them what they are, revealing the micro and macro of any situation or subject and sorting through psychological motivations that drive the behavior of individuals or groups. To put it concisely: He gets ecosystems.

The writer’s latest effort, a profile of Y Combinator President Sam Altman, a stripling yet a strongman, reveals someone who has almost no patience for or interest in most people yet wants to save the world–or something.

It’s not a hit job, as Altman really has no intent to offend or injure, but it vivisects Silicon Valley’s Venture Capital culture and the outrageous hubris of those insulated inside its wealth and privilege, the ones who nod approvingly while watching Steve Jobs use Mahatma Gandhi’s image to sell wildly marked-up electronics made by sweatshop labor, and believe they also can think different.

Envisioning tomorrow, Altman sees perhaps a post-scarcity, automated future where a few grand a year of Universal Basic Income can buy the jobless a bare existence (certainly not the big patch of Big Sur he owns), or maybe complete societal collapse. Either or. More or less. If the latter occurs, the VC wunderkind plans to flee the carnage by jetting to the safety of his New Zealand spread with Peter Thiel, who has a moral blind spot reminiscent of Hitler’s secretary. A grisly death seems preferable.

21) “The Secret Shame of Middle-Class Americans” (Neal Gabler, The Atlantic)

The term “middle class” was not always a nebulous one in America. It meant that you had arrived on solid ground and only the worst luck or behavior was likely to shake the earth beneath your feet. That’s become less and less true for four decades, as a number of factors (technology, globalization, tax codes, the decline of unions, the 2008 economic collapse, etc.) have conspired to hollow out this hallowed ground. You can’t arrive someplace that barely exists.

Middle class is now what you think you would be if you had any money. George Carlin’s great line that “the reason they call it the American Dream is because you have to be asleep to believe it” seems truer every day. It’s not so much a fear of falling anymore, but the fear of never getting up, at least not within the current financial arrangement. Those hardworking, decent people you see every day? They’re just as afraid as you are. They are you.

In the spirit of the great 1977 Atlantic article “The Gentle Art of Poverty” and William McPherson’s recent Hedgehog Review piece “Falling,” the excellent writer and film critic Gabler has penned an essay about his “secret shame” of being far poorer than appearances would indicate.

22) “Nate Parker and the Limits of Empathy” (Roxane Gay, The New York Times)

We have to separate the art and the artist or we’ll end up without a culture, but it’s not always so easy to do. There was likely no more creative person who ever walked the Earth than David Bowie, whose death kicked off an awful 2016, yet the guy did have sex with children. And Pablo Picasso beat women, Louis-Ferdinand Céline was an anti-Semite, Anne Sexton molested her daughter and so on. In Gay’s smart, humane op-ed, she looks at the controversy surrounding Birth of a Nation writer-director Parker, realizing she can’t compartmentalize her feelings about creators and creations. Agree with her or not, but it’s certainly a far more suitable response than Stephen Galloway’s shockingly amoral Hollywood Reporter piece on the firestorm.

23) “The Case Against Reality” (Amanda Gefter, The Atlantic)

A world in which Virtual Reality is in wide use would present a different way to see things, but what if reality is already not what we think it is? It’s usually accepted that we don’t all see things exactly the same way–not just metaphorically–and that our individual interpretation of stimuli is more a rough cut than an exact science. It’s a guesstimate. But things may be even murkier than we believe. Gefter interviews cognitive scientist Donald D. Hoffman, who thinks our perception isn’t even a reliable simulacrum, that what we take in is nothing like what actually is. It requires just a few minutes to read and will provoke hours of thought.

24) “Autocracy: Rules for Survival” (Masha Gessen, New York Review of Books)

For many of us the idea of a tyrant in the White House is unthinkable, but for some that’s all they can think about. These aren’t genuinely struggling folks in the Rust Belt whose dreams have been foreclosed on by the death rattle of the Industrial Age and who made a terrible decision that will only deepen their wounds, but a large number of citizens with fairly secure lifestyles who want to unleash their fury on a world not entirely their own anymore.

I’ve often wondered how Nazi Germany was possible, and I think this election has finally provided me with the answer. There has to be pervasive prejudice, sure, and it helps if there is a financially desperate populace, but I also think it’s the large-scale revenge of mediocrity, of people wanting to establish an order where might, not merit, will rule.

Gessen addresses the spooky parallels between Russia and this new U.S. as we begin what looks to be a Trump-Putin bromance. Her advice to those wondering if they’re being too paranoid about what may now occur: “Believe the autocrat.”

25) “The Future of Privacy” (William Gibson, New York Times)

What surprises me most about the new abnormal isn’t that surveillance has entered our lives but that we’ve invited it in.

For a coupon code or a “friend,” we’re willing to surrender privacy to a corporate state that wants to engage us, know us, follow us, all to better commodify us. In fact, we feel sort of left out if no one is watching.

It may be that in a scary world we want a brother looking after us even if it’s Big Brother, so we’ve entered into an era of likes and leaks, one that will only grow more profoundly challenging when the Internet of Things becomes the thing.

In a wonderful essay, Gibson considers privacy, history and encryption, those thorny, interrelated topics.

26) “Why You Should Believe in the Digital Afterlife” (Michael Graziano, The Atlantic)

When Russian oligarch Dmitry Itskov vows that by 2045 we’ll be able to upload our consciousness into a computer and achieve a sort of immortality, I’m perplexed. Think about the unlikelihood: It’s not a promise to just create a general, computational brain–difficult enough–but to precisely simulate particular human minds. That ups the ante by a whole lot. While it seems theoretically possible, this process may take a while.

The Princeton neuroscientist Graziano plots the steps required to encase human consciousness, to create a second life that sounds a bit like Second Life. He acknowledges opinions will differ over whether we’ve generated “another you” or some unsatisfactory simulacrum, a mere copy of an original. Graziano’s clearly excited, though, by the possibility that “biological life [may become] more like a larval stage.”

27) “Big Data, Google and the End of Free Will” (Yuval Noah Harari, The Financial Times)

First we slide machines into our pockets, and then we slide into theirs.

As long as humans have roamed the Earth, we’ve been part of a biological organism larger than ourselves. At first, we were barely connected parts, but gradually we became a Global Village. In order for that connectivity to become possible, the bio-organism gave way to a technological machine. As we stand now, we’re moving ourselves deeper and deeper into a computer, one with no OFF switch. We’ll be counted, whether we like it or not. Some of that will be great, and some not.

The Israeli historian examines this new normal, one that’s occurred without close study of what it will mean for the cogs in the machine–us. As he writes, “humanism is now facing an existential challenge and the idea of ‘free will’ is under threat.”

28) “How Howard Stern Owned Donald Trump” (Virginia Heffernan, Politico Magazine)

Whether it’s Howard Stern or that other shock jock Vladimir Putin, Donald Trump’s deep-seated need for praise has made him a mark for those who know how to push his buttons. In the 1990s, when the hideous hotelier was at a career nadir, he was a veritable Wack Packer, dropping by the Stern show to cruelly evaluate women and engage in all sorts of locker-room banter. Trump has dismissed these un-Presidential comments as “entertainment,” but his vulgarity off-air is likewise well-documented. He wasn’t out of his element when with the King of All Media but squarely in it. And it wasn’t just two decades ago. Up until 2014, Trump was still playing right along, allowing himself to be flattered into conversation he must have realized on some level was best avoided.

For Stern, who’s become somewhat less of an asshole as Trump has become far more of one, the joke was always that ugly men were sitting in judgement of attractive women. The future GOP nominee, however, was seemingly not aware he was a punchline. He’s a self-described teetotaler who somehow has beer goggles for himself. During this Baba Booey of an election season, Heffernan wrote knowingly of the dynamic between the two men.

29) “I’m Andrew Hessel: Ask Me Anything” (Hessel, Reddit)

If you like your human beings to come with fingers and toes, you may be disquieted by this undeniably heady AMA conducted by Hessel, a futurist and “biotechnology catalyst” at Autodesk. The researcher fields questions about a variety of flaws and illnesses plaguing people that biotech may be able to address, even eliminate. Of course, depending on your perspective, humanness itself can be seen as a failing, something to be “cured.”

30) “What If the Aliens We Are Looking For Are AI?” (Richard Hollingham, BBC Future)

If there are aliens out there, Sir Martin Rees feels fairly certain they’re conscious machines, not oxygen-hoarding humans. It’s just too inhospitable for carbon beings to travel beyond our solar system. He allows that perhaps cyborgs, a form of semi-organic post-humans, could possibly make a go of it. But that’s as close a reflection of ourselves as we may be able to see in space. Hollingham explores this theory, wondering if a lack of contact can be explained by the limits we put on our search by expecting a familiar face in the final frontier.

31) “We Are Nowhere Close to the Limits of Athletic Performance” (Stephen Hsu, Nautilus)

If performance-enhancing drugs weren’t at all dangerous to the athletes using them, should they be banned?

I bet plenty of people would say they should, bowing before some notion of competitive purity which has never existed. It’s also a nod to “god-given ability,” a curious concept in an increasingly agnostic world. Why should those born with the best legs and lungs be the fastest? Why should the ones lucky enough to have the greatest gray matter at birth be our best thinkers? Why should those fortunate to initially get the healthiest organs live the longest? It doesn’t make much sense to hold back the rest of the world out of respect for a few winners of the genetics lottery.

Hsu relates how genetic engineering will supercharge athletes and the rest of us, making widely available the gifts of Usain Bolt, who gained his from hard work, sure, but also a twist of fate. In fact, extrapolating much further, he believes “speciation seems a definite possibility.”

32) “How Democracies Fall Apart” (Andrea Kendall-Taylor and Erica Frantz, Foreign Affairs)

If we are hollow men (and women), American liberty, that admittedly unevenly distributed thing, may be over after 240 years. And it could very well end not with a bang but a whimper.

Those waiting for the moment when autocracy topples the normal order of things are too late. Election Day was that time. It’s not guaranteed that the nation transforms into 1930s Europe or that we definitely descend into tyranny, but the conditions have never been more favorable in modern times for the U.S. to capitulate to autocracy. The creeps are in office, and the creeping will be a gradual process. Don’t wait for an explosion; we’re living in its wake.

Kendall-Taylor and Frantz analyze how quietly freedom can abandon us.

33) Khizr Khan’s Speech to the 2016 Democratic National Convention (Khan, DNC)

Ever since Apple’s Think Different ad in 1997, the one in which Steve Jobs used Gandhi’s image to sell marked-up consumer electronics made by sweatshop labor, Silicon Valley business titans have been celebrated the way astronauts used to be. Jobs, who took credit for that advertising campaign which someone else created, specifically wondered why we put on a pedestal those who voyage into space when he and his clever friends were changing the world–or something–with their gadgets. He believed technologists were the best and brightest Americans. He was wrong.

Some of the Valley’s biggest names filed dourly into Trump Tower recently in a sort of reverse perp walk. It was the same sad spectacle as Al Gore’s pilgrimage, which was answered with Scott Pruitt, climate-change denier, being chosen EPA Chief. Perhaps they made the trek on some sort of utilitarian impulse, but I would guess there was also some element of self-preservation, not an unheard-of compromise for those who see their corporations as if they were countries, not only because of their elephantine “GDPs” but also because of how they view themselves. I don’t think they’re all Peter Thiel, an emotional leper and intellectual fraud who now gets to play out his remarkably stupid theories in a large-scale manner. I’ve joked that Thiel has a moral blind spot reminiscent of Hitler’s secretary, but the truth is probably far darker.

Far more impressive would have been if Musk, Cook, Page, Sandberg, Bezos and the rest had stopped downstairs in front of the building and read a statement saying that while they would love to aid any U.S. President, they could not in this case because the President-Elect has displayed vicious xenophobia, misogyny and callous disregard for non-white people throughout the campaign and in the election’s aftermath. He’s shown totalitarian impulses and disdain for the checks and balances that make the U.S. a free country. In fact, with his bullying nastiness he continues to double down on his prejudices, made very clear not only by his words but by his cabinet appointments. They could have stated that their dream for the future doesn’t involve using Big Data to spy on Muslims and Mexicans or programming 3D printers to build internment camps on Mars. They might have noted that Steve Bannon, whom Trump chose as his Chief Strategist, just recently said that there were too many Asian CEOs in Silicon Valley, revealing his white-nationalist ugliness yet again. They could have refused to normalize Trump’s odious vision. They could have taken a stand.

They didn’t because they’re not our absolute finest citizens. Khizr and Ghazala Khan, who understand the essence of the nation in a way the tech billionaires do not, more truly represent us at our most excellent. They possess a wisdom and moral courage that’s as necessary as the Constitution itself. The Silicon Valley folks lack these essential qualities, and without them, you can’t be called our best and brightest.

And maybe Khan’s DNC speech is our ultimate Cassandra moment, when we didn’t listen, or maybe we did but when we looked deep inside for our better angels we came up empty. Regardless, he told the truth beautifully and passionately. When we went low, he went high.

34) “The Perfect Weapon: How Russian Cyberpower Invaded the U.S.” (Eric Lipton, David E. Sanger and Scott Shane, The New York Times)

It was thought that the Russian hacking of the U.S. Presidential election wasn’t met with an immediate response because no one thought Trump really had a chance to win, but the truth is the gravity of this virtual Watergate initially took even many veteran Washington insiders by surprise. This great piece of reportage provides deep and fascinating insight into one of the jaw-dropping scandals of an outrageous election season, which has its origins in the 1990s.

35) “Goodbye to Barack Obama’s World” (Edward Luce, The Financial Times)

“He must be taken seriously,” Luce wrote in the Financial Times in December 2015 of Donald Trump, as the anti-politician trolled the whole of America with his Penthouse-Apartment Pinochet routine, which seems to have been more genuine than many realized.

Like most, the columnist believed several months earlier that the Reality TV Torquemada was headed for a crash, though he rightly surmised the demons Trump had so gleefully and opportunistically awakened, the vengeful pangs of those who longed to Make America Great White Again, were not likely to dissipate.

But the dice were kind to the casino killer, and a string of accidents and incidents enabled Trump and the mob he riled to score enough Electoral College votes to turn the country, and world, upside down. It’s such an unforced error, one which makes Brexit seem a mere trifle, that it feels like we’ve permanently surrendered something essential about the U.S., that more than an era has ended.

In this post-election analysis, Luce looks forward for America and the whole globe and sees possibilities that are downright ugly.

36) “The Writer Who Was Too Strong To Live” (Dave McKenna, Deadspin)

A postmortem about Jennifer Frey, a journalistic prodigy of the 1990s who burned brilliantly before burning out. A Harvard grad who was filing pieces for newspapers before she was even allowed to drink–legally, that is–Frey was a full-time sportswriter for the New York Times by 24, out-thinking, out-hustling and out-filing even veteran scribes at a clip that was all but impossible. Frey seemed to have it all and was positioned to only get more.

Part of what she had, though, that nobody knew about, was bipolar disorder, which she self-medicated with a sea of alcohol. Career, family and friends gradually floated away, and she died painfully and miserably at age 47. The problem with formidable talent as much as with outrageous wealth is that it can be forceful enough to insulate a troubled soul from treatment. Then, when the fall finally occurs, as it must, it’s too late to rise once more.

37) “United States of Paranoia: They See Gangs of Stalkers” (Mike McPhate, The New York Times)

Sometimes mental illness wears the trappings of the era in which it’s experienced. Mike Jay has written beautifully in the last couple of years about such occurrences attending the burial of Napoleon Bonaparte and the current rise of surveillance and Reality TV. The latter is something of a Truman Show syndrome, in which sick people believe they’re being observed, that they’re being followed. To a degree, they’re right, we all are under much greater technological scrutiny now, though these folks have a paranoia which can drive such concerns into crippling obsessions.

Because we’re all connected now, the “besieged” have found one another online, banding together as “targeted individuals” who’ve been marked by the government (or some other group entity) for observation, harassment and mind control. McPhate’s troubling article demonstrates that the dream of endless information offering lucidity has been dashed for a surprising number of people, that the inundation of data has served to confuse rather than clarify. These shaky citizens resemble those with alien abduction stories, except they seem to have been “shanghaied” by the sweep of history.

38) “The Long-Term Jobs Killer Is Not China. It’s Automation.” (Claire Cain Miller, The New York Times)

Many people nowadays wonder what will replace capitalism, but I believe capitalism will be just fine.

You and me, however, we’re fucked.

The problem is that an uber-technologized version of capitalism may not require as many of us or value as highly those who’ve yet to be relieved of their duties. Perhaps a thin crust at the very top will thrive, but without sound policy the rest may be Joads with smartphones. In this scenario, we’d be tracked and commodified, given virtual trinkets rather than paid. Our privacy, like many of our jobs, will disappear into the zeros and ones.

While the orange supremacist was waving his penis in America’s face during the campaign, the thorny question of what to do should widespread automation be established was left unexplored. That’s terrifying, since more and more outsourcing won’t refer to work moved beyond borders but beyond species. Certainly great investment in education is required, but that won’t likely be enough. Not every freshly unemployed taxi driver can be upskilled into a driverless car software engineer. There’s not enough room on that road.

Miller, a reporter who understands both numbers and people in a way few do, analyzes how that shift is already underway.

39) “Nothing To Fear But Fear Itself” (Sasha Von Oldershausen, Texas Monthly)

Surveillance is a murky thing almost always attended by self-censorship, quietly encouraging citizens to abridge their communication because perhaps someone is watching or listening. It’s a chilling of civil rights that happens in a creeping manner. Nothing can be trusted, not even the mundane, not even your own judgment. That’s the goal, really, of such a system–that everyone should feel endlessly observed.

The West Texas border reporter finds similarities between her stretch of America, which feverishly focuses on security from intruders, and her time spent living under theocracy in Iran.

40) “Madness” (Eyal Press, The New Yorker)

“By the nineties, prisons had become America’s dominant mental-health institutions,” writes Press in this infuriating study of a Florida correctional facility in which guards tortured, brutalized and even allegedly murdered inmates–and employed retaliatory measures against mental health workers who complained. Prison reform is supposedly one of those issues that has bipartisan support, but very little seems to get done in rehabilitating a system that warehouses many nonviolent offenders and mentally ill people among those who truly need to be incarcerated. It seems a breakdown of the institution but is more likely a perpetuation of business as it was intended to be. Either way, the situation needs all the scrutiny and investigation journalists can muster.

41) “It May Not Feel Like Anything To Be an Alien” (Susan Schneider, Nautilus)

Until deep into the twentieth century, most popular dreams of ETs centered on biology. We wanted new friends that reminded us of ourselves or were even cuter. When we accepted we had no Martian doppelgangers, a dejected resignation set in. Perhaps some sort of simple cellular life existed somewhere, but what thin gruel to digest.

Then a new reality took hold: Maybe advanced intelligence exists in space as silicon, not carbon. It’s postbiological.

If there are aliens out there, Sir Martin Rees has argued, maybe they’re conscious machines, not oxygen-hoarding humans, since it’s just too inhospitable for beings like us to travel beyond our solar system. Cyborgs, a form of semi-organic post-humans, could possibly make a go of it, but that’s as close a reflection of ourselves as we may be able to see in space.

Soon enough, that may be true as well on Earth, a relatively young planet on which intelligence may be in the process of shedding its mortal coil. Another possibility: Perhaps intelligence is also discarding consciousness.

Schneider’s smart article asserts that “soon, humans will no longer be the measure of intelligence on Earth” and tries to surmise what that transition will mean.

42) “Schadenfreude with Bite” (Richard Seymour, London Review of Books)

The problem with anarchy is that it has a tendency to get out of control.

In 2013, Eric Schmidt, the most perplexing of Googlers, wrote (along with Jared Cohen) the truest thing about our newly connected age: “The Internet is the largest experiment involving anarchy in history.”

Yes, indeed.

California was once a wild, untamed plot of land, and when people initially flooded the zone, it was exciting if harsh. But then, soon enough: the crowds, the pollution, the Adam Sandler films. The Golden State became civilized with laws and regulations and taxes, which was a trade-off but one that established order and security. The Web has been commodified but never truly domesticated: the rules don’t apply, yet it contains all the smog and noise of the developed world. Like Los Angeles without the traffic lights.

Our new abnormal has played out for both better and worse. The fan triumphed over the professional, a mixed development that, yes, spread greater democracy on a surface level, but also left truth attenuated. Into this unfiltered, post-fact, indecent swamp slithered the troll, that witless, cowardly insult comic.

The biggest troll of them all, Donald Trump, the racist opportunist who stalked our first African-American President demanding his birth certificate, is succeeding Obama in the Oval Office, which is terrible for the country if perfectly logical for the age. His Lampanelli-Mussolini campaign also emboldened all manner of KKK 2.0, manosphere and neo-Nazi detritus in their own trolling, as they used social media to spread discombobulating disinformation meant to confuse and distract so hate could take root and grow. No water needed; bile would do.

In this wonderfully written essay, Seymour analyzes the discomfiting age of the troll.

43) “An American Tragedy” (David Remnick, The New Yorker)

It happened here, and Remnick, who spent years covering the Kremlin and many more thinking about the White House, was perfectly prepared to respond to a moment he hoped would never arrive. As the unthinkable was still unfolding and most felt paralyzed by the American embrace of a demagogue, the New Yorker EIC urgently warned of the coming normalization of the incoming Administration, instantly drawing a line that allowed for myriad voices to demand decency and insist on truth and facts, which is our best safeguard against the total deterioration of liberal governance.

44) “This Is New York in the Not-So-Distant Future” (Andrew Rice, New York)

Some sort of survival mechanism allows us to forget the full horror of a tragedy, and that’s a good thing. That fading of facts makes it possible for us to go on. But it’s dangerous to be completely amnesiac about disaster.

Case in point: In 2014, Barry Diller announced plans to build a lavish park off Manhattan at the pier where Titanic survivors came to shore. Dial back a little over two years before that to another waterlogged disaster, when Hurricane Sandy struck the city, and imagine such an island scheme even being suggested then. The wonder at that point was whether Manhattan was long for this world. Diller’s designs sound not much different from a captain of a supposedly unsinkable ship ordering a swimming pool built on the deck just after hitting an iceberg.

Rice provides an excellent profile of scientist Klaus Jacob, who believes NYC, as we know it, has no future. The academic could be wrong, but if he isn’t, his words about the effects of Irene and Sandy are chilling: “God forbid what’s next.”

45) “The Newer Testament” (Robyn Ross, Texas Monthly)

A Lone Star State millennial using apps and gadgets to disrupt Big Church doesn’t really seem odder than anything else in this hyperconnected and tech-happy entrepreneurial age, when the way things have been is threatened at every turn. At Experience Life in Lubbock, Soylent has yet to replace wine and there are no Virtual Reality confessionals, but self-described “computer nerd” Chris Galanos has done his best to take the “Old” out of the Old Testament with his buzzing, whirring House of God 2.0. Is nothing sacred anymore?

46) “The New Nationalism Of Brexit And Trump Is A Product Of The Digital Age” (Douglas Rushkoff, Fast Company)

“We are flummoxed by today’s nationalist, regressively anti-global sentiments only because we are interpreting politics through that now-obsolete television screen,” writes Rushkoff in this excellent piece about the factious nature of the Digital Age. The post-TV landscape is a narrowcasted one littered with an infinite number of granular choices and niches. It’s empowering in a sense, an opportunity to vote “Leave” to everything, even a future that’s arriving regardless of popular consensus. It’s a far cry from not that long ago when an entire world sat transfixed by Neil Armstrong’s giant leap. Now everyone is trying to land on the moon at the same time–and no one can agree where it is. It’s more democratic this way, but maybe to an untenable degree, perhaps to the point where it’s a new form of anarchy.

47) “The Incredible Fulk” (Alexandra Suich, The Economist 1843)

The insanity of our increasingly scary wealth inequality is chronicled expertly in this richly descriptive article, even though it seems in no way intended as a hit piece. The title refers to Ken Fulk, Silicon Valley’s go-to “lifestyle designer,” who charges billionaires millions to create loud interiors, rooms stuffed with antique doors from shuttered mental institutions and musk-ox taxidermy, intended to “evoke feelings” or some such shit.

As the article says: “His spaces, when completed, have a theatrical quality to them, which Fulk plays up. Once he’s finished a project he often brings clients to their homes to show them the final product, a ceremony which he calls the ‘big reveal.’ For the Birches’ home in San Francisco, he hired men dressed as beefeaters to stand outside the entrance and musicians to play indoors. For another set of clients in Palm Springs, he hired synchronized swimmers, a camel and an impersonator to dress up and sing like Dean Martin.” It’s all good, provided a bloody revolution never occurs.

Fulk acknowledges a “tension between high and low” in his work. Know what else has tension? Nooses.

48) “Truth Is a Lost Game in Turkey. Don’t Let the Same Thing Happen to You.” (Ece Temelkuran, The Guardian)

Nihilism is sometimes an end but more often a means.

Truth can be fuzzy and facts imprecise, but an honest pursuit of these precious goods allows for a basic decency, a sense of order. Bombard such efforts for an adequate length of time, convince enough people that veracity and reality are fully amorphous, and opportunities for mischief abound.

Break down the normal rules (written and unwritten ones), create an air of confusion with shocking behaviors and statements, blast an opening where anything is possible–even “unspeakable things”–and a democracy can fall and tyranny rise. The timing has to be right, but sooner or later that time will arrive.

Has such a moment come for America? The conditions haven’t been this ripe for at least 60 years, and nothing can now be taken for granted.

Temelkuran explains how Turkey became a post-truth state, a nation-sized mirage, and how the same fate may befall Europe and the U.S. She certainly shares my concerns about the almost non-stop use of the word “elites” to neutralize the righteous into paralysis.

49) “Prepping for Doomsday: Bunkers, Panic Rooms, and Going Off the Grid” (Clare Trapasso, Realtor.com)

Utter societal collapse in the United States may not occur in the immediate future, but it’s certainly an understandable time for a case of the willies. In advance of the November elections, the bunker business boomed, as some among us thought things would soon fall apart and busied themselves counting their gold coins and covering their asses. In a shocking twist, the result of the Presidential election has calmed many of the previously most panicked among us and activated the fears of the formerly hopeful.

50) “The 100-Year-Old Man Who Lives in the Future” (Caroline Winter, Bloomberg Businessweek)

Jacque Fresco, one of those fascinating people who walk through life building a world inside their heads, hoping it eventually influences the wider one, is now into his second century of life. A futurist and designer who’s focused much of his work on sustainable living, technology and automation, Fresco is the brains behind the Venus Project, which encourages a post-money, post-scarcity, post-politician utopia. He’s clearly a template for many of today’s Silicon Valley aspiring game-changers.

Winter traveled to Middle-of-Nowhere, Florida (pop: Fresco + girlfriend and collaborator Roxanne Meadows), to write this smart portrait of the visionary after ten decades of reimagining the world according to his own specifications. He doesn’t think the road to a computer-governed utopia will be smooth, however. As Winter writes: “Once modern life gets truly hard, Fresco believes there will be a revolution that will clear the way for the Venus Project to be built. ‘There will be a lot of people getting shot, including me,’ he says wryly.” Well, he’s had a good run.•



Jeff Jarvis, theorist or something, was one of the most gleeful of public figures in celebrating the demise of traditional media. Having made his bones in the business, he wanted the new tools to feast on the flesh of print publications and network TV, believing there would emerge a democratic revolution. In ways he couldn’t anticipate, he was correct.

Jarvis grew apoplectic as a Trump Presidency seemed increasingly possible, spending great personal time volunteering for Hillary Clinton in Pennsylvania and making desperate appeals to traditional media personalities like Howard Stern, hoping, belatedly, that the new abnormal could somehow be tamed by phone banks and talk radio. Not possible. The ethical standards and common decency that had washed away more easily than ink helped make sure of that. What seemed an evolution to him turned out to be a devolution.

From “Meet the New Gatekeeper, Worse Than the Old Gatekeeper,” Nicholas Carr’s astute Rough Type post:

We celebrated our emancipation from filters, and we praised the democratization brought about by “new media.” The “people formerly known as the audience” had taken charge, proclaimed one herald of the new order, as he wagged his finger at the disempowered journalistic elites. “You were once (exclusively) the editors of the news, choosing what ran on the front page. Now we can edit the news, and our choices send items to our own front pages.”

“The means of media are now in the hands of the people,” declared another triumphalist:

So now anyone can control, create, market, distribute, find, and interact with anything they want. The barrier to entry to media is demolished. Media, always a one-way pipe, now becomes an open pool. . . . Whenever citizens can exercise control, they will. Today they are challenging and changing media — where bloggers now fact-check Dan Rather’s ass — but tomorrow they will challenge and change politics, government, marketing, and education as well. This isn’t just a media revolution, though that’s where we are seeing the impact first. This is a chain-reaction of revolutions. It has just begun.

And the pundits were right — the old media filters dissolved, and “we” took control — though the great disruption has not played out in quite the way they anticipated.•



Nobody shops in brick-and-mortar stores anymore, if you don’t count about 90% of purchases.

Because so many of the physical businesses we connected to on an emotional level were killed by the Internet (book and video stores, record shops, newsstands, etc.), it can seem as if online retail now predominates. But that’s not nearly true, at least not yet. In order to keep expanding market share, Silicon Valley powers like Amazon are venturing off into the real world, a phenomenon that may increase exponentially. I doubt it will work very well with Amazon Books stores and their shallow selections, but perhaps the planned convenience store chain will make a go of it? Tough to say: Corporations great at one type of platform often flounder in others.

In a Technology Review piece, Nicholas Carr visits a new Amazon Books and explains the key role of the smartphone in this surprising turn of events. An excerpt:

Amazon Books may be just the vanguard of a much broader push into brick-and-mortar retailing by the company. In October, the Wall Street Journal revealed that Amazon is planning to open a chain of convenience stores, mainly for groceries, along with drive-in depots where consumers will be able to pick up merchandise ordered online. It has also begun rolling out small “pop-up” stores to hawk its electronic devices. It already has more than two dozen such kiosks in malls around the country, and dozens more are said to be in the works.

Even after 20 years of rapid growth, e-commerce still accounts for less than 10 percent of total retail sales. And now the rise of mobile computing places new constraints on Web stores. They can’t display or promote as many products as they could when their wares were spread across desktop or laptop monitors. That limits the stores’ cross-selling and upselling opportunities and blunts other merchandising tactics.

At the same time, the smartphone, with its apps, its messaging platforms, and its constant connectivity, gives retailers more ways to communicate with and influence customers, even when they’re shopping in stores. This is why the big trend in retailing today is toward “omnichannel” strategies, which blend physical stores, Web stores, and mobile apps in a way that makes the most of the convenience of smartphones and overcomes their limitations. Some omnichannel pioneers, like Sephora and Nordstrom, come from the brick-and-mortar world. But others, like Warby Parker and Bonobos, come from the Web world. Now, with its physical stores, Amazon is following in their tracks. “Pure-play Web retailing is not sustainable,” New York University marketing professor Scott Galloway told me. He points out that the deep discounting and high delivery costs that characterize Web sales have made it hard for Amazon to turn a profit. If Amazon were to remain an online-only merchant, he says, its future success would be in jeopardy. He believes the company will end up opening “hundreds and then thousands of stores.”•


[Photos: New York City subway riders with their newspapers, by Travis Ruse; a subway scene]

The photo at top is from 2005, which might as well be a million years ago. Commuters on the NYC subway were that recently digesting every kind of printed matter, with newspapers especially prominent. We will never witness that scene again, as we’ve transitioned into the age of smartphones, a medium that has disappeared the broadsheet and tabloid and paperback. These tools are wonderfully portable and can hold far more information, though some things have been lost in the changeover. That’s not to say America was wonderful in 2005 and isn’t now–both times were rather grim–but not much good can come of making words shrink, eliminating them, even.

To paraphrase Norma Desmond: News *is* big. It’s the *gadgets* that got small. Reading on smartphones isn’t easy, so skimming headlines about current events is about the best anyone can do now. It’s not just the size of the characters that’s daunting but also the speed with which they travel, as they ping, prompt and interrupt us nonstop. News is always breaking until it feels broken.

Nicholas Carr, one of our time’s preeminent critics (cultural, social and media), has penned a really wonderful Nieman Reports piece on the “nowness” of the news, the concept of fast and first run amok. As he writes, “for 500 years the medium of print has been training us to pay attention.” Not any longer. The opening:

“Thought will spread across the world with the rapidity of light, instantly conceived, instantly written, instantly understood. It will blanket the earth from one pole to the other—sudden, instantaneous, burning with the fervor of the soul from which it burst forth.”

Those opening words would seem to describe, with the zeal typical of the modern techno-utopian, the arrival of our new online media environment with its feeds, streams, texts and tweets. What is the Web if not sudden, instantaneous and burning with fervor? But French poet and politician Alphonse de Lamartine wrote these words in 1831 to describe the emergence of the daily newspaper. Journalism, he proclaimed, would soon become “the whole of human thought.” Books, incapable of competing with the immediacy of morning and evening papers, were doomed: “Thought will not have time to ripen, to accumulate into the form of a book—the book will arrive too late. The only book possible from today is a newspaper.”

Lamartine’s prediction of the imminent demise of books didn’t pan out. Newspapers did not take their place. But he was a prophet nonetheless. The story of media, particularly the news media, has for the last two centuries been a story of the pursuit of ever greater immediacy. From broadsheet to telegram, radio broadcast to TV bulletin, blog to Twitter, we’ve relentlessly ratcheted up the velocity of information flow.

To Shakespeare, ripeness was all. Today, ripeness doesn’t seem to count for much. Nowness is all.•



Aeon, which already presented a piece from Nicholas Carr’s new book, Utopia Is Creepy, has another, a passage about biotechnology which wonders if science will soon move too fast not only for legislation but for ethics as well.

The “philosophy is dead” assertion that’s persistently batted around in scientific circles drives me bonkers, because we dearly need serious thought about our likely commandeering of evolution. Carr doesn’t make that argument but instead rightly wonders whether ethics is likely to be more than a “sideshow” when garages are used to hatch not just computer hardware or search engines but greatly altered or even new life forms. The tools will be cheap, the “creativity” decentralized, the “products” attractive. As Freeman Dyson wrote nearly a decade ago: “These games will be messy and possibly dangerous.”

From Carr:

If to be transhuman is to use technology to change one’s body from its natural state, then we are all already transhuman. But the ability of human beings to alter and augment themselves might expand enormously in the decades ahead, thanks to a convergence of scientific and technical advances in such areas as robotics, bioelectronics, genetic engineering and pharmacology. Progress in the field broadly known as biotechnology promises to make us stronger, smarter and fitter, with sharper senses and more capable minds and bodies. And scientists can already use the much discussed gene-editing tool CRISPR, derived from bacterial immune systems, to rewrite genetic code with far greater speed and precision, and at far lower cost, than was possible before. In simple terms, CRISPR pinpoints a target sequence of DNA on a gene, uses a bacterial enzyme to snip out the sequence, and then splices a new sequence in its place. The inserted genetic material doesn’t have to come from the same species. Scientists can mix and match bits of DNA from different species, creating real-life chimeras.

As long ago as 1923, the English biologist J B S Haldane gave a lecture before the Heretics Society in Cambridge on how science would shape humanity in the future. ‘We can already alter animal species to an enormous extent,’ he observed, ‘and it seems only a question of time before we shall be able to apply the same principles to our own.’ Society would, Haldane felt sure, defer to the scientist and the technologist in defining the boundaries of the human species. ‘The scientific worker of the future,’ he concluded, ‘will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission, and proud of it.’

The ultimate benefit of transhumanism, argues Nick Bostrom, professor of philosophy at the University of Oxford, and one of the foremost proponents of radical human enhancement, is that it expands human potential, giving individuals greater freedom ‘to shape themselves and their lives according to their informed wishes’. Transhumanism unchains us from our nature. Critics take a darker view, suggesting that biological and genetic tinkering is more likely to demean or even destroy the human race than elevate it.

The ethical debate is profound, but it seems fated to be a sideshow.•
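Carr’s “in simple terms” gloss maps almost exactly onto a find-cut-splice operation, so, purely as a reader’s aid, here is a toy Python sketch of that logic. Everything in it is invented for illustration–the sequences, the crispr_edit function–and actual gene editing is, of course, nothing like string surgery:

    # A loose computational analogy for the edit Carr describes:
    # pinpoint a target sequence, snip it out, splice in a replacement.
    def crispr_edit(genome: str, target: str, replacement: str) -> str:
        """Find `target` in `genome`, cut it out, splice in `replacement`."""
        site = genome.find(target)  # the guide's job: locate the target
        if site == -1:
            return genome           # no match, no cut
        # the enzyme's job: snip and splice; the new sequence needn't
        # come from the same "species" of string
        return genome[:site] + replacement + genome[site + len(target):]

    genome = "ATGGCCATTGTAATGGGCCGC"
    print(crispr_edit(genome, target="ATTGTA", replacement="CGGTAC"))
    # ATGGCCCGGTACATGGGCCGC

The point of the toy isn’t biology; it’s how cheap and mechanical the operation looks once it’s framed this way, which is exactly what makes the ethics question urgent.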



The introduction to Nicholas Carr’s soon-to-be published essay collection, Utopia Is Creepy, has been excerpted at Aeon, and it’s a beauty. The writer argues (powerfully) that we’ve defined “progress as essentially technological,” even though the Digital Age quickly became corrupted by commercial interests, and the initial thrill of the Internet faded as it became “civilized” in the most derogatory, Twain-ish use of that word. To Carr, the something gained (access to an avalanche of information) is overwhelmed by what’s lost (withdrawal from reality). The critic applies John Kenneth Galbraith’s term “innocent fraud” to the Silicon Valley marketing of techno-utopianism. 

You could extrapolate this thinking to much of our contemporary culture: binge-watching endless content, Pokémon Go, Comic-Con, fake Reality TV shows, reality-altering cable news, etc. Carr suggests we use the tools of Silicon Valley while refusing the ethos. Perhaps that’s possible, but I doubt you can separate such things.

An excerpt:

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.•



The messenger is supposed to bring the truth, not his or her wishes. It was more than 50 years ago when Marshall McLuhan predicted a Global Village, and those who believed the theorist was happy about this development were listening, at best, with one ear. The prospect frightened him.

McLuhan feared the whole world being connected, thought it an invitation for mayhem, rightly believing local skirmishes would be played out on a gigantic stage. Believing a flatter world will be a more peaceful one assumes that everyone is driven by money, not ideology, not madness. 

Everything seems to arrive with more speed and regularity now, social justice and sorties alike. The whole world is in your pocket, and it’s exploding.

Excerpts from 1)  Mathieu von Rohr’s Spiegel essay “Apocalypse Now,” and 2) Nicholas Carr’s Rough Type post “The Global Village of Violence.”


From von Rohr:

We are living in an age of shocks and crises that could well be traumatizing in their rapid succession and concentration, since it’s not yet clear whether they’re only a temporary jolt or the beginning of a trend with no end in sight. Of course, the sheer number of conflicts has remained constant in recent years. But there is much indication that we find ourselves in a new era of global instability. The biggest geopolitical stories of our time are the destabilization in the Middle East, the European security order and the European Union. In addition, there has been a societal shift in many Western countries: Many citizens are angry at the elites, because they see themselves as victims of globalization, free trade and migration. This anger has enabled the rise of political movements from the fringe to the mainstream in only a few years: Donald Trump, the Brexit movement, Front National and the Alternative for Germany, or AfD. The classic political camps are dissolving as the battle between the political left and the right is replaced by one between Isolationists and Internationalists.

Every now and then, there are phases in international politics during which more happens in the span of a few weeks than would otherwise happen in decades. Do 2014 and 2016 fall into that category? They’re not comparable to the most dramatic phases of the past century, when both World Wars broke out; nor are they anything like 1989, when the Cold War ended and the world order was rearranged. It’s also unclear whether this year will end with the same chaotic violence it started with.

But it is rather likely that global insecurity will become the new status quo.•


From Carr:

We assume that communication and harmony go hand in hand, like a pair of flower children on a garden path. If only we all could share our thoughts and feelings with everyone else all the time, we’d overcome our distrust and fear and live together peaceably. We’d see that we are all one. Facebook and other social media disabuse us of this notion. To be “all one” is to be dissolved — and for many people that is a threat that requires a reaction.

Eamonn Fitzgerald points to a recently uploaded video of a Canadian TV interview with Marshall McLuhan that aired in 1977. By the mid-seventies, a decade after his allotted minutes of fame, McLuhan had come to be dismissed as a mumbo-jumbo-spewing charlatan by the intelligentsia. What the intelligentsia found particularly irritating was that the mumbo jumbo McLuhan spewed fit no piety and often hit uncomfortably close to the mark.

Early on in the clip, the interviewer notes that McLuhan had long ago predicted that electronic communication systems would turn the world into a global village. Most of McLuhan’s early readers had taken this as a utopian prophecy. “But it seems,” the interviewer says, with surprise, “that this tribal world is not very friendly.”•


Attempting to narrow the wealth gap by having corporations make micropayments to citizens for their information seems to me a morally bankrupt system even if it achieves the unlikely and saves some from actual bankruptcy. There has to be a better way, though whether we’re unwittingly working for Facebook and Google for free or accepting bits of coins for our efforts, it’s hard to see how we avoid this privacy-obliterating system we’ve built. We live in a very anti-government time, but corporations are far more pervasive and invasive and will only grow more so as the Internet of Things becomes the thing. We may eventually miss Big Brother.

I’m looking forward to reading Nicholas Carr’s forthcoming book, Utopia Is Creepy, which has the best title ever, and I credit him with pointing me toward Shoshana Zuboff’s Frankfurter Allgemeine essay “The Secrets of Surveillance Capitalism.” As she writes, “the very idea of a functional, effective, affordable product as a sufficient basis for economic exchange is dying,” and what is replacing it is spooky as hell. The Harvard professor’s article is devastating not for imagining a dark future that might be if things go horribly wrong but for laying out where we’re headed if we just incrementally build on the status quo.

The opening:

Google surpassed Apple as the world’s most highly valued company in January for the first time since 2010.  (Back then each company was worth less than 200 billion. Now each is valued at well over 500 billion.)  While Google’s new lead lasted only a few days, the company’s success has implications for everyone who lives within the reach of the Internet. Why? Because Google is ground zero for a wholly new subspecies of capitalism in which profits derive from the unilateral surveillance and modification of human behavior.  This is a new surveillance capitalism that is unimaginable outside the inscrutable high velocity circuits of Google’s digital universe, whose signature feature is the Internet and its successors.  While the world is riveted by the showdown between Apple and the FBI, the real truth is that the surveillance capabilities being developed by surveillance capitalists are the envy of every state security agency.  What are the secrets of this new capitalism, how do they produce such staggering wealth, and how can we protect ourselves from its invasive power?

“Most Americans realize that there are two groups of people who are monitored regularly as they move about the country.  The first group is monitored involuntarily by a court order requiring that a tracking device be attached to their ankle. The second group includes everyone else…”

Some will think that this statement is certainly true. Others will worry that it could become true. Perhaps some think it’s ridiculous.  It’s not a quote from a dystopian novel, a Silicon Valley executive, or even an NSA official. These are the words of an auto insurance industry consultant intended as a defense of  “automotive telematics” and the astonishingly intrusive surveillance capabilities of the allegedly benign systems that are already in use or under development. It’s an industry that has been notoriously exploitative toward customers and has had obvious cause to be anxious about the implications of self-driving cars for its business model. Now, data about where we are, where we’re going, how we’re feeling, what we’re saying, the details of our driving, and the conditions of our vehicle are turning into beacons of revenue that illuminate a new commercial prospect.•



Here are 50 ungated pieces of wonderful journalism from 2015, alphabetized by author name, which made me consider something new or reconsider old beliefs or just delighted me. (Some selections are from gated publications that allow a number of free articles per month.) If your excellent work isn’t on the list, that’s more my fault than yours.

  • “Who Runs the Streets of New Orleans?” (David Amsden, The New York Times Magazine) As private and public sector missions increasingly overlap, here’s an engaging look at the privatization of some policing in the French Quarter.
  • “In the Beginning” (Ross Andersen, Aeon) A bold and epic essay about the elusive search for the origins of the universe.
  • Ask Me Anything (Anonymous, Reddit) A 92-year-old German woman who was born into Nazism (and participated in it) sadly absolves herself of all blame while answering questions about that horrible time.
  • “Rethinking Extinction” (Stewart Brand, Aeon) The Whole Earth Catalog founder thinks the chance of climate-change catastrophe overrated, arguing we should utilize biotech to repopulate dwindling species.
  • “Anchorman: The Legend of Don Lemon” (Taffy Brodesser-Akner, GQ) A deeply entertaining look into the perplexing facehole of Jeff Zucker’s most gormless word-sayer and, by extension, the larger cable-news zeitgeist.
  • “How Social Media Is Ruining Politics” (Nicholas Carr, Politico) A lament that our shiny new tools have provided provocative trolls far more credibility than a centralized media ever allowed for.
  • “Clans of the Cathode” (Tom Carson, The Baffler) One of our best culture critics looks at the meaning of various American sitcom families through the medium’s history.
  • “The Black Family in the Age of Mass Incarceration” (Ta-Nehisi Coates, The Atlantic) The author examines the tragedy of the African-American community being turned into a penal colony, explaining the origins of the catastrophic policy failure.
  • “Perfect Genetic Knowledge” (Dawn Field, Aeon) The essayist thinks about a future in which we’ve achieved “perfect knowledge” of whole-planet genetics.
  • “A Strangely Funny Russian Genius” (Ian Frazier, The New York Review of Books) Daniil Kharms was a very funny writer, if you appreciate slapstick that ends in a body count.
  • “Tomorrow’s Advance Man” (Tad Friend, The New Yorker) Profile of Silicon Valley strongman Marc Andreessen and his milieu, an enchanted land in which adults dream of riding unicorns.
  • “Build-a-Brain” (Michael Graziano, Aeon) The neuroscientist’s ambitious thought experiment about machine intelligence is a piece I thought about continuously throughout the year.
  • Ask Me Anything (Stephen Hawking, Reddit) Among other things, the physicist warns that the real threat of superintelligent machines isn’t malice but relentless competence.
  • “Engineering Humans for War” (Annie Jacobsen, The Atlantic) War is inhuman, it’s been said, and the Pentagon wants to make it more so by employing bleeding-edge biology and technology to create super soldiers.
  • “The Wrong Head” (Mike Jay, London Review of Books) A look at insanity in 1840s France, which demonstrates that mental illness is often expressed in terms of the era in which it’s experienced.
  • “Death Is Optional” (Daniel Kahneman and Yuval Noah Harari, Edge) Two of my favorite big thinkers discuss the road ahead, a highly automated tomorrow in which medicine, even mortality, may not be an egalitarian affair.
  • “Where the Bodies Are Buried” (Patrick Radden Keefe, The New Yorker) Ceasefires, even treaties, don’t completely conclude wars, as evidenced by this haunting revisitation of the heartbreaking IRA era.
  • “Porntopia” (Molly Lambert, Grantland) The annual Adult Video News Awards in Las Vegas, the Oscars of oral, allows the writer to look into a funhouse-mirror reflection of America.
  • “The Robots Are Coming” (John Lanchester, London Review of Books) A remarkably lucid explanation of how quickly AI may remake our lives and labor in the coming decades.
  • “Last Girl in Larchmont” (Emily Nussbaum, The New Yorker) The great TV critic provides a postmortem of Joan Rivers and her singular (and sometimes disquieting) brand of feminism.
  • “President Obama & Marilynne Robinson: A Conversation, Part 1 & Part 2” (Barack Obama and Marilynne Robinson, New York Review of Books) Two monumental Americans discuss the state of the novel and the state of the union.
  • Ask Me Anything (Elizabeth Parrish, Reddit) The CEO of BioViva announces she’s patient zero for the company’s experimental age-reversing gene therapies. Strangest thing I read all year.
  • “Why Alien Life Will Be Robotic” (Sir Martin Rees, Nautilus) The astronomer argues that ETs in our inhospitable universe have likely already transitioned into conscious machines.
  • Ask Me Anything (Anders Sandberg, Reddit) Heady conversation about existential risks, Transhumanism, economics, space travel and future technologies conducted by the Oxford researcher.
  • “Alien Rights” (Lizzie Wade, Aeon) Manifest Destiny will, sooner or later, become a space odyssey. What ethics should govern exploration of the final frontier?
  • “Peeling Back the Layers of a Born Salesman’s Life” (Michael Wilson, The New York Times) The paper’s gifted crime writer pens a posthumous profile of a protean con man, a Zelig on the make who crossed paths with Abbie Hoffman, Otto Preminger and Annie Leibovitz, among others.
  • “The Pop Star and the Prophet” (Sam York, BBC Magazine) Philosopher Jacques Attali, who predicted, back in the ’70s, the downfall of the music business, tells the writer he now foresees similar turbulence for manufacturing.


The trouble with everyone being connected and quantified isn’t only that we’re sharing, intentionally or otherwise, so much personal information, but also what that data can further reveal once algorithms have had their way with it. It’s like an inverse game of telephone in the Smartphone Age, the information becoming more precise as it travels.

From “You Are Your Phone,” a sharp Rough Type post by Nicholas Carr:

The Wall Street Journal reports today that Silicon Valley lending startups are looking to base personal loan decisions on analyses of data from individuals’ phones. The apps running on a person’s device, entrepreneurs have found, “generate huge amounts of data — texts, emails, GPS coordinates, social-media posts, retail receipts, and so on — indicating thousands of subtle patterns of behavior that correlate with repayment or default.” How you use your phone reveals more than you think:

Even obscure variables such as how frequently a user recharges the phone’s battery, how many incoming text messages they receive, how many miles they travel in a given day or how they enter contacts into their phone — the decision to add last name correlates with creditworthiness — can bear on a decision to extend credit.

Meanwhile, the New York Times today reports on a new study published in Science that reveals how a person’s economic status can be determined through a fairly simple analysis of phone use. The researchers, working in Africa, collected details “about when calls were made and received and the length of the calls” as well as “when text messages were sent, and which cellphone towers the texts and calls were routed through.” They analyzed this metadata to “build an algorithm that predicts how wealthy or impoverished a given cellphone user is. Using the same model, the researchers were able to answer even more specific questions, like whether a household had electricity.”

I am not a number, you declare. I am more than a credit score. You may well be. But the tell-tale phone reveals more than one’s financial standing and trustworthiness.•
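
It’s worth pausing on how little machinery such an inference requires. Here’s a toy sketch of the kind of model described above, predicting a wealth label from call-record metadata; the features, numbers, and labels are invented for illustration, not taken from the study or the lenders’ apps:

    # Hypothetical features per user: calls made per day, average call
    # length in seconds, texts sent per day, distinct towers routed through.
    from sklearn.linear_model import LogisticRegression

    X = [
        [1.2,  35.0,  2.0, 1],
        [2.1,  45.0,  3.0, 1],
        [7.8, 180.0, 35.0, 5],
        [9.5, 210.0, 40.0, 6],
    ]
    y = [0, 0, 1, 1]  # 0 = below median wealth, 1 = above (invented labels)

    model = LogisticRegression().fit(X, y)
    print(model.predict([[8.0, 190.0, 30.0, 4]]))  # -> [1]

The unsettling part isn’t the model, which is commodity software; it’s that the features cost nothing to collect once the phone is already reporting them.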


Nicholas Carr’s The Glass Cage, a must-read if you want to understand all sides of this new machine age, is now out in paperback. I like Carr’s thinking when I agree with him, and I like it when I don’t. He always makes me see things in a fresh way, and he’s a miraculously graceful writer. Carr put an excerpt from the book, one about the history of automation, on his blog. Here’s a smaller section from that:

Automated machines existed before World War II. James Watt’s steam engine, the original prime mover of the Industrial Revolution, incorporated an ingenious feedback device — the fly-ball governor — that enabled it to regulate its own operation. The Jacquard loom, invented in France around 1800, used steel punch cards to control the movements of spools of different-colored threads, allowing intricate patterns to be woven automatically. In 1866, a British engineer named J. Macfarlane Gray patented a steamship steering mechanism that was able to register the movement of a boat’s helm and, through a gear-operated feedback system, adjust the angle of the rudder to maintain a set course.

But the development of fast computers, along with other sensitive electronic controls, opened a new chapter in the history of machines. It vastly expanded the possibilities of automation. As the mathematician Norbert Wiener, who helped write the prediction algorithms for the Allies’ automated antiaircraft gun, explained in his 1950 book The Human Use of Human Beings, the advances of the 1940s enabled inventors and engineers to go beyond “the sporadic design of individual automatic mechanisms.” The new technologies, while designed with weaponry in mind, gave rise to “a general policy for the construction of automatic mechanisms of the most varied type.” They opened the way for “the new automatic age.”

Beyond the pursuit of progress and productivity lay another impetus for the automatic age: politics.•



If Donald Trump grew a small, square mustache above his lip, would his poll numbers increase yet again? For a candidate running almost purely on attention, can any shock really be deleterious?

Howard Dean was the first Internet candidate, and Barack Obama the first to ride those new rules to success. But things have already markedly changed: that was a time of bulky machines on your lap, and the new political reality rests lightly in your pocket. A smartphone’s messages are brief and light on details, and its buzzing is more important than anything it delivers.

The diffusion of media was supposed to make it impossible for a likable incompetent like George W. Bush to rise. How could such a person survive the scrutiny of millions of “citizen journalists” like us? If anything, it’s made it easier, even for someone who’s unlikable and incompetent. For a celeb with a Reality TV willingness to be ALL CAPS all the time, facts get lost in the noise, at least for a while.

That doesn’t mean Donald Trump, an adult baby with an attention span that falls somewhere far south of 15 months, will be our next President, but it does indicate that someone ridiculously unqualified and hugely bigoted gets to be on the national stage and inform our political discourse. The same way Jenny McCarthy used her platform to play doctor and spearhead the anti-vaccination movement, Trump gets to be a make-believe Commander-in-Chief for a time.

Unsurprisingly, Nicholas Carr has written the best piece on the dubious democracy the new tools have delivered, a Politico Magazine article that analyzes election season in a time that favors a provocative troll, a “snapchat personality,” as he terms it. The opening:

Our political discourse is shrinking to fit our smartphone screens. The latest evidence came on Monday night, when Barack Obama turned himself into the country’s Instagrammer-in-Chief. While en route to Alaska to promote his climate agenda, the president took a photograph of a mountain range from a window on Air Force One and posted the shot on the popular picture-sharing network. “Hey everyone, it’s Barack,” the caption read. “I’ll be spending the next few days touring this beautiful state and meeting with Alaskans about what’s going on in their lives. Looking forward to sharing it with you.” The photo quickly racked up thousands of likes.

Ever since the so-called Facebook election of 2008, Obama has been a pacesetter in using social media to connect with the public. But he has nothing on this year’s field of candidates. Ted Cruz live-streams his appearances on Periscope. Marco Rubio broadcasts “Snapchat Stories” at stops along the trail. Hillary Clinton and Jeb Bush spar over student debt on Twitter. Rand Paul and Lindsey Graham produce goofy YouTube videos. Even grumpy old Bernie Sanders has attracted nearly two million likers on Facebook, leading the New York Times to dub him “a king of social media.”

And then there’s Donald Trump. If Sanders is a king, Trump is a god. A natural-born troll, adept at issuing inflammatory bulletins at opportune moments, he’s the first candidate optimized for the Google News algorithm.•


In one of his typically bright, observant posts, Nicholas Carr wryly tackles Amazon’s new scheme of paying Kindle Unlimited authors based on how many of their pages are read, a system which reduces the written word to a granular level of constant, non-demanding engagement. 

There’s an argument to be made that similar systems have worked quite well in the past: Didn’t Charles Dickens publish under comparable, if not as precisely quantified, circumstances when turning out his serial novels? Sort of. Maybe not to the same minute degree, but he was usually only as good as his last paragraph (which, thankfully, was always pretty good).

The difference is that while it worked for Dickens, this process hasn’t been the motor behind most of the great writing in our history. James Joyce would not have survived very well on this nano scale. Neither would have Virginia Woolf, William Faulkner, Marcel Proust, etc. Their books aren’t just individual pages leafed together but a cumulative effect, a treasure that comes only to those who clear obstacles.

Shakespeare may have had to pander to the groundlings to pay the theater’s light bill, but what if the lights had been turned off mid-performance if he went more than a page without aiming for the bottom of the audience?

Carr’s opening:

When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed.•
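
The fixed pool also explains, in a few lines of arithmetic, why the arbitrage fails. A back-of-the-envelope sketch; the fund size and page counts are invented, since Amazon’s actual figures vary month to month:

    fund = 11_000_000                  # hypothetical monthly royalty pool, in dollars
    total_pages_read = 2_000_000_000   # hypothetical pages read across all titles

    per_page = fund / total_pages_read
    print(f"{per_page:.5f} dollars per page")   # 0.00550, about half a cent

    # Faking "reads" only grows the denominator:
    faked_pages = 500_000_000
    print(f"{fund / (total_pages_read + faked_pages):.5f}")  # 0.00440 -- the rate drops

Every gamed page dilutes the per-page rate for all authors, your own gibberish included, which is exactly the cap on the upside that Carr ran into.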

 


Wow, this is wonderful: Nicholas Carr posted a great piece from a recent lecture in which he addressed Marshall McLuhan’s idea of automation as media. In this excerpt, he tells a history of how cartography, likely the first medium, went from passive to active player as we transitioned from paper to software:

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving.•
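
The work the software map “assumes” is, at bottom, shortest-path search. A minimal sketch over a toy road graph; the place names and mileages are invented:

    import heapq

    def shortest_route(graph, start, goal):
        # Dijkstra's algorithm: the route-plotting the paper map left
        # to its reader, and the software map now performs on our behalf.
        queue = [(0, start, [start])]
        visited = set()
        while queue:
            miles, node, path = heapq.heappop(queue)
            if node == goal:
                return miles, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, dist in graph.get(node, {}).items():
                heapq.heappush(queue, (miles + dist, neighbor, path + [neighbor]))
        return None

    roads = {
        "Gas Station": {"Junction": 4, "River Rd": 7},
        "Junction": {"Motel": 3},
        "River Rd": {"Motel": 1},
    }
    print(shortest_route(roads, "Gas Station", "Motel"))
    # -> (7, ['Gas Station', 'Junction', 'Motel'])

The paper map handed you the whole graph and left the search to you; the software map runs the search itself and hands you only the next turn.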


It’s perplexing that the American school system (like every other one I know of) doesn’t employ video games as teaching tools, since they’re both satisfying and edifying and can allow students to pursue knowledge at a personalized pace. It’s a real lost opportunity to think learning can’t be vibrant and fun.

Beyond the classroom, Nicholas Carr wonders why software is created to pose no obstacles to us, to not challenge us but replace us. He addresses this point, among others, in an excellent discussion with Tom Chatfield of BBC Future. An excerpt:

Should life be more like a video game?

Tom Chatfield:

I was glad to see that you use video games in the book as an example of human-machine interactions where the difficulty is the point rather than the problem. Successful games are like a form of rewarding work, and can offer the kind of complex, constant, meaningful feedback that we have evolved to find deeply satisfying. Yet there is also a bitter irony, for me, in the fact that the work some people do on a daily basis is far less skilled and enjoyable and rewarding.

Nicholas Carr:

Video games are very interesting because in their design they go against all of the prevailing assumptions about how you design software. They’re not about getting rid of friction, they’re not about making sure that the person using them doesn’t have to put in much effort or think that much. The reason we enjoy them is because they don’t make it easy for us. They constantly push us up against friction – not friction that simply frustrates us, but friction that leads to ever-higher levels of talent.

If you look at that and compare it to what we know about how people gain expertise, how we build talent, it’s very, very similar. We know that in order to gain talent you have to come up against hard challenges in which you exercise your skills to the utmost, over and over again, and slowly you gain a new level of skill, and then you are challenged again. 

And also I think, going even further, that the reason people enjoy videogames is the same reason that people enjoy building expertise and overcoming challenges. It’s really fundamentally enjoyable to be struggling with a hard challenge that we then ultimately overcome, and that gives us the talent necessary to tackle an even harder challenge.

One of the fundamental concerns of the book is the fear that we are creating a world based on the assumption that the less we have to engage in challenging tasks, the better. It seems to me that that is antithetical to everything we know about what makes us satisfied and fulfilled and happy.•


I wish everyone writing about technology could turn out prose as sparkling and lucid as Nicholas Carr’s. In a New York Times opinion piece, he stresses that while people are flawed, so are computers, and our silicon counterparts thus far lack the dexterity we possess to react to the unforeseen. He suggests humans and machines permanently remain a team, allowing us to benefit from the best of both.

I think that’s the immediate future, but I still believe market forces will ultimately cede to robots anything they can do as well (or nearly as well) as humans. And I’m curious as to the effects of Deep Learning on the impromptu responses of machinery.

From Carr:

While our flaws loom large in our thoughts, we view computers as infallible. Their scripted consistency presents an ideal of perfection far removed from our own clumsiness. What we forget is that our machines are built by our own hands. When we transfer work to a machine, we don’t eliminate human agency and its potential for error. We transfer that agency into the machine’s workings, where it lies concealed until something goes awry.
 
Computers break down. They have bugs. They get hacked. And when let loose in the world, they face situations that their programmers didn’t prepare them for. They work perfectly until they don’t.
 
Many disasters blamed on human error actually involve chains of events that are initiated or aggravated by technological failures. Consider the 2009 crash of Air France Flight 447 as it flew from Rio de Janeiro to Paris. The plane’s airspeed sensors iced over. Without the velocity data, the autopilot couldn’t perform its calculations. It shut down, abruptly shifting control to the pilots. Investigators later found that the aviators appeared to be taken by surprise in a stressful situation and made mistakes. The plane, with 228 passengers, plunged into the Atlantic.

The crash was a tragic example of what scholars call the automation paradox. Software designed to eliminate human error sometimes makes human error more likely. When a computer takes over a job, the workers are left with little to do. Their attention drifts. Their skills, lacking exercise, atrophy. Then, when the computer fails, the humans flounder.
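
The failure mode Carr describes has a simple shape in code: automation that doesn’t degrade gracefully but disengages outright. A schematic sketch, emphatically not a description of any real avionics system:

    class AutopilotDisengaged(Exception):
        pass

    def autopilot_step(airspeed, hold_course):
        # With valid sensor data the software flies the plane; the moment
        # the data goes bad, it hands everything back to humans whose
        # skills, per the automation paradox, it has let atrophy.
        if airspeed is None:                  # e.g., iced-over pitot tubes
            raise AutopilotDisengaged("airspeed invalid; reverting to manual control")
        hold_course(airspeed)

    autopilot_step(450, lambda v: None)       # nominal: the computer flies
    try:
        autopilot_step(None, lambda v: None)  # sensor failure: abrupt handoff
    except AutopilotDisengaged as err:
        print(err)

The software works perfectly until it doesn’t, and the handoff arrives at precisely the moment the humans are least prepared for it.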


In her NYRB piece on Nicholas Carr’s The Glass Cage, Sue Halpern runs through periods of the twentieth century when fears of technological unemployment were raised before receding, mentioning a 1980 Time cover story about the labor-destabilizing force of machines. These projections seemed to have been proved false as job creation increased considerably during the Reagan Administration, but as Halpern goes on to note, that feature article may have been prescient in ways we didn’t then understand. Income inequality began to boom during the last two decades of the previous century, a worrying trajectory that’s only been exacerbated as we’ve moved deeper into the Digital Revolution. Certainly there are other causes, but automation is likely among them, with the new wealth in the hands of fewer people, algorithms and robots managing a good portion of the windfall-creating toil. And if you happen to be working in one of the many fields likely to soon be automated (hotels, restaurants, warehouses, etc.), you might want to ask some former travel agents and record-store owners for résumé tips.

Halpern zeroes in on a Carr topic often elided by economists debating whether the next few decades will be boon or bane for the non-wealthy: the hole left in our hearts when we’re “freed” of work. Is that something common to us because we were born on the other side of the transformation, or are humans marked indelibly with the need to produce beyond tweets and likes? Maybe it’s the work, not the play, that’s the thing. From Halpern:

Here is what that future—which is to say now—looks like: banking, logistics, surgery, and medical recordkeeping are just a few of the occupations that have already been given over to machines. Manufacturing, which has long been hospitable to mechanization and automation, is becoming more so as the cost of industrial robots drops, especially in relation to the cost of human labor. According to a new study by the Boston Consulting Group, currently the expectation is that machines, which now account for 10 percent of all manufacturing tasks, are likely to perform about 25 percent of them by 2025. (To understand the economics of this transition, one need only consider the American automotive industry, where a human spot welder costs about $25 an hour and a robotic one costs $8. The robot is faster and more accurate, too.) The Boston group expects most of the growth in automation to be concentrated in transportation equipment, computer and electronic products, electrical equipment, and machinery.

Meanwhile, algorithms are writing most corporate reports, analyzing intelligence data for the NSA and CIA, reading mammograms, grading tests, and sniffing out plagiarism. Computers fly planes—Nicholas Carr points out that the average airline pilot is now at the helm of an airplane for about three minutes per flight—and they compose music and pick which pop songs should be recorded based on which chord progressions and riffs were hits in the past. Computers pursue drug development—a robot in the UK named Eve may have just found a new compound to treat malaria—and fill pharmacy vials.

Xerox uses computers—not people—to select which applicants to hire for its call centers. The retail giant Amazon “employs” 15,000 warehouse robots to pull items off the shelf and pack boxes. The self-driving car is being road-tested. A number of hotels are staffed by robotic desk clerks and cleaned by robotic chambermaids. Airports are instituting robotic valet parking. Cynthia Breazeal, the director of MIT’s personal robots group, raised $1 million in six days on the crowd-funding site Indiegogo, and then $25 million in venture capital funding, to bring Jibo, “the world’s first social robot,” to market. …

There is a certain school of thought, championed primarily by those such as Google’s Larry Page, who stand to make a lot of money from the ongoing digitization and automation of just about everything, that the elimination of jobs concurrent with a rise in productivity will lead to a leisure class freed from work. Leaving aside questions about how these lucky folks will house and feed themselves, the belief that most people would like nothing more than to be able to spend all day in their pajamas watching TV—which turns out to be what many “nonemployed” men do—sorely misconstrues the value of work, even work that might appear to an outsider to be less than fulfilling. Stated simply: work confers identity. When Dublin City University professor Michael Doherty surveyed Irish workers, including those who stocked grocery shelves and drove city buses, to find out if work continues to be “a significant locus of personal identity,” even at a time when employment itself is less secure, he concluded that “the findings of this research can be summed up in the succinct phrase: ‘work matters.’”

How much it matters may not be quantifiable, but in an essay in The New York Times, Dean Baker, the codirector of the Center for Economic and Policy Research, noted that there was

a 50 to 100 percent increase in death rates for older male workers in the years immediately following a job loss, if they previously had been consistently employed.

One reason was suggested in a study by Mihaly Csikszentmihalyi, the author of Flow: The Psychology of Optimal Experience (1990), who found, Carr reports, that “people were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours.”


Because of computerized autopilot systems and a greater understanding of wind shears, flying has never been safer than it is right now. Boarding a domestic carrier in the United States is a particularly low-risk means of travel. But increasingly automated aviation can cause human pilots to experience skill fade, something which has alarmed Nicholas Carr, and now Steve Casner of Slate is concerned about two-pilot crews being halved. My assumption is that if accidents remain the rare exception, the automation process will continue apace. An excerpt:

Now that we’ve gone from four pilots to two, and with more automation on the way, you don’t need to be a mind reader to know what the industry is thinking next. The aircraft manufacturer Embraer has already revealed plans for a single-pilot regional jet, and Cessna has produced several small single-pilot jets. (I’m rated to fly this one.) And as my colleagues at NASA are busy studying the feasibility of large single-pilot airliners, a Delta Air Lines pilot made it look easy a few weeks ago when the other pilot was accidentally locked out of the cockpit. But should we be a little nervous about the idea of having just one pilot up there in the front office? The research says maybe so.

Studies show that pilots make plenty of errors. That’s why we have two pilots in the airline cockpit—to construct a sort of human safety net. While one pilot operates the aircraft’s controls, the other pilot keeps watch for occasional errors and tries to point them out before they cause any harm. NASA engineer Everett Palmer likes to sum up the idea with a quip: “To err is human, to be error-tolerant is divine.” Keeping the error-maker and getting rid of the error-catcher may not prove to be very error-tolerant.

Besides, automation doesn’t eliminate human error—it just relocates it. The engineers and programmers who design automation are humans, too. They write complex software that contains bugs and nuances. Pilots often speak of automation surprises in which the computers do something unexpected, occasionally resulting in accidents. Having only one pilot in the cockpit might compromise our ability to make sense of these technological noodle-scratchers when they pop up.•
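
The case for the second pilot reduces to back-of-the-envelope probability. If a lone pilot misses a given error with probability p, and if the two pilots’ lapses were independent (a generous assumption, since real crews share fatigue and distractions), both miss it with probability p squared; the miss rate here is invented for illustration:

    p = 0.03                                   # hypothetical per-error miss rate
    print(f"one pilot misses:  {p:.4f}")       # 0.0300
    print(f"both pilots miss: {p ** 2:.4f}")   # 0.0009

Even granting that independence is optimistic, that order-of-magnitude safety net is what the single-pilot cockpit proposes to remove, while the automation surprises that need catching remain.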


The Penguin blog has a Nicholas Carr essay about modern navigation devices and the effect they have on the “maps” in our brains, “Welcome to Nowheresville,” which is adapted from a passage in his most recent book, The Glass Cage. Carr is one of those blessed thinkers I always enjoy reading whether I agree with him or not. I don’t necessarily share his concerns about how GPS is redefining what it is to be human (we’ve always been and always will be fluidly defined) or “skill fade” causing transportation fatalities (the net number of such deaths will likely decline as travel becomes more autonomous), but it’s certainly worth considering the unknown neurological consequences of offloading our piloting skills. Are we unwittingly creating a new mismatch disease? An excerpt:

A loss of navigational acumen can have dire consequences for airline pilots and lorry drivers. Most of us, in our daily routines of driving and walking and otherwise getting around, are unlikely to find ourselves in such perilous spots. Which raises the obvious question: Who cares? As long as we arrive at our destination, does it really matter whether we maintain our navigational sense or offload it to a machine? Those of us living in lands crisscrossed by well marked roads and furnished with gas stations, motels, and 7-Elevens long ago lost both the custom of and the capacity for prodigious feats of wayfinding. Our ability to perceive and interpret topography, especially in its natural state, is already much reduced. Paring it away further, or dispensing with it altogether, doesn’t seem like such a big deal, particularly if in exchange we get an easier go of it.

But while we may no longer have much of a cultural stake in the conservation of our navigational prowess, we still have a personal stake in it. We are, after all, creatures of the earth. We’re not abstract dots proceeding along thin blue lines on computer screens. We’re real beings in real bodies in real places. Getting to know a place takes effort, but it ends in fulfillment and in knowledge. It provides a sense of personal accomplishment and autonomy, and it also provides a sense of belonging, a feeling of being at home in a place rather than passing through it. …

The harder people work at building cognitive maps of space, the stronger their underlying memory circuits seem to become. They can actually grow grey matter in the hippocampus—a phenomenon documented in cab drivers—in a way that’s analogous to the building of muscle mass through physical exertion.

But when they simply follow turn-by-turn instructions in “a robotic fashion,” Bohbot warns, they don’t “stimulate their hippocampus” and as a result may leave themselves more susceptible to memory loss. Bohbot worries that, should the hippocampus begin to atrophy from a lack of use in navigation, the result could be a general loss of memory and a growing risk of dementia. “Society is geared in many ways toward shrinking the hippocampus,” she told an interviewer. “In the next twenty years, I think we’re going to see dementia occurring earlier and earlier.”•

