Books


Yuval Noah Harari writes this in his great book Sapiens:

Were, say, a Spanish peasant to have fallen asleep in A.D. 1000 and woken up 500 years later, to the din of Columbus’ sailors boarding the Nina, Pinta, and Santa Maria, the world would have seemed to him quite familiar. Despite many changes in technology, manners and political boundaries, this medieval Rip Van Winkle would have felt at home. But had one of Columbus’ sailors fallen into a similar slumber and woken up to the ringtone of a twenty-first century iPhone, he would have found himself in a world strange beyond comprehension. ‘Is this heaven?’ he might well have asked himself. ‘Or perhaps — hell?’

What kind of peasants will we be? Is the road forward a high-speed one that will render tomorrow unrecognizable? It would seem so, unless calamity were to sideswipe us and delay (or permanently foreclose) the next phase. But if we are fortunate enough to travel safely, will a ruin of our own making await us in the form of Strong AI? I doubt it’s right around the bend as some feel, but it can’t hurt to consider such a scenario. From philosopher Stephen Cave’s Financial Times review of a slate of recent books about the perils of superintelligence:

It is tempting to suppose that AI would be a tool like any other; like the wheel or the laptop, an invention that we could use to further our interests. But the brilliant British mathematician IJ Good, who worked with Alan Turing both on breaking the Nazis’ secret codes and subsequently in developing the first computers, realised 50 years ago why this would not be so. Once we had a machine that was even slightly more intelligent than us, he pointed out, it would naturally take over the intellectual task of designing further intelligent machines. Because it was cleverer than us, it would be able to design even cleverer machines, which could in turn design even cleverer machines, and so on. In Good’s words: “There would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Good’s prophecy is at the heart of the book Our Final Invention: Artificial Intelligence and the End of the Human Era, in which writer and film-maker James Barrat interviews leading figures in the development of super-clever machines and makes a clear case for why we should be worried. It is true that progress towards human-level AI has been slower than many predicted — pundits joke that it has been 20 years away for the past half-century. But it has, nonetheless, achieved some impressive milestones, such as the IBM computers that beat grandmaster Garry Kasparov at chess in 1997 and won the US quiz show Jeopardy! in 2011. In response to Barrat’s survey, more than 40 per cent of experts in the field expected the invention of intelligent machines within 15 years from now and the great majority expected it by mid-century at the latest.

Following Good, Barrat then shows how artificial intelligence could become super-intelligence within a matter of days, as it starts fixing its own bugs, rewriting its own software and drawing on the wealth of knowledge now available online. Once this “intelligence explosion” happens, we will no longer be able to understand or predict the machine, any more than a mouse can understand or predict the actions of a human.•

 


It’s just perfect that Monopoly, which brought cutthroat capitalism to the living room, allowing you to bankrupt grandma, was birthed through dubious business deals. In Mary Pilon’s new book, The Monopolists, the author traces the key role played in the game’s invention by Elizabeth Magie, whose Landlord’s Game, which preceded Charles Darrow’s blockbuster, has largely been lost to history. From James McManus in the New York Times:

Our favorite board game, of course, is Monopoly, which has also gone global, and for similar reasons. Played by everyone from Jerry Hall and Mick Jagger to Carmela and Tony Soprano, it apparently scratches an itch to wheel and deal few of us can reach in real life. The game is sufficiently redolent of capitalism that in 1959 Fidel Castro ordered the destruction of every Monopoly set in Cuba, while these days Vladimir Putin seems to be its ultimate aficionado.

What dyed-in-the-wool free marketeer invented this cardboard facsimile of real estate markets, and who owns it now? From whose ideas did it evolve? These are the questions Mary Pilon, formerly a reporter at The New York Times and The Wall Street Journal, proposes to answer in her briskly enlightening first book, The Monopolists. For decades the official story, slipped into every Monopoly box, was that Charles Darrow, an unemployed salesman, had a sudden light-bulb moment about a game to amuse his poor family during the Depression. After selling it to Parker Brothers in 1935, he lived lavishly ever after on the proceeds.

To trace how far removed this was from the truth, Pilon introduces Elizabeth Magie. Born in 1866, she was an unmarried stenographer whose passions included politics and — even more rare among women of that era — inventing. In 1904 she received a patent for the Landlord’s Game, a board contest she designed to cultivate her progressive, proto-feminist values, and as a rebuke to the slumlords and other monopolists of the Gilded Age.

Her game featured spaces for railroads and rental properties on each side of a square board, with water and electricity companies and a corner labeled “Go to Jail.” Players earned wages, paid taxes; the winner was the one who best foiled landlords’ attempts to send her to the poorhouse. Magie helped form a company to market it, but it never really took off. The game appealed mostly to socialists and Quakers, many of whom made their own sets; other players renamed properties and added things like Chance and Community Chest cards. Even less auspiciously for Magie, many people began referring to it as “monopoly” and giving it as gifts. Then in 1932, Charles Darrow received one with spaces named for streets in Atlantic City.

No light bulb necessary.•


In her NYRB piece on Nicholas Carr’s The Glass Cage, Sue Halpern runs through periods of the twentieth century when fears of technological unemployment were raised before receding, mentioning a 1980 Time cover story about the labor-destabilizing force of machines. These projections seemed to be proved false as job creation increased considerably during the Reagan Administration, but as Halpern goes on to note, that feature article may have been prescient in ways we didn’t then understand. Income inequality began to boom during the last two decades of the previous century, a worrying trajectory that’s only been exacerbated as we’ve moved deeper into the Digital Revolution. Certainly there are other causes, but automation is likely among them, with the new wealth in the hands of fewer, algorithms and robots managing a good portion of the windfall-creating toil. And if you happen to be working in many of the fields likely to soon be automated (hotels, restaurants, warehouses, etc.), you might want to ask some former travel agents and record-store owners for resume tips.

Halpern zeroes in on a Carr topic often elided by economists debating whether the next few decades will be boon or bane for the non-wealthy: the hole left in our hearts when we’re “freed” of work. Is that something common to us because we were born on the other side of the transformation, or are humans marked indelibly with the need to produce beyond tweets and likes? Maybe it’s the work, not the play, that’s the thing. From Halpern:

Here is what that future—which is to say now—looks like: banking, logistics, surgery, and medical recordkeeping are just a few of the occupations that have already been given over to machines. Manufacturing, which has long been hospitable to mechanization and automation, is becoming more so as the cost of industrial robots drops, especially in relation to the cost of human labor. According to a new study by the Boston Consulting Group, currently the expectation is that machines, which now account for 10 percent of all manufacturing tasks, are likely to perform about 25 percent of them by 2025. (To understand the economics of this transition, one need only consider the American automotive industry, where a human spot welder costs about $25 an hour and a robotic one costs $8. The robot is faster and more accurate, too.) The Boston group expects most of the growth in automation to be concentrated in transportation equipment, computer and electronic products, electrical equipment, and machinery.

Meanwhile, algorithms are writing most corporate reports, analyzing intelligence data for the NSA and CIA, reading mammograms, grading tests, and sniffing out plagiarism. Computers fly planes—Nicholas Carr points out that the average airline pilot is now at the helm of an airplane for about three minutes per flight—and they compose music and pick which pop songs should be recorded based on which chord progressions and riffs were hits in the past. Computers pursue drug development—a robot in the UK named Eve may have just found a new compound to treat malaria—and fill pharmacy vials.

Xerox uses computers—not people—to select which applicants to hire for its call centers. The retail giant Amazon “employs” 15,000 warehouse robots to pull items off the shelf and pack boxes. The self-driving car is being road-tested. A number of hotels are staffed by robotic desk clerks and cleaned by robotic chambermaids. Airports are instituting robotic valet parking. Cynthia Breazeal, the director of MIT’s personal robots group, raised $1 million in six days on the crowd-funding site Indiegogo, and then $25 million in venture capital funding, to bring Jibo, “the world’s first social robot,” to market. …

There is a certain school of thought, championed primarily by those such as Google’s Larry Page, who stand to make a lot of money from the ongoing digitization and automation of just about everything, that the elimination of jobs concurrent with a rise in productivity will lead to a leisure class freed from work. Leaving aside questions about how these lucky folks will house and feed themselves, the belief that most people would like nothing more than to be able to spend all day in their pajamas watching TV—which turns out to be what many “nonemployed” men do—sorely misconstrues the value of work, even work that might appear to an outsider to be less than fulfilling. Stated simply: work confers identity. When Dublin City University professor Michael Doherty surveyed Irish workers, including those who stocked grocery shelves and drove city buses, to find out if work continues to be “a significant locus of personal identity,” even at a time when employment itself is less secure, he concluded that “the findings of this research can be summed up in the succinct phrase: ‘work matters.’”

How much it matters may not be quantifiable, but in an essay in The New York Times, Dean Baker, the codirector of the Center for Economic and Policy Research, noted that there was

a 50 to 100 percent increase in death rates for older male workers in the years immediately following a job loss, if they previously had been consistently employed.

One reason was suggested in a study by Mihaly Csikszentmihalyi, the author of Flow: The Psychology of Optimal Experience (1990), who found, Carr reports, that “people were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours.”


The Electra and Oedipus of the Apollo space program, Oriana Fallaci and Norman Mailer were two writers with egos massive enough to observe humankind’s mission to the Moon not only as material for New Journalism reportage of a historical quest but also as a backdrop to investigations of their own psyches. In 1968, two years after Fallaci published If the Sun Dies… and the year before Mailer stormed through a series of long-form articles for Life magazine that became Of a Fire on the Moon, the pair sat down for an interview with Fallaci serving as the inquisitor. In Mailer’s face–“noble and vulgar,” she called it–Fallaci claimed to be searching for America. It actually wasn’t a bad place to look: Like his country, Mailer could be at turns soaringly brilliant and shockingly brutal–and completely delusional about his behavior regarding the latter. His remarks about domestic violence, for instance, were beyond horrifying, and they unfortunately weren’t merely macho showboating. The discussion opened Fallaci’s collection of (mostly) non-political interrogations, The Egotists. Three excerpts follow.

_____________________________

 

Oriana Fallaci:

The problem I want to talk about is a difficult one, but we have to deal with it. The fact is we Europeans used to love you Americans. When you came to liberate us twenty years ago, we used to look up to you as if you were angels. And now many of us don’t love you anymore; indeed some hate you. Today the United States might be the most hated country in the world.

Norman Mailer:

You used to love us because love is hope, and we Americans were your hope. And also, perhaps, because twenty years ago we were a better people, although not as good as you believed then–the seeds of the present ugliness were already there. The soldiers with whom I fought in the Pacific, for example, were a little better than the ones who are fighting now in Vietnam, but not by much. We were quite brutal even then. One could write a novel about Vietnam along the lines of The Naked and the Dead, and the characters would not need to be worse than they are in the book. The fact is that you have lost the hope you have vested in us, and so you have lost your love; therefore you see us in a much worse light than you did before, and you don’t understand that the roots of our ugliness are the old ones. It is true that the evil forces in America have triumphed only after the war–with the enormous growth of corporations and the transformation of man into mass-man, the alienation of men from their own existence–but these forces were already there in Roosevelt’s time. Roosevelt, you see, was a great President, but he wasn’t a great thinker. Indeed, he was a very superficial one. When he took power, America stood at a crossroad; either a proletarian revolution would take place or capitalism would enter a new phase. What happened was that capitalism took a new turn, transforming itself into a subtle elaboration of state capitalism–it is not by chance that the large corporations in effect belong to the government. They belong to the right. And just as the Stalinists have murdered Marxism, so these bastards of the right are now destroying what is good in American life. They are the same people who build the expressways, who cut the trees, who pollute the air and the water, who transform life into a huge commodity.

Oriana Fallaci:

We Europeans are also very good at this. I mean this is not done by only right-wing Americans.

Norman Mailer:

Of course. It is a worldwide process. But its leader is America, and this is why we are hated. We are the leaders of the technological revolution that is taking over the twentieth century, the electronic revolution that is dehumanizing mankind.•

_____________________________

 

Norman Mailer:

I still have the hope you seem to have lost. Because of the youth. Some of them are subhuman, but most of them are intelligent.

Oriana Fallaci:

That is true. But they are also stuffed with drugs, violence, LSD. Does that help your hoping?

Norman Mailer:

Theirs is an extraordinarily complex generation to live in. The best thing I can say about them is that I can’t understand them. The previous generation, the one fifteen years ago, was so predictable, without surprises. This one is a continuing surprise. I watch the young people of today, I listen to them, and I realize that I’m not twenty years older than they are but a hundred. Perhaps because in five years they went through changes that usually take half a century to complete, their intelligence has been speeded up so incredibly that there is no contact between them and the generation around thirty. Not to speak of those around forty or fifty. Yes, I know that this does not happen only in America; this too is a global process. But the psychology of American youth is more modern than that of any other group in the world; it belongs not to 1967 but to 2027. If God could see what would happen in the future–as he perhaps does–he would see people everywhere acting and thinking in 2027 as American youth do now. It’s true they take drugs. But they don’t take the old drugs such as heroin and cocaine that produce only physical reactions and sensations and dull you at the same time. They take LSD, a drug that can help you explore your mind. Now let’s get this straight: I can’t justify the use of LSD. I know too well that you don’t get something for nothing, and it may well be that we’ll pay a tragic price for LSD: it seems that it can break the membrane of the chromosomes in the cells and produce who knows what damage in future children. But LSD is part of a search, a desperate search, as if all these young people felt at the same time the need to explore as soon as possible their minds so as to avoid a catastrophe. Technology has stripped our minds until we have become like pygmies driving chariots drawn by dinosaurs. Now, if we want to keep the dinosaurs in harness, our minds will have to develop at a forced pace, which will require a frightening effort. The young have felt the need to harness the dinosaurs, and if they have found the wrong means, it’s still better than nothing. My fear had been that America was slowly freezing and hardening herself in a pygmy’s sleep. But no, she’s awake.•

_____________________________

 
 

Norman Mailer:

Damn it, I don’t like violence. But there’s something I like even less, and that’s a need for security. It smells of the grave and forces you to react with blood. 

Oriana Fallaci:

You dislike violence? You who knifed a wife and can’t miss a boxing match?

Norman Mailer:

The knife in my wife’s belly was a crime. It was a grave crime, but it had nothing to do with violence. And as for the fights, well, boxing is not violence. It’s a conversation, an exchange between two men who talk to each other with their hands instead of their voices: hitting at the ear, the nose, the mouth, the belly, instead of hitting at each other’s minds. Boxing is a noble art. When a man fights in a ring, he is not expressing brutality. He expresses a complex, subtle nature like that of a true intellectual, a real aristocrat. A pugilist is less brutal, or not at all brutal after a fight, because with his fists he transforms violence into something beautiful, noble and disciplined. It’s a real triumph of the spirit. No, I’m not violent. To be violent means to pick fights, and I can’t remember ever having started a fight. Nor can I remember ever having hit a woman–a strange woman, I mean. I may have hit a wife, but that’s different. If you are married you have two choices: either you beat your wife, or you don’t. Some people live their whole life without ever beating her, others maybe beat her once and thereon are labeled “violent.” I like to marry women whom I can beat once in a while, and who fight back. All my wives have been very good fighters. Perhaps I need women who are capable of violence, to offset my own. Am I not American, after all? But the act of hitting is hateful because it implies a judgement, and judgement itself is hateful. Not that I think of myself as being a good man in the Christian sense. But at certain times I have a clear consciousness of what is good and what is evil, and then my concept of the good resembles that of the Christian.•


Pushing back at Bill Gates’ favorite book of the last decade, Steven Pinker’s The Better Angels of Our Nature, philosopher John Gray argues in the Guardian that those who believe global violence to be on the wane are using accounting that’s too messy and theories too neat. We assign violence to backwardness when the cutting edge has the potential to be the sharpest of all. The essay comes from Gray’s new book, The Soul of the Marionette. An excerpt:

There is something repellently absurd in the notion that war is a vice of “backward” peoples. Destroying some of the most refined civilisations that have ever existed, the wars that ravaged south-east Asia in the second world war and the decades that followed were the work of colonial powers. One of the causes of the genocide in Rwanda was the segregation of the population by German and Belgian imperialism. Unending war in the Congo has been fuelled by western demand for the country’s natural resources. If violence has dwindled in advanced societies, one reason may be that they have exported it.

Then again, the idea that violence is declining in the most highly developed countries is questionable. Judged by accepted standards, the United States is the most advanced society in the world. According to many estimates the US also has the highest rate of incarceration, some way ahead of China and Russia, for example. Around a quarter of all the world’s prisoners are held in American jails, many for exceptionally long periods. Black people are disproportionately represented, many prisoners are mentally ill and growing numbers are aged and infirm. Imprisonment in America involves continuous risk of assault by other prisoners. There is the threat of long periods spent in solitary confinement, sometimes (as in “supermax” facilities, where something like Bentham’s Panopticon has been constructed) for indefinite periods – a type of treatment that has been reasonably classified as torture. Cruel and unusual punishments involving flogging and mutilation may have been abolished in many countries, but, along with unprecedented levels of mass incarceration, the practice of torture seems to be integral to the functioning of the world’s most advanced state.

It may not be an accident that torture is often deployed in the special operations that have replaced more traditional types of warfare. The extension of counter-terrorism to include assassination by unaccountable mercenaries and remote-controlled killing by drones is part of this shift. A metamorphosis in the nature of war is under way, which is global in reach. With the state of Iraq in ruins as a result of US-led regime change, a third of the country is controlled by Isis, which is able to inflict genocidal attacks on Yazidis and wage a campaign of terror on Christians with near-impunity. In Nigeria, the Islamist militias of Boko Haram practise a type of warfare featuring mass killing of civilians, razing of towns and villages and sexual enslavement of women and children. In Europe, targeted killing of journalists, artists and Jews in Paris and Copenhagen embodies a type of warfare that refuses to recognise any distinction between combatants and civilians. Whether they accept the fact or not, advanced societies have become terrains of violent conflict. Rather than war declining, the difference between peace and war has been fatally blurred.

Deaths on the battlefield have fallen and may continue to fall. From one angle this can be seen as an advancing condition of peace. From another point of view that looks at the variety and intensity with which violence is being employed, the Long Peace can be described as a condition of perpetual conflict.

***

Certainly the figures used by Pinker and others are murky, leaving a vast range of casualties of violence unaccounted for.•


Michael Tennesen is a glass-half-full kind of guy. The author of the newly published The Next Species: The Future of Evolution in the Aftermath of Man tells Lindsay Abrams of Salon that something may render humans extinct (his guess: overpopulation), but it’s not that big a deal. Maybe something less shitty will come along and replace us.

A tangent before the interview excerpt: I’ve heard a million times that no one reads anymore and that Amazon has destroyed publishing and that books are dead, but have you noticed how one great title after another keeps emerging, almost more than it’s possible to keep up with? Something there doesn’t compute.

The interview excerpt:

Question:

A lot of us look at these studies about pollution and climate change and extinction on a very day-by-day, headline basis. What was the value for you of stepping back and taking a more pulled-back, planetary perspective on these issues?

Michael Tennesen:

I was influenced by a paper that Anthony Barnosky from the University of California at Berkeley wrote, about his idea that we are entering a mass extinction event. People who study life on Earth think that extinction has a dual side: it could be a catastrophe or it could be an opportunity. The comet that fell out of the sky at the end of the Cretaceous period knocked out the dinosaurs, but made way for mammals and man.

So I’m trying to look at what can happen next. And to get an idea of what can happen next, I kind of had to pull back and look at the history of life on Earth with the idea: how does life recover from catastrophe? What things can you see in both events that might possibly be repeated in the future?  I wanted to look at the whole concept. There was a book by Alan Weisman, The World Without Us, where he talked about what it would be like tomorrow if man disappeared and how long it would take for man’s infrastructure to come down, for New York to fall.  I just wanted to look at it from more of a reality standpoint: What would the biology be like in such an event?

Question:

When you’re looking back at some of these lessons we can learn from past mass extinctions, what are some of the most important things you came across, that we should be paying attention to?

Michael Tennesen:

If you look at the past, the driver of four out of the five mass extinctions has been carbon dioxide. I went to Guadalupe National Park and took a hike with the national park biologist Jonena Hearst to Capitan Reef, which was just this explosion of life that existed back in the Permian Era, 250 million years ago, just before the Permian extinction. It showed just how susceptible life is to chemicals in the environment, and the litany of things that was going on during the Permian extinction, which was the greatest extinction we’ve ever had: 90 percent of life was knocked out of the ocean; 70 to 75 percent on land. The high CO2 content and greenhouse gases and other problems — sulfur dioxide release, major changes in the ocean currents — these are some of the things we’re dealing with now. I don’t know if we’re going to be heading into that massive of an event, but there are lessons there. A lot of people want to go, “Well, what’s CO2? What’s the big deal?” It’s 400 parts per million. That’s a lot.

Question:

As you said, there is sort of a more optimistic way of looking at mass extinction, because there are some positive potential outcomes…

Michael Tennesen:

In an extinction event, you’ve got a new playing board. I went up to Mt. St. Helens and looked at the land around that volcano. They’ve actually separated a portion of the volcanic area as a natural experiment to see how life would come back. Nature actually does a pretty fabulous job pretty quickly.•


Speaking about Going Clear, Andrew O’Hehir conducts a new Salon Q&A with Lawrence Wright. Many religions begin as bizarre cults and only survive if they can (mostly) shed the weirdness and stabilize, the sideshow far from the middle ring. Wright believes that could happen with Scientology. An excerpt:

Andrew O’Hehir:

You just said that you think this film could provoke a crisis that might help Scientology. I think it’s useful to point out, as you have done many times, that you did not actually set out to do a gotcha or an exposé.

Lawrence Wright:

Why bother? It’s the most stigmatized religion in America. An exposé, so what? But it is really interesting to understand why people are drawn in to the church. What do they get out of it and why do they stay? If you can understand that, in reference to a belief system that most people regard as very bizarre and has a reputation for being incredibly vindictive and litigious, then you might understand other social and religious and political movements that arise and take very good, kind, idealistic, intelligent, skeptical people and turn them into people they wouldn’t otherwise recognize.

Andrew O’Hehir:

The larger question here that you’re beginning to hint at is what makes a religion a religion? What does that word mean? The IRS has its own ideas, but …

Lawrence Wright:

Let’s start with the IRS because they’re the only agency empowered to make this distinction. It’s not exactly stocked full of theologians either. The way they determined that Scientology was a religion was to make a deal, because they were under legal siege of 2,400 lawsuits. Essentially, Scientology bludgeoned them into this tax exemption, which now denominates them as a religion. Previously, they were seen as a business enterprise and that’s the way they are seen in some European countries. Also, they are seen as a cult or a sect in Europe. But we call them a religion and I’m willing to accept that. It stretches the boundaries, clearly, but if you think of a religion having a set of scriptures – well L. Ron Hubbard still holds the Guinness record for the number of titles by a single author, as far as I know, more than 1,000. It’s a record that’s very hard to eclipse. Everything he wrote is considered a scripture by Scientology, even his novels.

Andrew O’Hehir:

Really? Battlefield Earth is a work of scripture?

Lawrence Wright:

Yes, it’s all scripture. It’s tax-exempt. There’s a huge body of work, not all of it fiction, having to do with ethics and psychology and so on that the church considers its literature. It functions as a community. Really, a religion is only separated from the rest of society by a circle of beliefs. So in that sense, sometimes the stranger the beliefs and the more exotic, the more bound together the community inside that circle is, and I think that’s true of Scientology. There is an origin story that may be a little bit bizarre, but bizarre beliefs are common in religion because religion is a belief in irrational things.•


Living in an endlessly public, hyperconnected world may be, perhaps, an evolutionary necessity, but that doesn’t mean it isn’t the root cause of a global mismatch disease, that it isn’t bad for us on the granular level. You and I, remember, we don’t amount to a hill of beans in this crazy world. There’s something medieval in the new order, the way privacy has vanished and judgement is ubiquitous. But unlike during the Middle Ages, we’re now not exposed to just the village but to the entire Global Village. What effect does that have? From Yuval Noah Harari’s Sapiens:

The imagined order is embedded in the material world. Though the imagined order exists only in our minds, it can be woven into the material reality around us, and even set in stone. Most Westerners today believe in individualism. They believe that every human is an individual, whose worth does not depend on what other people think of him or her. Each of us has within ourselves a brilliant ray of light that gives value and meaning to our lives. In modern Western schools teachers and parents tell children that if their classmates make fun of them, they should ignore it. Only they themselves, not others, know their true worth.

In modern architecture, this myth leaps out of the imagination to take shape in stone and mortar. The ideal modern house is divided into many small rooms so that each child can have a private space, hidden from view, providing for maximum autonomy. This private room almost invariably has a door, and in many households it is accepted practice for the child to close, and perhaps lock, the door. Even parents are forbidden to enter without knocking and asking permission. The room is decorated as the child sees fit, with rock-star posters on the wall and dirty socks on the floor. Somebody growing up in such a space cannot help but imagine himself ‘an individual’, his true worth emanating from within rather than from without.

Medieval noblemen did not believe in individualism. Someone’s worth was determined by their place in the social hierarchy, and by what other people said about them. Being laughed at was a horrible indignity. Noblemen taught their children to protect their good name whatever the cost. Like modern individualism, the medieval value system left the imagination and was manifested in the stone of medieval castles. The castle rarely contained private rooms for children (or anyone else, for that matter). The teenage son of a medieval baron did not have a private room on the castle’s second floor, with posters of Richard the Lionheart and King Arthur on the walls and a locked door that his parents were not allowed to open. He slept alongside many other youths in a large hall. He was always on display and always had to take into account what others saw and said. Someone growing up in such conditions naturally concluded that a man’s true worth was determined by his place in the social hierarchy and by what other people said of him.•


The German postal system grew from the nation’s military courier apparatus to become a multifaceted marvel, contributing subsequently to networks all over the world, leaving its mark on Soviet socialism and American capitalism. It has a latter-day parallel, of course, in the Internet, which was incubated and nurtured by the U.S. Defense Department wing DARPA. The Financial Times has a passage from David Graeber’s The Utopia of Rules about the mixed blessing of bureaucracy, which allows for large-scale progress, making the unthinkable manageable, before beginning to succumb to its own weight, a sideshow giant who wows until his heart gives out. An excerpt:

All these fantasies of postal utopia now seem rather quaint. Today we usually associate national postal systems with the arrival of things we never wanted in the first place: utility bills, overdraft alerts, tax audits, one-time-only credit-card offers, charity appeals, and so on. Insofar as Americans have a popular image of postal workers, it has become increasingly squalid.

Yet at the same time that symbolic war was being waged on the postal service, something remarkably similar to the turn-of-the-century infatuation with the postal service was happening again. Let us summarise the story so far:

1. A new communications technology develops out of the military.

2. It spreads rapidly, radically reshaping everyday life.

3. It develops a reputation for dazzling efficiency.

4. Since it operates on non-market principles, it is quickly seized on by radicals as the first stirrings of a future, non-capitalist economic system already developing within the shell of the old.

5. Despite this, it quickly becomes the medium, too, for government surveillance and the dissemination of endless new forms of advertising and unwanted paperwork.

This mirrors the story of the internet. What is email but a giant, electronic, super-efficient post office? Has it not, too, created a sense of a new, remarkably effective form of cooperative economy emerging from within the shell of capitalism itself, even as it has deluged us with scams, spam and commercial offers, and enabled the government to spy on us in new and creative ways?

It seems significant that while both postal services and the internet emerge from the military, they could be seen as adopting military technologies to quintessential anti-military purposes. Here we have a way of taking stripped-down, minimalistic forms of action and communication typical of military systems and turning them into the invisible base on which everything they are not can be constructed: dreams, projects, declarations of love and passion, artistic effusions, subversive manifestos, or pretty much anything else.

But all this also implies that bureaucracy appeals to us — that it seems at its most liberating — precisely when it disappears: when it becomes so rational and reliable that we are able to just take it for granted that we can go to sleep on a bed of numbers and wake up with all those numbers still snugly in place.

In this sense, bureaucracy enchants when it can be seen as a species of what I like to call “poetic technology” — when mechanical forms of organisation, usually military in their ultimate inspiration, can be marshalled to the realisation of impossible visions: to create cities out of nothing, scale the heavens, make the desert bloom. For most of human history this kind of power was only available to the rulers of empires or commanders of conquering armies, so we might even speak here of a democratisation of despotism. Once, the privilege of waving one’s hand and having a vast invisible army of cogs and wheels organise themselves in such a way as to bring your whims into being was available only to the very most privileged few; in the modern world, it can be subdivided into millions of tiny portions and made available to everyone able to write a letter, or to flick a switch.•

 


Unless the People magazine archives are lying to me, the first mention of the word “computer” in the publication occurred in the April 4, 1977 edition. It was used in reference to Richard Dawkins’ publication of The Selfish Gene. An excerpt:

It looks like a scene in a mad-scientist movie, but Oxford’s Dr. Richard Dawkins is studying the response of female crickets to the computer-simulated mating calls of the male. Dawkins is a sociobiologist, one of a new breed of scientists who specialize in the biological causes of animal behavior. “I love to solve the intellectual problems of my specialty,” he says. “It’s the kind of game people like me play.”

Based on his studies, Dawkins, 36, has developed a theory about the survival of species. It is described in his book The Selfish Gene, which recently was published in the U.S. He says the seemingly “altruistic” acts of many species are the result of genes trying to perpetuate themselves. “Even man,” says Dawkins, “is a gene machine, a robot vehicle blindly programmed to preserve its selfish genes. Let us try to teach generosity and altruism. Let us understand what our selfish genes are up to because we may then have the chance to upset their designs—something no other species has ever aspired to.”•


America’s obituary has been written prematurely many times, and, no, fucking ISIS won’t be the death of us. There’s always hope for a bright future for the U.S. as long as our immigration policies aren’t guided by politicians pandering to xenophobic impulses. From an Economist review of Joseph Nye’s Is the American Century Over?:

Europe is hardly a plausible challenger. Though its economy and population are larger than America’s, the old continent is stagnating. In 1900 a quarter of the world’s people were European; by 2060 that figure could be just 6%, and a third of them will be over 65.

By 2025 India will be the most populous nation on Earth. It has copious “soft power”—a term Mr Nye coined—in its diaspora and popular culture. But only 63% of Indians are literate, and none of its universities is in the global top 100. India could only eclipse America if it were to form an anti-American alliance with China, reckons Mr Nye, but that is unlikely: Indians are well-disposed towards Washington and highly suspicious of Beijing.

China is the likeliest contender to be the next hyperpower: its army is the world’s largest and its economy will soon be. (In purchasing-power-parity terms, it already is.) But it will be decades before China is as rich or technologically sophisticated as America; indeed, it may never be. By 2030 China will have more elderly dependants than children, which will sap its vitality. It has yet to figure out how to change governments peacefully. And its soft power is feeble for a country of its size. It has few real friends or allies, unless you count North Korea and Zimbabwe.

Hu Jintao, the previous president, tried to increase China’s soft power by setting up “Confucius Institutes” to teach its language and culture. Yet such a strategy is unlikely to win hearts in, say, Manila, when China is bullying the Philippines over islands in the South China Sea. The staging of the 2008 Olympics in Beijing was a soft-power success, but was undercut by the jailing of Liu Xiaobo, a pro-democracy activist, and the resulting empty chair at the ceremony to award him the Nobel peace prize. “Marketing experts call this ‘stepping on your own message’,” says Mr Nye.•


David Graeber, who’s just published The Utopia of Rules, explaining to Elias Isquith of Salon why free markets don’t supplant bureaucracy but actually beget more of it:

Salon:

The idea that free-market policies create bureaucracies is pretty counterintuitive, at least for most Americans. So why is it the case that laissez-faire policy creates bureaucracy?

David Graeber:

Part of the reason is because in fact what we call the market is not really the market.

First of all, we have this idea that the market is a thing that just happens. This is the debate in the 19th century: market relations creeped up within feudalism and then it overthrew [feudalism]. So gradually the market is just the natural expression of human freedom; and since it regulates itself, it will gradually displace everything else and bring about a free society. Libertarians still think this.

In fact, if you look at what actually happens historically, this is just not true. Self-regulating markets were basically created with government intervention. It was a political project. Certain assumptions of how these things work just aren’t true. People don’t do wage labor if they have any choice, historically, for example. So in order to get a docile labor force, you have to create police and [a] large apparatus to ensure that the people you kick off the land actually will get the kinds of jobs you want them to … this is the very beginning of creating a market.

Basically, we assume that market relations are natural, but you need a huge institutional structure to make people behave the way that economists say they are “supposed” to behave. So, for example, think about the way the consumer market works. The market is supposed to work on grounds of pure competition. Nobody has moral ties to each other other than to obey the rules. But, on the other hand, people are supposed to do anything they can to get as much as possible off the other guy — but won’t simply steal the stuff or shoot the person.

Historically, that’s just silly; if you don’t care at all about a guy, you might as well steal his stuff. In fact, they’re encouraging people to act essentially how most human societies, historically, treated their enemies — but to still never resort to violence, trickery or theft. Obviously that’s not going to happen. You can only do that if you set up a very strictly enforced police force. That’s just one example.•


James Salter turned out some beautiful pieces for People magazine during that publication’s infancy, usually profiling other great writers of earlier generations who were living in some state of exile. (Earlier I posted a passage from his Graham Greene article.) In 1975, he coerced Vladimir Nabokov, living in Switzerland two years before his death, into grudgingly sitting for an interview, and recorded the writer’s dislike for many things: fame, hippies, Dostoevsky, etc. It’s not a portrait of only one novelist but also of a different time for writers in general, when one could still find pockets of a less-disposable age. An excerpt:

Novelists, like dictators, have long reigns. It is remarkable to think of Nabokov’s first book, a collection of love poems, appearing in his native Russia in 1914. Soon after, he and his family were forced to flee as a result of the Bolshevik uprising and the civil war. He took a degree at Cambridge and then settled in the émigré colony in Berlin. He wrote nine novels in Russian, beginning with Mary, in 1926, and including Glory, The Defense, and Laughter in the Dark. He had a certain reputation and a fully developed gift when he left for America in 1940 to lecture at Stanford. The war burst behind him.

Though his first novel written in English, The Real Life of Sebastian Knight, in 1941, went almost unnoticed, and his next, Bend Sinister, made minor ripples, the stunning Speak, Memory, an autobiography of his lost youth, attracted respectful attention. It was during the last part of 10 years at Cornell that he cruised the American West during the summers in a 1952 Buick, looking for butterflies, his wife driving and Nabokov beside her making notes as they journeyed through Wyoming, Utah, Arizona, the motels, the drugstores, the small towns. The result was Lolita, which at first was rejected everywhere, like many classics, and had to be published by the Olympia Press in Paris (Nabokov later quarreled with and abandoned his publisher, Maurice Girodias). A tremendous success and later a film directed by Stanley Kubrick, the book made the writer famous. Nabokov coquettishly demurs. “I am not a famous writer,” he says, “Lolita was a famous little girl. You know what it is to be a famous writer in Montreux? An American woman comes up on the street and cries out, ‘Mr. Malamud! I’d know you anywhere.’ ”

He is a man of celebrated prejudices. He abhors student activists, hippies, confessions, heart-to-heart talks. He never gives autographs. On his list of detested writers are some of the most brilliant who have ever lived: Cervantes, Dostoevsky, Faulkner and Henry James. His opinions are probably the most conservative, among important writers, of any since Evelyn Waugh’s. “You will die in dreadful pain and complete isolation,” his fellow exile, the Nobel Prize winner Ivan Bunin, told him. Far from pain these days and beyond isolation, Nabokov is frequently mentioned for that same award. “After all, you’re the secret pride of Russia,” he has written of someone unmistakably like himself. He is far from being cold or uncaring. Outraged at the arrest last year of the writer Maramzin, he sent this as yet unpublished cable to the Soviet writers’ union: “Am appalled to learn that yet another writer martyred just for being a writer. Maramzin’s immediate release indispensable to prevent an atrocious new crime.” The answer was silence.

Last year Nabokov published Look at the Harlequins!, his 37th book. It is the chronicle of a Russian émigré writer named Vadim Vadimych whose life, though he had four devastating wives, has many aspects that fascinate by their clear similarity to the life of Vladimir Vladimirovich. The typical Nabokovian fare is here in abundance, clever games of words, sly jokes, lofty knowledge, all as written by a “scornful and austere author, whose homework in Paris had never received its due.” It is probably one of the final steps toward a goal that so many lesser writers have striven to achieve: Nabokov has joined the current of history not by rushing to take part in political actions or appearing in the news but by quietly working for decades, a lifetime, until his voice seems as loud as the detested Stalin’s, almost as loud as the lies. Deprived of his own land, of his language, he has conquered something greater. As his aunt in Harlequins! told young Vadim, “Play! Invent the world! Invent reality!” Nabokov has done that. He has won.

“I get up at 6 o’clock,” he says. He dabs at his eyes. “I work until 9. Then we have breakfast together. Then I take a bath. Perhaps an hour’s work afterward. A walk, and then a delicious siesta for about two-and-a-half hours. And then three hours of work in the afternoon. In the summer we hunt butterflies.” They have a cook who comes to their apartment, or Véra does the cooking. “We do not attach too much importance to food or wine.” His favorite dish is bacon and eggs. They see no movies. They own no TV.

They have very few friends in Montreux, he admits. They prefer it that way. They never entertain. He doesn’t need friends who read books; rather, he likes bright people, “people who understand jokes.” Véra doesn’t laugh, he says resignedly. “She is married to one of the great clowns of all time, but she never laughs.”

The light is fading, there is no one else in the room or the room beyond. The hotel has many mirrors, some of them on doors, so it is like a house of illusion, part vision, part reflection, and rich with dreams.•


At the Forbes site, John Tamny, author of the forthcoming pop-culture-saturated book Popular Economics, argues that robots will be job creators, not killers, and breathlessly asserts that the Digital Revolution will follow the arc of the Industrial one. Perhaps. But there could be a very bumpy number of decades while that potential transition takes place. Although, as I’ve said before, you wouldn’t want to live in a country left behind in the race to greater AI.

But robots or no robots, here’s one job that should be created: someone to design a site for Forbes that isn’t a complete piece of shit. It’s really like Web 1.0 over there. The opening of Tamny’s reasoning:

As robots increasingly adopt human qualities, including those that allow them to replace actual human labor, economists are starting to worry.  As the Wall Street Journal reported last week, some “wonder if automation technology is near a tipping point, when machines finally master traits that have kept human workers irreplaceable.”

The fears of economists, politicians and workers themselves are way overdone.  They should embrace the rise of robots precisely because they love job creation.  As my upcoming book Popular Economics points out with regularity, abundant job creation is always and everywhere the happy result of technological advances that tautologically lead to job destruction.

Robots will ultimately be the biggest job creators simply because aggressive automation will free us up to do new work by virtue of it erasing toil that was once essential.  Lest we forget, there was a time in American history when just about everyone worked whether they wanted to or not — on farms — just to survive.  Thank goodness technology destroyed lots of agricultural work that freed Americans up to pursue a wide range of vocations off the farm.

With their evolution as labor inputs, robots bring the promise of new forms of work that will have us marveling at labor we wasted in the past, and that will make past job destroyers like wind, water, the cotton gin, the car, the internet and the computer seem small by comparison.  All the previously mentioned advances made lots of work redundant, but far from forcing us into breadlines, the destruction of certain forms of work occurred alongside the creation of totally new ways to earn a living.  Robots promise a beautiful multiple of the same.•


Allen Ginsberg was a great poet and performer, if a dubious person in other ways. Here he is in 1965 giving one of his rapturous readings at the Royal Albert Hall. 


Paul Krugman is continually taken to task for predicting in 1998 that the Internet would be no more important economically than the fax machine by 2005. Culturally, of course, this new medium has been a watershed event. But he had a point on some level: the Internet–and computers, more broadly–still disappoint from a productivity perspective. Either that or all conventional measurements are insufficient to gauge this new machine. At his Financial Times blog, Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s wonderful The Second Machine Age, wonders about the confusing state of contemporary economics. An excerpt:

The economy’s behaviour is puzzling these days. No matter what you think is going on, there are some facts — important ones — that don’t fit your theory well at all, and/or some important things left unexplained.

For example, if you believe that technological progress is reshaping the economy (as Erik and I do) then you’ve got to explain why productivity growth is so low. As Larry Summers pointed out on the first panel, strong labour productivity growth is the first thing you’d expect to see if tech progress really were taking off and reshaping the economy, disrupting industries, hollowing out the middle class, and so on. So why has it been so weak for the past 10 years? Is it because of mismeasurement? William Baumol’s “Cost Disease” (the idea that all the job growth has come in manual, low-productivity sectors)? Or is it that recent tech progress is in fact economically unimpressive, as Robert Gordon and others believe?

If you believe that tech progress has not been that significant, however, you’ve got to explain why labor’s share of income is declining around the world.•


In a belated London Review of Books assessment of The Second Machine Age and Average Is Over, John Lanchester doesn’t really break new ground in considering Deep Learning and technological unemployment, but in his customarily lucid and impressive prose he crystallizes how quickly AI may remake our lives and labor in the coming decades. Two passages follow: The opening, in which he charts the course of how the power of a supercomputer ended up inside a child’s toy in a few short years; and a sequence about the way automation obviates workers and exacerbates income inequality.

__________________________________

In 1996, in response to the 1992 Russo-American moratorium on nuclear testing, the US government started a programme called the Accelerated Strategic Computing Initiative. The suspension of testing had created a need to be able to run complex computer simulations of how old weapons were ageing, for safety reasons, and also – it’s a dangerous world out there! – to design new weapons without breaching the terms of the moratorium. To do that, ASCI needed more computing power than could be delivered by any existing machine. Its response was to commission a computer called ASCI Red, designed to be the first supercomputer to process more than one teraflop. A ‘flop’ is a floating point operation, i.e. a calculation involving numbers which include decimal points (these are computationally much more demanding than calculations involving binary ones and zeros). A teraflop is a trillion such calculations per second. Once Red was up and running at full speed, by 1997, it really was a specimen. Its power was such that it could process 1.8 teraflops. That’s 18 followed by 11 zeros. Red continued to be the most powerful supercomputer in the world until about the end of 2000.

I was playing on Red only yesterday – I wasn’t really, but I did have a go on a machine that can process 1.8 teraflops. This Red equivalent is called the PS3: it was launched by Sony in 2005 and went on sale in 2006. Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.

The force at work here is a principle known as Moore’s law. This isn’t really a law at all, but rather the extrapolation of an observation made by Gordon Moore, one of the founders of the computer chip company Intel. By 1965, Moore had noticed that silicon chips had for a number of years been getting more powerful, in relation to their price, at a remarkably consistent rate. He published a paper predicting that they would go on doing so ‘for at least ten years’. That might sound mild, but it was, as Erik Brynjolfsson and Andrew McAfee point out in their fascinating book, The Second Machine Age, actually a very bold statement, since it implied that by 1975, computer chips would be five hundred times more powerful for the same price. ‘Integrated circuits,’ Moore said, would ‘lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles and personal portable communications equipment’. Right on all three. If anything he was too cautious.•

__________________________________

Note that in this future world, productivity will go up sharply. Productivity is the amount produced per worker per hour. It is the single most important number in determining whether a country is getting richer or poorer. GDP gets more attention, but is often misleading, since other things being equal, GDP goes up when the population goes up: you can have rising GDP and falling living standards if the population is growing. Productivity is a more accurate measure of trends in living standards – or at least, it used to be. In recent decades, however, productivity has become disconnected from pay. The typical worker’s income in the US has barely gone up since 1979, and has actually fallen since 1999, while her productivity has gone up in a nice straightish line. The amount of work done per worker has gone up, but pay hasn’t. This means that the proceeds of increased profitability are accruing to capital rather than to labour. The culprit is not clear, but Brynjolfsson and McAfee argue, persuasively, that the force to blame is increased automation.
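
Lanchester’s aside about GDP and population is simple compounding, and worth seeing with numbers. A short sketch with hypothetical growth rates (the rates are mine, purely for illustration):

    # Rising GDP can coexist with falling living standards whenever
    # population grows faster than output does.
    gdp, population = 1.0, 1.0
    for year in range(10):
        gdp *= 1.02         # GDP up 2 per cent a year (hypothetical)
        population *= 1.03  # population up 3 per cent a year (hypothetical)
    print(round(gdp, 2))               # 1.22: GDP rose 22 per cent
    print(round(gdp / population, 2))  # 0.91: output per person fell about 9 per cent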

That is a worrying trend. Imagine an economy in which the 0.1 per cent own the machines, the rest of the 1 per cent manage their operation, and the 99 per cent either do the remaining scraps of unautomatable work, or are unemployed. That is the world implied by developments in productivity and automation. It is Pikettyworld, in which capital is increasingly triumphant over labour. We get a glimpse of it in those quarterly numbers from Apple, about which my robot colleague wrote so evocatively. Apple’s quarter was the most profitable of any company in history: $74.6 billion in turnover, and $18 billion in profit. Tim Cook, the boss of Apple, said that these numbers are ‘hard to comprehend’. He’s right: it’s hard to process the fact that the company sold 34,000 iPhones every hour for three months. Bravo – though we should think about the trends implied in those figures. For the sake of argument, say that Apple’s achievement is annualised, so their whole year is as much of an improvement on the one before as that quarter was. That would give them $89.9 billion in profits. In 1960, the most profitable company in the world’s biggest economy was General Motors. In today’s money, GM made $7.6 billion that year. It also employed 600,000 people. Today’s most profitable company employs 92,600. So where 600,000 workers would once generate $7.6 billion in profit, now 92,600 generate $89.9 billion, an improvement in profitability per worker of 76.65 times. Remember, this is pure profit for the company’s owners, after all workers have been paid. Capital isn’t just winning against labour: there’s no contest. If it were a boxing match, the referee would stop the fight.•
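
The 76.65 figure does check out against the numbers in that paragraph; a quick verification (the figures are Lanchester’s, the code is mine):

    # Profit per worker: GM in 1960 (in today's money) versus Apple,
    # using the annualised profit figure from the excerpt.
    gm_profit, gm_workers = 7.6e9, 600_000
    apple_profit, apple_workers = 89.9e9, 92_600
    ratio = (apple_profit / apple_workers) / (gm_profit / gm_workers)
    print(round(ratio, 2))  # 76.65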


I haven’t yet read Naomi Klein’s book, This Changes Everything: Capitalism vs. the Climate, the one that Elizabeth Kolbert took to task for not being bold enough. (Kolbert’s own volume on the topic, The Sixth Extinction, was one of my favorite books of 2014.) In an often-contentious Spiegel interview conducted by Klaus Brinkbäumer, Klein contends that capitalism and ecological sanity are incompatible and calls out supposedly green captains of industry like Michael Bloomberg and Richard Branson. An excerpt:

Spiegel:

The US and China finally agreed on an initial climate deal in 2014.

Naomi Klein:

Which is, of course, a good thing. But anything in the deal that could become painful won’t come into effect until Obama is out of office. Still, what has changed is that Obama said: “Our citizens are marching. We can’t ignore that.” The mass movements are important; they are having an impact. But to push our leaders to where they need to go, they need to grow even stronger.

Spiegel:

What should their goal be?

Naomi Klein:

Over the past 20 years, the extreme right, the complete freedom of oil companies and the freedom of the super wealthy 1 percent of society have become the political standard. We need to shift America’s political center from the right fringe back to where it belongs, the real center.

Spiegel:

Ms. Klein, that’s nonsense, because it’s illusory. You’re thinking far too broadly. If you want to first eliminate capitalism before coming up with a plan to save the climate, you know yourself that this won’t happen.

Naomi Klein:

Look, if you want to get depressed, there are plenty of reasons to do so. But you’re still wrong, because the fact is that focusing on supposedly achievable incremental changes like carbon trading and changing light bulbs has failed miserably. Part of that is because in most countries, the environmental movement remained elite, technocratic and supposedly politically neutral for two-and-a-half decades. We are seeing the result of this today: It has taken us in the wrong direction. Emissions are rising and climate change is here. Second, in the US, all the major legal and social transformations of the last 150 years were a consequence of mass social movements, be they for women, against slavery or for civil rights. We need this strength again, and quickly, because the cause of climate change is the political and economic system itself. The approach that you have is too technocratic and small.

Spiegel:

If you attempt to solve a specific problem by overturning the entire societal order, you won’t solve it. That’s a utopian fantasy.

Naomi Klein:

Not if societal order is the root of the problem. Viewed from another perspective, we’re literally swimming in examples of small solutions: There are green technologies, local laws, bilateral treaties and CO2 taxation. Why don’t we have all that at a global level?

Spiegel:

You’re saying that all the small steps — green technologies and CO2 taxation and the eco-behavior of individuals — are meaningless?

Naomi Klein:

No. We should all do what we can, of course. But we can’t delude ourselves that it’s enough. What I’m saying is that the small steps will remain too small if they don’t become a mass movement. We need an economic and political transformation, one based on stronger communities, sustainable jobs, greater regulation and a departure from this obsession with growth. That’s the good news. We have a real opportunity to solve many problems at once.•


You could argue that Tunisia’s uprising was the match that lit the Middle East, as some struggles reverberate beyond their borders because they speak to a widespread dissatisfaction. The Paris Commune was viewed this way by outsiders during the late 1800s. Via the lovely Delancey Place, a passage from James Green’s Death in the Haymarket about the American interpretation of the French uprising:

When the French army laid siege to Paris and hostilities began, the Chicago Tribune’s reporters covered the fighting much as they had during the American Civil War. Many Americans, notably Republican leaders like Senator Charles Sumner, identified with the citizens of Paris who were fighting to create their own republic against the forces of a corrupt regime whose leaders had surrendered abjectly to the Iron Chancellor and his Prussian forces.

As the crisis deepened, however, American newspapers increasingly portrayed the Parisians as communists who confiscated property and as atheists who closed churches. The brave citizens of Paris, first described as rugged democrats and true republicans, now seemed more akin to the uncivilized elements that threatened America — the ‘savage tribes’ of Indians on the plains and the ‘dangerous classes’ of tramps and criminals in the cities. When the Commune’s defenses broke down on May 21, 1871, the Chicago Tribune hailed the breach of the city walls. Comparing the Communards to the Comanches who raided the Texas frontier, its editors urged the ‘mowing down’ of rebellious Parisians ‘without compunction or hesitation.’

La semaine sanglante — the week of blood — had begun as regular army troops took the city street by street, executing citizen soldiers of the Parisian National Guard as soon as they surrendered. In retaliation, the Communards killed scores of hostages and burned large sections of the city to the ground. By the time the killing ended, at least 25,000 Parisians, including many unarmed citizens, had been slaughtered by French army troops.

These cataclysmic events in France struck Americans as amazing and distressing. The bloody disaster cried out for explanation. In response, a flood of interpretations appeared in the months following the civil war in France. Major illustrated weeklies published lurid drawings of Paris scenes, of buildings gutted by fire, monuments toppled, churches destroyed and citizens executed, including one showing the death of a ‘petroleuse’ — a red-capped, bare-breasted woman accused of incendiary acts. Cartoonist Thomas Nast drew a picture of what the Commune would look like in an American city. Instant histories were produced, along with dime novels, short stories, poems and then, later in the fall, theatricals and artistic representations in the form of panoramas.

News of the Commune seemed exotic to most Americans, but some commentators wondered if a phenomenon like this could appear in one of their great cities, such as New York or Chicago, where vast hordes of poor immigrants held mysterious views of America and harbored subversive elements in their midst.•


So sad to learn of Oliver Sacks’ terminal illness. I read The Man Who Mistook His Wife for a Hat at a young age, and I didn’t know what the hell to make of it, so stunned was I to find out that we’re not necessarily in control of our minds. In this piece of writing and so many others, Sacks examined the brain, that mysterious and scary thing, and because of his work as an essayist as well as a doctor, that organ is today a little less mysterious, a little less scary. It doesn’t mean he was always right, but how could anyone be when sailing in such dark waters? Sacks was accused sometimes of being a modern Barnum who used as diverting curiosities those with the misfortune of having minds that played tricks on them–even stranger tricks than the rest of us experience–and sometimes I cringed at the very personal things he would reveal about his subjects, but I always felt he strived to be ethical. We certainly live in an era when the freak show still thrives, albeit in a slickly produced form, but I don’t think that’s where Sacks’ work has ever lived. His prose and narrative abilities grew markedly during his career as he came to realize–be surprised by?–his own brain’s capabilities. I hope he has a peaceful and productive final chapter.

A profile of Sacks by Diane Sawyer with good 1969 footage of his work as a young doctor.


Audio of Oriana Fallaci being interviewed in 1972 by Stephen Banker at the time of the publication of Nothing, and So Be It, her account of the dangerous season she spent as a war correspondent in Vietnam.


I’ll be perplexed if Yuval Noah Harari’s great book Sapiens: A Brief History of Humankind, just published in the U.S., doesn’t wind up on many “Best of 2015” lists at the end of the year. It’s such an amazing, audacious, lucid thing. Salon has run a piece from the volume. Here’s an excerpt about the seemingly eternal search for eternity:

The Gilgamesh Project

Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live for ever here on earth. The best minds were busy giving meaning to death, not trying to escape it.

That is the theme of the most ancient myth to come down to us – the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men and finding his way into the underworld. There he shattered the mysterious “stone things” of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.

Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures – a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.

Until recently, you would not have heard scientists, or anyone else, speak so bluntly. ‘Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis and Alzheimer’s disease,’ they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilisation methods, this minor flesh wound turned infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.

As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anaesthetics. The first anaesthetics – ether, chloroform and morphine – entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted to the army were often sent to serve in the medical corps, because surgery required little more than knowing your way with knives and saws.

In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections and sophisticated operations save us from a spate of illnesses and injuries that once dealt an inescapable death sentence. They also protect us against countless daily aches and ailments, which premodern people simply accepted as part of life. The average life expectancy jumped from around twenty-five to forty years, to around sixty-seven in the entire world, and to around eighty years in the developed world.

Death suffered its worst setbacks in the arena of child mortality. Until the twentieth century, between a quarter and a third of the children of agricultural societies never reached adulthood. Most succumbed to childhood diseases such as diphtheria, measles and smallpox. In seventeenth-century England, 150 out of every 1,000 newborns died during their first year, and a third of all children were dead before they reached fifteen. Today, only five out of 1,000 English babies die during their first year, and only seven out of 1,000 die before age fifteen.

We can better grasp the full impact of these figures by setting aside statistics and telling some stories. A good example is the family of King Edward I of England (1237–1307) and his wife, Queen Eleanor (1241–90). Their children enjoyed the best conditions and the most nurturing surroundings that could be provided in medieval Europe. They lived in palaces, ate as much food as they liked, had plenty of warm clothing, well-stocked fireplaces, the cleanest water available, an army of servants and the best doctors. The sources mention sixteen children that Queen Eleanor bore between 1255 and 1284:

1. An anonymous daughter, born in 1255, died at birth.

2. A daughter, Catherine, died either at age one or age three.

3. A daughter, Joan, died at six months.

4. A son, John, died at age five.

5. A son, Henry, died at age six.

6. A daughter, Eleanor, died at age twenty-nine.

7. An anonymous daughter died at five months.

8. A daughter, Joan, died at age thirty-five.

9. A son, Alphonso, died at age ten.

10. A daughter, Margaret, died at age fifty-eight.

11. A daughter, Berengeria, died at age two.

12. An anonymous daughter died shortly after birth.

13. A daughter, Mary, died at age fifty-three.

14. An anonymous son died shortly after birth.

15. A daughter, Elizabeth, died at age thirty-four.

16. A son, Edward.

The youngest, Edward, was the first of the boys to survive the dangerous years of childhood, and at his father’s death he ascended the English throne as King Edward II. In other words, it took Eleanor sixteen tries to carry out the most fundamental mission of an English queen – to provide her husband with a male heir. Edward II’s mother must have been a woman of exceptional patience and fortitude. Not so the woman Edward chose for his wife, Isabella of France. She had him murdered when he was forty-three.

To the best of our knowledge, Eleanor and Edward I were a healthy couple and passed no fatal hereditary illnesses on to their children. Nevertheless, ten out of the sixteen – 62 per cent – died during childhood. Only six managed to live beyond the age of eleven, and only three – just 18 per cent – lived beyond the age of forty. In addition to these births, Eleanor most likely had a number of pregnancies that ended in miscarriage. On average, Edward and Eleanor lost a child every three years, ten children one after another. It’s nearly impossible for a parent today to imagine such loss.
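
Harari’s percentages follow directly from the list above; a small Python tally of my own, with the ages as given (months approximated as fractions, Catherine counted at three, and Edward II’s death at forty-three taken from the surrounding text):

    # Ages at death of the sixteen children; 0 means at or just after birth.
    ages = [0, 3, 0.5, 5, 6, 29, 0.4, 35, 10, 58, 2, 0, 53, 0, 34, 43]
    died_in_childhood = sum(a <= 10 for a in ages)
    lived_past_eleven = sum(a > 11 for a in ages)
    lived_past_forty = sum(a > 40 for a in ages)
    print(died_in_childhood, died_in_childhood / 16)  # 10, 0.625: the '62 per cent'
    print(lived_past_eleven)                          # 6
    print(lived_past_forty, lived_past_forty / 16)    # 3, 0.1875: the 'just 18 per cent'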

How long will the Gilgamesh Project – the quest for immortality – take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently managed to double the average life expectancy of Caenorhabditis elegans worms. Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots, who would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells and even reverse ageing processes. A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).•


Marc Goodman, law-enforcement veteran and author of the forthcoming book Future Crimes, sat for an interview with Jason Dorrier of Singularity Hub about the next wave of nefariousness, Internet-enabled and large-scale. A question about the potential for peril, writ relatively small with Narrow AI and on a grand scale if we create Artificial General Intelligence. An excerpt:

Question:

Elon Musk, Stephen Hawking, and Bill Gates have expressed concern about artificial general intelligence. It’s a hotly debated topic. Might AI be our “final invention?” It seems even narrow AI in the wrong hands might be problematic.

Marc Goodman:

I would add Marc Goodman to that list. To be clear, I think AI, narrow AI, and the agents around us have tremendous opportunity to be incredibly useful. We’re using AI every day, whether it’s in our GPS devices, in our Netflix recommendations, what we see on our Facebook status updates and streams—all of that is controlled via AI.

With regard to AGI, however, I put myself firmly in the camp of concern.

Historically, whatever the tool has been, people have tried to use it for their own power. Of course, typically, that doesn’t mean that the tool itself is bad. Fire wasn’t bad. It could cook your meals and keep you warm at night. It comes down to how we use it. But AGI is different. The challenge with AGI is that once we create it, it may be out of our hands entirely, and that could certainly make it our “final invention.”

I’ll also point out that there are concerns about narrow AI too.

We’ve seen examples of criminals using narrow AI in some fascinating ways. In one case, a University of Florida student was accused of killing his college roommate for dating his girlfriend. Now, this 18-year-old freshman had a conundrum. What does he do with the dead body before him? Well, he had never murdered anybody before, and he had no idea how to dispose of the body. So, he asked Siri. The answers Siri returned? Mine, swamp, and open field, among others.

So, Siri answered his question. This 18-year-old kid unknowingly used narrow AI as an accomplice after the fact in his homicide. We’ll see many more examples of this moving forward. In the book, I say we’re leaving the world of Bonnie and Clyde and joining the world of Siri and Clyde.•


If you read this blog regularly, you know I adored David Carr, someone I never met except through his writing. His success was improbable, coming as it did after he survived a pitiless drug addiction–a surrender and an onslaught. Almost as unlikely was that he maintained his soulfulness inside a corporate behemoth like the New York Times, appearing unchanged, unreconstructed, unvanquished, perhaps inoculated against the plague of phoniness by the earlier taste of poison. It doesn’t surprise me that in his final column he hoped for a second chance for Brian Williams. Carr himself was one of the best second chances ever. He will be missed. From his book The Night of the Gun, in which he searched for a face that was strange yet his own:

Am I a lunatic? Yes. When am I going to cut this stuff out? Apparently never. Does God see me right now? Yes. God sees everything, including the blind.

Trapped in drug-induced paranoia, I began to think of the police as God’s emissaries, arriving not to seek vengeance but a cease-fire, a truce that would put me up against a wall of well-deserved consequences, and the noncombatants, the children, out of harm’s way.

On this night — it was near the end — every hit sent out an alarm along my vibrating synapses. If the cops were coming — Any. Minute. Now. — I should be sitting out in front of the house. That way I could tell them that yes, there were drugs and paraphernalia in the house, but no guns. And there were four blameless children. They could put the bracelets on me, and, head bowed, I would solemnly lead them to the drugs, to the needles, to the pipes, to what was left of the money. And then some sweet-faced matrons would magically appear and scoop up those babies and take them to that safe, happy place. I had it all planned out.

I took another hit, and Barley and I walked out and sat on the steps. My eyes, my heart, the veins in my forehead, pulsed against the stillness of the night. And then they came. Six unmarked cars riding in formation with lights off, no cherries, just like I pictured. It’s on.

A mix of uniforms and plainclothes got out, and in the weak light of the street, I could see long guns held at 45-degree angles. I was oddly proud that I was on the steps, that I now stood between my children and the dark fruits of the life I had chosen. I had made the right move after endless wrong ones. And then they turned and went to the house across the street.

Much yelling. “Facedown! Hug the carpet! No sudden movements!” A guy dropped out of the second-floor window in just gym shorts, but they were waiting. More yelling and then quiet. I went back inside the house and watched the rest of it play out through the corner of the blind. Their work done, the cops loaded several cuffed people into a van. I let go of the blind and got back down to business. It wasn’t my turn.

Twenty years later, now sober and back for a look at my past, I sat outside that house on Oliver Avenue on a hot summer day in a rental car, staring long and hard to make sense of what had and had not happened there. The neighborhood had turned over from white to black, but it was pretty much the same. Nice lawns, lots of kids, no evidence of the mayhem that had gone on inside. Sitting there in a suit with a nice job in a city far away and those twins on their way to college, I almost would have thought I’d made it up. But I don’t think I did. While I sat there giving my past the once over, someone lifted up the corner of the blind in the living-room window. It was time to go.•


Donald Trump, a human oil spill, apparently requested that the Obama Administration make him czar of the BP cleanup effort, according to David Axelrod’s new book. From Amy Chozick in the New York Times:

Question:

Some anecdotes in the book make clear that, as a senior adviser to the president, you dealt with some odd requests. Donald Trump asked you to put him in charge of cleaning up the BP oil spill.

David Axelrod:

You owe it to the president to be polite and to give folks a hearing. But even as I was going through these conversations, I had this sense of surreality. I was watching the scene and thinking, Man, this is really bizarre. I gotta write about this someday. Nobody will believe this.•

Tags: , , ,
