
In a smart Guardian piece by Hannah Devlin, novelist Kazuo Ishiguro wonders if liberal democracy will be doomed by a new type of wealth inequality, the biological kind, in which gene editing and other tools make enhancement and improved health available only to the haves. It’s likewise a major theme in Yuval Noah Harari’s second book, Homo Deus, which wonders where we’ll take biotech, or, perhaps more likely, where it will take us. Ishiguro isn’t a fatalist on the topic; he encourages more public engagement.

Some believe exorbitantly priced technologies created for the moneyed few will rapidly decrease in price and make their way into everyone’s pockets (and bodies and brains), the same distribution path blazed by consumer electronics. That’s possible but hardly guaranteed. Of course, as the chilling political winds of 2016 have demonstrated, liberal democracy may be too fragile to even survive to that point.

The opening:

Imagine a two-tiered society with elite citizens, genetically engineered to be smarter, healthier and to live longer, and an underclass of biologically run-of-the-mill humans. It sounds like the plot of a dystopian novel, but the world could be sleepwalking towards this scenario, according to one of Britain’s most celebrated writers.

Kazuo Ishiguro argues that the social changes unleashed by gene editing technologies, such as Crispr, could undermine core human values.

“We’re going into a territory where a lot of the ways in which we have organised our societies will suddenly look a bit redundant,” he said. “In liberal democracies, we have this idea that human beings are basically equal in some very fundamental way. We’re coming close to the point where we can, objectively in some sense, create people who are superior to others.”•



While lessons can be learned from utilitarian philosophy, taken as a whole it seems to run counter to human nature, assuming as it does that we can constantly view the world with clinical, calculated precision. That’s just not so.

Moral philosopher Peter Singer believes utilitarianism should be applied by those considering working in the Trump Administration. He suggests that if you feel you can mitigate the considerable pain about to be administered to our most vulnerable, take the job, but also be prepared to resign if you’re forced to contribute to evil.

That might work on paper but not so much in real life, and not just for pragmatic concerns like someone being unable to quit a position because they need to care for their family. If you’re on the inside of Team Trump, you will in some way cause harm. Even if you believe another would be doing more damage, there’s no way to know how your small efforts to protect will be used to do a much greater bad a few steps down the line.

In her New York Review of Books essay “Trump: The Choice We Face,” Masha Gessen reflects on the thorny compromise of Jewish Councils during the age of Nazism, concluding that “we need to shift from realist to moral reasoning.” That seems a more apt and human response to our moment.

In a wonderful Five Books interview, Nigel Warburton discusses his favorite philosophy titles from 2016, including Singer’s Ethics in the Real World. An excerpt:

Question:

Does [Singer’s] controversiality stem from his utilitarianism? Say in the article in this book, about Thabo Mbeki refusing to admit that HIV causes AIDS. He is saying, ‘OK Mbeki killed more people because of this view on HIV/AIDS than the entire apartheid regime did, so how do we compare these two?’ That’s what he is measuring up. Is that the kind of thing a utilitarian does, count up numbers of people killed?

Nigel Warburton:

There are lots of different forms of utilitarianism. The basic principle is that it focuses on the consequences of actions and not the intentions (though the intentions might have consequences as well — in terms of how other people perceive what you do if you express them, for instance).

This is a case of somebody who, without an explicit intention to bring about people’s deaths, through their actions has done so. Utilitarianism, traditionally, looks for a currency that can measure different actions through the probable consequences and plays off those different consequences. Weighing the consequences against each other is the basic benefit of utilitarianism.

You can work out the best course of action because it is the one with the best consequences, or most likely to have the best consequences. So, if you take that really seriously, if you had to choose a world without apartheid, or a world without this statement from Mbeki, the world without Mbeki would be better (in terms of lives lost), even if it had apartheid. There might be other negative consequences of apartheid, there certainly were, but just on that factor, a consequentialist approach would lead to that conclusion.

Obviously in some situations, the consequences are incredibly important. But the obsession with consequences can seem inhumane in many situations. It’s a kind of straightforward cost-benefit analysis, and when applied to people close to you, it seems incredibly cruel and lacking in compassion.

Question:

Is compassion not important then?

Nigel Warburton:

Utilitarians find a value in compassion, but they celebrate a clinical assessment of outcomes above all. I think, in a sense, that is both the strength and weakness of the approach.•


Despite the god-awful results of the recent American Presidential election, nightmares rarely play out in full, thankfully, but conjuring worst-case scenarios can speak truth to very justifiable fears.

Those who bought into the Trumpian nostalgia for a more alabaster America delivered the country to a cohort of profiteers, polluters and plutocrats that will leave the nation bleeding from the wherever. The Simon Cowell-ish strongman’s biggest supporters, older and whiter and working-class, will be hurt as badly as anyone with the Affordable Care Act, Medicare and Social Security resting in the hands of the merciless. Labor unions will also be attacked relentlessly and climate change ignored, further endangering this demographic, which already has little shelter from the storm.

With the selection of Tom Price as Health and Human Services Secretary, a war will also be waged on the poor. As Politico states: “Price wants to limit federal Medicaid spending to give states a lump sum, or block grant, and more control over how they could use it — a dream of conservative Republicans for years and a nightmare for advocates for the poor who fear many would lose coverage.” There is basis for grave concern.

Peter Frase’s newly published book, Four Futures: Life After Capitalism, looks at the potential paths ahead should automation obviate too many jobs. Such a revolution in production could be boon or bane and probably will ultimately reside somewhere in between, but in the author’s most dire prediction, a post-apocalyptic explosion of wealth inequality emerges, with some obscenely rich and the rest meat for crows. Tomorrow probably won’t work out that horribly, but the hard-right shift we’re likely to now endure makes it frighteningly easy to visualize.

From Ben Tarnoff at the Guardian:

There are far worse things than boredom, however. Frase’s fourth and final future, “exterminism,” is truly terrifying. Exterminism has the robots and scarcity of socialism, minus the egalitarianism. The result is a neo-feudal nightmare: the rich retreat to heavily fortified enclaves where the robots do all the work, and everyone else is trapped outside in the hot, soggy hell of a rapidly warming planet. “The great danger posed by the automation of production, in the context of a world of hierarchy and scarce resources,” Frase says, “is that it makes the great mass of people superfluous from the standpoint of the ruling elite.” The elite can always warehouse this surplus humanity in prisons and refugee camps. But at a certain point, the rich might find it more convenient to simply exterminate the poor altogether, now that they’re no longer needed as workers.

It is a testament both to Frase’s ability as a writer and the barbarism of our present moment that exterminism feels like the most realistic of his futures. I lost sleep over it. Yet he is careful to counsel his readers against despair.•



“No one wants to admit that our half-measures aren’t working and won’t work” is something that needed to be said by the Presidential candidates this election season in regard to the dying of the Industrial Age, the irreversible decline of manufacturing jobs and the potential attenuation of many other types of employment. No such discussion was had, however, as a vulgar Mussolini-Lampanelli clown shouted and taunted and danced. The future will likely arrive most ferociously for those very working-class people most drawn to his spectacle.

That quote was actually delivered to Sean Illing of Vox, in an interview with Andy Stern, former union president and current Universal Basic Income advocate who fears a Hunger Games future for most of America if policy doesn’t address the challenges that will attend widespread automation. The Raising the Floor author also addresses another perplexing topic: What if we get UBI and most people aren’t working? What would we do with all that free time? I almost shudder. Lots of people who have a life-or-death need for Medicare, Social Security and the Affordable Care Act just voted for a party desperate to be rid of those things. Imagine the trouble we could get into if food and shelter were assured.

An excerpt:

Question:

Let’s pivot from unions to universal basic income, which is a cardinal issue for you these days. In your book, Raising the Floor, you conclude that a UBI will eventually be necessary. Can you say, first, what UBI means and, second, why you think we need it?

Andy Stern:

A universal basic income is essentially giving every single working-age American a check every month, much like we do with social security for elderly people. It’s an unconditional stipend, as it were.

The reason it’s necessary is we’re now learning through lots of reputable research that technological change is accelerating, and that this process will continue to displace workers and terminate careers. A significant number of tasks now performed by humans will be performed by machines and artificial intelligence. We could very well see 5 million jobs eliminated by the end of the decade because of technology.

We’ve already seen Uber-deployed driverless cars in Pittsburgh, and driverless trucks will be deployed in the next five to six years — we’ve already seen them across Europe. The largest job in 29 states is driving a truck. There are 3 and a half million people who operate trucks and 5 million more who support them in various ways.

So there’s a tsunami of change on its way, and the question is twofold. One is how does America go through a transition to what will be I think an economy with far fewer jobs — particularly middle-class jobs? What policies will guide us through this transition? And second, what do we want this to look like on the other end?

I believe a UBI is a way to ease the transition, and it’s also a way to provide a floor for people — not necessarily a substitute for work, but a supplement to work that allows them to have a sense of economic security, have consumer buying power. We want to allow people to be entrepreneurs, to take risks and raise kids and do other things without turning the world into the Hunger Games.

Question:

Obviously you’re an advocate for a UBI, but I’d like to hear what you think is the most compelling counterargument against UBI.

Andy Stern:

Certainly our concept of work is problematic. This is a country in which people have not figured out what to do if they don’t work for money. I think there are many other ways that people potentially can work but, psychologically, the Protestant work ethic is embedded in the psyche of our country. The idea that someone would get something for nothing is anathema here. People that work feel like those who don’t shouldn’t be rewarded. It’s just an alien concept.•



A few decades ago, J.G. Ballard believed we were headed for a future in which fiction and reality would become reversed, people would disappear into their homes (and devices and selves), and we’d experience an age of shocks and sensations. Sounds familiar.

In a smart new Guardian essay about her father’s prescience, his daughter Bea writes that “when reality television became a massive new trend, my father was unsurprised.” Below is an excerpt from that piece followed by a few clips from Dr. Christopher Evans’ 1979 interview with the writer that was published in the UK version of Penthouse.


From Bea Ballard:

Where the developments in media technology have had the most profound impact is on the world of politics. I would love to know what my father would have to say about Donald Trump’s rapid political rise and his race for the White House. He would have found it fascinating. Given that he predicted a B-movie actor (Ronald Reagan) becoming president, I doubt he would have been surprised that a TV reality star ended up doing the same.

But it’s hard to imagine it happening in a world before television and social media. TV loves Trump’s soundbite style – his brashness and machismo. In the American Apprentice series, Trump sat in a boardroom similar to the Oval Office. It was a dramatic setting clearly designed to accentuate his power: the mahogany oak table, the lighting, the American flag in the background and the informed advisers conjure an image of Trump as pseudo-president.

Perhaps he was plausible as a leader to the American public because he had appeared in their living rooms for so long, playing a fictional commander. The idea of a powerful and decisive leader who dismisses people with a fierce “You’re fired!” can become a reality. In Britain, the recent European Union referendum was also played out like a TV show. The television debates were staged like a talent show, with the politicians on stage debating in short, soundbite style. One of the debates was held at Wembley Arena – in front of a vast crowd who either hollered approval or jeered, in the style of the Hunger Games films.•


From Evans:

On the transition from the Space Age to the Personal Computer Age:

J.G. Ballard:

In the summer of ’74 I remember standing out in my garden on a bright, clear night and watching a moving dot of light in the sky which I realised was Skylab. I remember thinking how fantastic it was that there were men up there, and I felt really quite moved as I watched it. Through my mind there even flashed a line from every Hollywood aviation movie of the 40s, ‘it takes guts to fly those machines.’ But I meant it. Then my neighbour came out into his garden to get something and I said, ‘Look, there’s Skylab,’ and he looked up and said, ‘Sky-what?’ And I realised that he didn’t know about it, and he wasn’t interested. No, from that moment there was no doubt in my mind that the space age was over.

Dr. Christopher Evans:

What is the explanation for this? Why are people so indifferent?

J.G. Ballard:

I think it’s because we’re at the climactic end of one huge age of technology which began with the Industrial Revolution and which lasted for about 200 years. We’re also at the beginning of a second, possibly even greater revolution, brought about by advances in computers and by the development of information-processing devices of incredible sophistication. It will be the era of artificial brains as opposed to artificial muscles, and right now we stand at the midpoint between these two huge epochs. Now it’s my belief that people, unconsciously perhaps, recognise this and also recognise that the space programme and the conflict between NASA and the Soviet space effort belonged to the first of these systems of technological exploration, and was therefore tied to the past instead of the future. Don’t misunderstand me – it was a magnificent achievement to put a man on the moon, but it was essentially nuts and bolts technology and therefore not qualitatively different from the kind of engineering that built the Queen Mary or wrapped railroads round the world in the 19th century. It was a technology that changed peoples lives in all kinds of ways, and to a most dramatic extent, but the space programme represented its fast guttering flicker.

__________________________

On the PC bringing the world into the home, from social to pornography:

Dr. Christopher Evans:

How do you see the future developing?

J.G. Ballard:

I see the future developing in just one way – towards the home. In fact I would say that if one had to categorise the future in one word, it would be that word ‘home.’ Just as the 20th century has been the age of mobility, largely through the motor car, so the next era will be one in which instead of having to seek out one’s adventures through travel, one creates them, in whatever form one chooses, in one’s home. The average individual won’t just have a tape recorder, a stereo HiFi, or a TV set. He’ll have all the resources of a modern TV studio at his fingertips, coupled with data processing devices of incredible sophistication and power. No longer will he have to accept the relatively small number of permutations of fantasy that the movie and TV companies serve up to him, but he will be able to generate whatever he pleases to suit his whim. In this way people will soon realise that they can maximise the future of their lives with new realms of social, sexual and personal relationships, all waiting to be experienced in terms of these electronic systems, and all this exploration will take place in their living rooms.

But there’s more to it than that. For the first time it will become truly possible to explore extensively and in depth the psychopathology of one’s own life without any fear of moral condemnation. Although we’ve seen a collapse of many taboos within the last decade or so, there are still aspects of existence which are not counted as being legitimate to explore or experience mainly because of their deleterious or irritating effects on other people. Now I’m not talking about criminally psychopathic acts, but what I would consider as the more traditional psychopathic deviancies. Many, perhaps most of these, need to be expressed in concrete forms, and their expression at present gets people into trouble. One can think of a million examples, but if your deviant impulses push you in the direction of molesting old ladies, or cutting girl’s pig tails off in bus queues, then, quite rightly, you find yourself in the local magistrates court if you succumb to them. And the reason for this is that you’re intruding on other people’s life space. But with the new multi-media potential of your own computerised TV studio, where limitless simulations can be played out in totally convincing style, one will be able to explore, in a wholly benign and harmless way, every type of impulse – impulses so deviant that they might have seemed, say to our parents, to be completely corrupt and degenerate.

__________________________

On media decentralization, the camera-saturated society, Reality TV, Slow TV:

Dr. Christopher Evans:

Will people really respond to these creative possibilities themselves? Won’t the creation of these scenarios always be handed over to the expert or professional?

J.G. Ballard:

I doubt it. The experts or professionals only handle these tools when they are too expensive or too complex for the average person to manage them. As soon as the technology becomes cheap and simple, ordinary people get to work with it. One’s only got to think of people’s human responses to a new device like the camera. If you go back 30 or 40 years the Baby Brownie gave our parents a completely new window on the world. They could actually go into the garden and take a photograph of you tottering around on the lawn, take it down to the chemists, and then actually see their small child falling into the garden pool whenever and as often as they wanted to. I well remember my own parents’ excitement and satisfaction when looking at these blurry pictures, which represented only the simplest replay of the most totally commonplace. And indeed there’s an interesting point here. Far from being applied to mammoth productions in the form of personal space adventures, or one’s own participation in a death-defying race at Brands Hatch it’s my view that the incredibly sophisticated hook-ups of TV cameras and computers which we will all have at our fingertips tomorrow will most frequently be applied to the supremely ordinary, the absolutely commonplace. I can visualise for example a world ten years from now where every activity of one’s life will be constantly recorded by multiple computer-controlled TV cameras throughout the day so that when the evening comes instead of having to watch the news as transmitted by BBC or ITV – that irrelevant mixture of information about a largely fictional external world – one will be able to sit down, relax and watch the real news. And the real news of course will be a computer-selected and computer-edited version of the days rushes. ‘My God, there’s Jenny having her first ice cream!’or ‘There’s Candy coming home from school with her new friend.’ Now all that may seem madly mundane, but, as I said, it will be the real news of the day, as and how it affects every individual. Anyone in doubt about the compulsion of this kind of thing just has to think for a moment of how much is conveyed in a simple family snapshot, and of how rivetingly interesting – to oneself and family only of course – are even the simplest of holiday home movies today. Now extend your mind to the fantastic visual experience which tomorrow’s camera and editing facilities will allow. And I am not just thinking about sex, although once the colour 3-D cameras move into the bedroom the possibilities are limitless and open to anyone’s imagination. But let’s take another level, as yet more or less totally unexplored by cameras, still or movie, such as a parent’s love for one’s very young children. That wonderful intimacy that comes on every conceivable level – the warmth and rapport you have with a two-year-old infant, the close physical contact, his pleasure in fiddling with your tie, your curious satisfaction when he dribbles all over you, all these things which make up the indefinable joys of parenthood. Now imagine these being viewed and recorded by a very discriminating TV camera, programmed at the end of the day, or at the end of the year, or at the end of the decade, to make the optimum selection of images designed to give you a sense of the absolute and enduring reality of your own experience. With such technology interfaced with immensely intelligent computers I think we may genuinely be able to transcend time. 
One will be able to indulge oneself in a kind of continuing imagery which, for the first time will allow us to dominate the awful finiteness of life. Great portions of our waking state will be spent in a constant mood of self-awareness and excitement, endlessly replaying the simplest basic life experiences.•



Nobody shops in brick-and-mortar stores anymore, if you don’t count about 90% of purchases.

Because so many of the physical businesses we connected to on an emotional level were killed by the Internet (book and video stores, record shops, newsstands, etc.), it can seem that online shopping now dominates retail. But that’s not nearly true, at least not yet. In order to keep expanding market share, Silicon Valley powers like Amazon are venturing off into the real world, a phenomenon that may increase exponentially. I doubt it will work very well with Amazon Books stores and their shallow selections, but perhaps the planned convenience store chain will make a go of it? Tough to say: Corporations great at one type of platform often flounder in others.

In a Technology Review piece, Nicholas Carr visits a new Amazon Books and explains the key role of the smartphone in this surprising turn of events. An excerpt:

Amazon Books may be just the vanguard of a much broader push into brick-and-mortar retailing by the company. In October, the Wall Street Journal revealed that Amazon is planning to open a chain of convenience stores, mainly for groceries, along with drive-in depots where consumers will be able to pick up merchandise ordered online. It has also begun rolling out small “pop-up” stores to hawk its electronic devices. It already has more than two dozen such kiosks in malls around the country, and dozens more are said to be in the works.

Even after 20 years of rapid growth, e-commerce still accounts for less than 10 percent of total retail sales. And now the rise of mobile computing places new constraints on Web stores. They can’t display or promote as many products as they could when their wares were spread across desktop or laptop monitors. That limits the stores’ cross-selling and upselling opportunities and blunts other merchandising tactics.

At the same time, the smartphone, with its apps, its messaging platforms, and its constant connectivity, gives retailers more ways to communicate with and influence customers, even when they’re shopping in stores. This is why the big trend in retailing today is toward “omnichannel” strategies, which blend physical stores, Web stores, and mobile apps in a way that makes the most of the convenience of smartphones and overcomes their limitations. Some omnichannel pioneers, like Sephora and Nordstrom, come from the brick-and-mortar world. But others, like Warby Parker and Bonobos, come from the Web world. Now, with its physical stores, Amazon is following in their tracks. “Pure-play Web retailing is not sustainable,” New York University marketing professor Scott Galloway told me. He points out that the deep discounting and high delivery costs that characterize Web sales have made it hard for Amazon to turn a profit. If Amazon were to remain an online-only merchant, he says, its future success would be in jeopardy. He believes the company will end up opening “hundreds and then thousands of stores.”•



Discussion of the ideas in David Gelernter’s new book, The Tides of Mind: Uncovering the Spectrum of Consciousness, which just landed in my mailbox, forms the crux of the latest episode of EconTalk with Russ Roberts. The computer scientist talks about the varieties of consciousness that fill our days, an idea he believes is lost in the unexamined acceptance of the binary labels “conscious” and “unconscious.” He thinks, for instance, that we operate at various levels of up- or down-spectrum consciousness, which permits us to function in different ways.

Clearly the hard problem is still just that, and the creativity that emerges from consciousness, often the development of new symbols or the successful comparison and combination of seemingly disparate thoughts, isn’t yet understood. Someday we’ll comprehend the chemical reactions that enable these mysterious and magnificent syntheses, but for now we can enjoy them without understanding them. In one passage, the author wonderfully articulates the creative process, the parts that are knowable and those that remain inscrutable. The excerpt:

David Gelernter:

You also mention, which is important, the fact that you have a focused sense when you are working on lyrics or writing poetry, let’s say. And I’ve argued, on the other hand, that you need to be well down-spectrum in order to get creativity started. That is, you can’t be at your creative peak when you’ve just got up in the morning: your attention is focused and you are tapping your pencil; you want to get to work and start, you know, getting through the day’s business at a good clip. It’s not the mood in which one can make a lot of progress writing poetry. But that’s exactly why–that’s one of the important reasons why creativity is no picnic. It’s not easily achieved. I think it’s fair to say that everybody is creative in a certain way. In the sort of daily round of things we come up with new solutions to old problems routinely. But the kind of creativity that yields poetry that other people value, that yields original work in any area, is highly valued, is more highly valued than any other human project, because it’s rare. And it’s rare not because it requires a gigantic IQ (Intelligence Quotient), but because it requires a certain kind of balance, which is not something everybody can achieve. On the one hand–it’s not my observation; it’s a general observation–that creativity often hinges on inventing new analogies. When I think of a new resemblance and an analogy between a tree and a tent pole, which is a new analogy let’s say that nobody else has ever thought of before, I take the new analogy and can perhaps use it in a creative way. One of a million other, a billion, a trillion other possible analogies. Now, what makes me come up with a new analogy? What allows me to do that? Generally, it’s a lower-spectrum kind of thinking, a down-spectrum kind of thinking, in which I’m allowing my emotions to emerge. And, I’m allowing emotional similarity between two memories that are in other respects completely different. I’m maybe thinking as a graduate student in computing about an abstract problem involving communication in a network like the ARPANET (Advanced Research Projects Agency Network) or the Internet, in which bits get stuck. And I may suddenly find myself thinking about traffic on a late Friday afternoon in Grand Central Station in Manhattan. And the question is–and that leads to a new approach. And I write it up; and I prove a theorem, and I publish a paper. And there’s like a million other things in the sciences and in engineering technology. But the question is: Where does the analogy come from? And it turns out in many cases–not in every case–that there are emotional similarities. Emotion is a tremendously powerful summarizer, abstractor. We can look at a complex scene involving loads of people rushing back and forth because it’s Grand Central Station, and noisy announcements on [?] to understand, loudspeakers, and you’re being hot and tired, and lots of advertisements, and colorful clothing, and a million other things; and smells, and sounds, and–we can take all that or any kind of complex scene or situation, the scene out your window, the scene on the TV (television) when you turn on the news, or a million other things. And take all those complexities and boil them down to a single emotion: it makes me feel some way. Maybe it makes me happy. Maybe it makes me happy. It’s not very usual to have an emotion as simple as that. But it might be. I see my kids romping in the backyard, and I just feel happy. 
Usually the emotion to which a complex scene has boiled down is more complex than that–is more nuanced. Doesn’t have a name. It’s not just that I’m happy or sad or excited. It’s a more nuanced; it’s a more–it’s a subtler emotion which is cooked up out of many bits and pieces of various emotions. But the distinctive emotion, the distinctive feeling that makes me feel a certain way, the feeling that I get when I look at some scene can be used as a memory cue when I am in the right frame of mind. And that particular feeling–let’s say, Happiness 147–a particular subtle kind of happiness which is faintly shaded by doubts about the coming week and by serious questions I have about what I’m supposed to do tomorrow morning but which is encouraged by the fact that my son is coming home tonight and I’m looking forward to seeing him–so that’s Happiness 147. And it may be that when I look out at some scene and feel Happiness 147, that some other radically different scene that also made me feel that way comes to mind–looking out at that complex thing and I think of some abstract problem in network communications, or I think of a mathematics problem, or I think of what color chair we should get for the living room, or one of a million other things. Any number of things can be boiled down in principle, can be reduced, can be summarized or abstracted by this same emotion. My emotions are so powerful because the phrase, ‘That makes me feel like x,’ can apply to so many situations. So many different things give us a particular feeling. And that feeling can drive in a new analogy. And a new analogy can drive creativity. But the question is: Where does the new analogy come from? And it seems to come often from these emotional overlaps, from a special kind of remembering. And I can only do that kind of remembering when I am paying attention to my emotions. We tend to do our best to suppress emotions when we’re up-spectrum. We’re up-spectrum: We have jobs to do, we have work to do, we have tasks to complete; our minds are moving briskly along; we’re energetic. We generally don’t like indulging in emotions when we are energetic and perky and happy and we want to get stuff done. Emotions tend to bring thought to a halt, or at any rate to slow us down. It tends to be the case as we move lower on the spectrum, we pay more attention to emotions. Emotions get a firmer grip on us. And when we are all the way at the bottom of the spectrum–when we are asleep and dreaming–it’s interesting that although we–often we think of dreaming as emotionally neutral except in the rare case of a nightmare or a euphoria dream, and neither of those happen very often–we think of dreams as being sort of gray and neutral. But if you read the biological[?] literature and the sleep-lab literature, you’ll find that most dreams are strongly colored emotionally. And that’s what we would expect. They occur at the bottom of the spectrum. Life becomes more emotional, just as when you are tired you are more likely to lose your temper; you are more likely to lose your self-control–to be cranky, to yell at your kids, or something like that. We are less self-controlled, we are less self-disciplined; we give freer rein to our emotions as we move down spectrum. And that has a good side. It’s not good to yell at your kids. But as you allow your emotions to emerge, you are more likely to remember things that yield new analogies. You are more likely to be reminded in a fresh way of things that you hadn’t thought of together before.•



The amazing, Zeitgeist-capturing photograph above, taken by Brett Gundlock of Bloomberg, shows drivers in Mexico City gridlock being peppered with advertisements floated by Uber drones. While you might think it dangerous that even slow-moving vehicles are besieged by hovering appeals sent from the heavens or thereabouts, Travis Kalanick, the leading ridesharer’s CEO, wants to remove that worry by eliminating the burden of driving, so that riders can instead plug their ears and eyes into other machines. Why stop and smell the roses when you can count the drones?

Autonomous vehicles are likely upon us, whether that means they arrive at high speed or merge more gradually with the Digital Age. While making the roads and highways safer was the early selling point for these cars, their establishment will have a profound effect on surveillance, employment, urban design, ethics, capitalism and even human nature itself. Of course, there will be unintended consequences we can’t yet even appreciate.

It’s also worthwhile to mention that the intervening period between fully human driving and fully automated control will not be without incident, in much the way that horse-drawn carts and internal combustion engines made for uneasy partners on the road during that earlier transition. One thing I’m sure of is that driverless cars will not create a “utopian society,” a promise often assigned to new technological tools at their outset before we remember that the function they provide was never the main problem with us to start with.

In a New York Review of Books piece on Hod Lipson and Melba Kurman’s Driverless: Intelligent Cars and the Road Ahead, Sue Halpern looks at the industry’s dream scenario of fleets of autonomous taxis and the significant roadblocks to its realization. Even if the challenges are met, cheaper rides might not reduce wealth inequality but exacerbate the problem.

An excerpt:

The major car makers, rushing to make alliances with tech companies, understand their days of dominance are numbered. “We are rapidly becoming both an auto company and a mobility company,” Bill Ford, the chairman of Ford Motor Company, told an audience in Kansas City in February. He knows that if the fleet model prevails, Ford and other car manufacturers will be selling many fewer cars. More crucially, the winners in this new system will be the ones with the best software, and the best software will come from the most robust data, and the companies with the most robust data are the tech companies that have been hoovering it up for years: Google most of all.

“The mobility revolution is going to affect all of us personally and many of us professionally,” Ford said that day in Kansas City. He might have been thinking about car salespeople, whose jobs are likely to become obsolete, but before that it will be the taxi drivers and truckers who will be displaced by vehicles that drive themselves. Historically these have been the jobs that have provided incomes to recently arrived immigrants and to people without college degrees. Without them yet another trajectory into the middle class will be eliminated.

What of Uber drivers themselves? These are the poster people for the gig-economy, “entrepreneurs”—which is to say freelancers—who use their own cars to ferry people around. “Obviously the self-driving car thing is freaking people out a little bit,” an Uber driver in Pittsburgh named Ryan told a website called TechRepublic. And, he went on, he learned about Uber’s plans from the media, not from the company. “If it’s a negative thing, they let you find out for yourself.” As media critic Douglas Rushkoff has written, “Uber’s drivers are the R&D for Uber’s driverless future. They are spending their labor and capital investments (cars) on their own future unemployment.”

All economies have winners and losers. It does not take a sophisticated algorithm to figure out that the winners in the decades ahead are going to be those who own the robots, for they will have vanquished labor with their capital. In the case of autonomous vehicles, a few companies are now poised to control a necessary public good, the transportation of people to and from work, school, shopping, recreation, and other vital activities. This salient fact is often lost in the almost unanimously positive reception of the coming “mobility revolution,” as Bill Ford calls it.



It didn’t begin auspiciously for George and Willie Muse, born black, poor and albino to a sharecropper family in the Jim Crow South. It seemed to get even less promising when they were kidnapped in 1899 from their doting mother in Virginia and forced to appear in itinerant freak shows as “Eko and Iko, sheep-headed, cannibalistic Ambassadors from Mars.”

The siblings were given room, board and mandolin lessons by a parade of handlers but were otherwise kept a safe distance from their earnings. Ultimately, their mother reclaimed them 28 years later through the legal system, liberating her boys who then signed a deal with Ringling Brothers that allowed them to retain complete rights to their merchandising. The two grew quite well-off, selling out Madison Square Garden numerous times and performing for the Queen of England. They were international superstars in an era before mass media. One brother, Willie, lived to 108, dying in 2001, having left a footprint in three centuries.

It’s likely a wilder tale than that of any sideshow act from the twentieth century, more than Chang & Eng or the “Two-Headed Nightingale” or anyone. In Truevine, a book by Beth Macy published last month, the author ponders the troubling question of whether the kidnapping and sideshow existence were ultimately better for the Muses than the privations and prejudices of the South would have been. Perhaps, though clearly neither was ideal. Reports are Paramount is angling to acquire big-screen rights to the book.

Two Brooklyn Daily Eagle articles are embedded below, the first documenting their mother finding her sons after a nearly three-decade search, and the second revealing the men’s intelligence, which belied how the circus presented them to the public.


From October 20, 1927: [embedded clipping]

From May 14, 1928: [embedded clipping]



In a smart Five Books interview, Ellen Wayland-Smith, author of Oneida, discusses a group of titles on the topic of Utopia. She surmises that attempts at such communities aren’t prevalent like they were in the 1840s or even the 1960s because most of us realize they don’t normally end well, whether we’re talking about the bitter financial and organizational failures of Fruitlands and Brook Farm or the utter madness of Jonestown. That’s true on a micro-community level, though I would argue that there have never been more people dreaming of large-scale Utopias–and corresponding dystopias–than there are right now. The visions have just grown significantly in scope.

In macro visions, Silicon Valley technologists speak today of an approaching post-scarcity society, an automated, quantified, work-free world in which all basic needs are met and drudgery has disappeared into a string of zeros and ones. These thoughts were once the talking points of those on the fringe, say, a teenage guru who believed he could levitate the Houston Astrodome, but now they (and Mars settlements, a-mortality and the computerization of every object) are on the tongues of the most important business people of our day, billionaires who hope to shape the Earth and beyond into a Shangri-La. 

Perhaps much good will come from these goals, and maybe a few disasters will be enabled as well. 

One exchange from the Five Books Q&A:

Question:

Speaking of the Second Coming, the last book on your list is Paradise Now, by Chris Jennings.

Ellen Wayland-Smith:

It’s called Paradise Now: The Story of American Utopianism. He goes through five utopian experiments in nineteenth century America. It’s a beautifully written book and interesting as well because he takes the odd era of 1840s America and shows how it gave rise to five very different experiments in alternative living. He does a sensitive job of exploring their differences and similarities but he also examines how crazy they seem today. Some of the ideas seem mystical and fabulous; certainly Noyes had some spectacularly strange ideas about gaining immortality through sexual intercourse. The fact that so many of these strange communities sprung up seems unbelievable to the twenty-first century reader. Chris Jennings points out that we seem to have lost something, there seems to be a diminishment of expectations, a loss of energy.

Question:

In the wake of the American Revolution over a hundred experimental communities were formed in the United States. Do societies become less experimental as they age into their institutions? Is the West losing the audacity necessary for experimentation?

Ellen Wayland-Smith:

That is an interesting question. The 1840s were an incredibly weird time. It was a crossroads. It was the beginning of the Industrial Revolution. Class identification and geographical identification suddenly became uncertain, that was upsetting. There were also an explosion of religious sects at this time, with the disestablishment of state and church. I think it was a time when people felt very vulnerable. All these changes and uncertainties crystallized attempts to live otherwise.

Question:

Jennings writes that a present day “deficit of imagination” accounts for the fact that there are no utopias at present. Do you see a strong foundation for that analysis?

Ellen Wayland-Smith:

There does seem to be a lack of interest in what is transcendent, which keeps people from finding more meaningful ways of constructing their lives. But what accounts for the absence of utopian schemes at present is probably less a ‘deficit of imagination’ than a cynicism about whether these things can work. As I began by saying, utopian projects usually end disastrously.•



Since Bob Dylan was the surprise winner of this year’s Nobel Prize, those aghast at the announcement (mostly writers without Nobel Prizes) have taken comfort in Kevin P. Simonson’s 1991 Hustler interview with Kurt Vonnegut, in which the author labeled the songwriter the “worst poet alive.” This insult from the guy who turned out Slapstick!

In addition to being wrong about Dylan, Vonnegut also seems off-base in his hatred for the magazine’s infamous owner, Larry Flynt. It’s not that the publisher was or is a charmer (he’s not), but his “literary output” proved much more influential than Vonnegut’s, with pornography today available on every phone in every pocket. He was right about human nature, whether we like it or not.

Whether you think that’s good or bad depends on what you prefer: a repressed though less outwardly ugly society where things are hidden, or one in which there’s way too much information and everything may be revealed. The latter can be discombobulating, but I think the former is more dangerous.

[Embedded: an image of the exchange from the Hustler interview.]



The great Margaret Atwood has a dystopic vision of an eerie end for us all: We build an ever-growing, plugged-in societal machine reliant on cheap energy that eventually runs out. Collapse comes, and we’re swept away with it. It’s a chilling, if unlikely, scenario.

More realistic: We keep shoveling fossil fuels into the system until it’s the death of us, or we wisely adapt ASAP and develop solar energy and such to the point where we can sustain life for eons.

In a Guardian piece that surveys science and sci-fi writers, Atwood, Richard Dawkins and others ponder the future of humanity, if we have one. The opening:

Richard Dawkins, author of The Selfish Gene and The God Delusion
There’s a serious risk of climate catastrophe and it could be soon. Another alarmingly plausible possibility during the present century is that weapons of mass destruction, which are designed to deter, will be acquired by deluded people for whom deterrence has no meaning. Assuming we survive such manmade disasters, external peril may be averted by technology growing out of the brilliant feat of landing on a comet. The dinosaurs’ world ended when a comet or large meteorite unleashed titanic destructive forces. That will eventually happen again, and smaller but still dangerous strikes are a perennial danger in every century. Telescopes of the future will improve the range of detection, increase the warning time, and give engineers the notice they will need to intercept the bolide and nudge it into a harmless orbit.

In the world of science, DNA sequencing will become ever faster and cheaper and this will revolutionise medicine, taxonomy and my own field of evolution, not to mention forensic evidence in courts of law. Embryology and cell biology will advance mightily. Novel imaging techniques may enable palaeontologists and archeologists to see down into the ground without digging it up. The rendering of virtual reality will improve to the point where the distinction from external reality may become blurred. I expect unmanned space exploration to continue, albeit with economically imposed hiatuses. Out beyond 50 years, self-sustaining colonies may be established on Mars. Human travel to other star systems lies way beyond 50 years, but radio communication from extraterrestrial scientists is an ever-present possibility. However, the intervening light centuries will rule out conversation.

Margaret Atwood, author of Hag-Seed
Will we still have a liveable planet 50 years from now? Kill the oceans and it’s game over for oxygen-breathing mid-range mammals – the oceans make 60 to 80% of our oxygen. Superheating them and dumping them full of plastic may spell our doom. I hope that we’ll be smart enough to avoid this fate. From ideas proposed in my fiction, many are equally horrible, but it seems as if the use of the blood of young people to rejuvenate rich older people – as posited in The Heart Goes Last – is already in process. I do try to avoid predicting “the future” because there are so many variables; thus, so many possible futures. But here’s a safe bet: in 25 years I won’t be on the planet, unless of course I get my tentacles on some of that rejuvenating blood.•



  • George Plimpton seemed to have lost the will to live soon after I interviewed him in 2003. Two weeks later he was dead. It was unintentional, I swear.
  • The best part of Plimpton’s journalism, from being embedded as a Bedouin on the set of Lawrence of Arabia to playing quarterback in a preseason game for the Detroit Lions, was that he realized the business sometimes served an important purpose, but the vast majority of it was a lark to have fun in between visits from the Time Inc. drink cart. I can’t say I approve of his mixing fiction into his fact, but the lust for life was admirable. Perhaps being in close proximity to Robert F. Kennedy as he was assassinated–he helped wrest the gun from Sirhan Sirhan’s hand–gave him perspective that life and death is life and death, and everything else is not.
  • Plimpton began writing for Sports Illustrated in the 1950s, one of the young literary lights recruited by editor Sid James to write for his publication in that era. Plimpton thrived, with the magazine nurturing his flair for participatory journalism. One who did less well was Kurt Vonnegut, whose first assignment was to write a full-length article about a spooked racehorse that jumped over a fence. Before grabbing his coat and exiting the offices to never return, he typed these words: “The horse jumped over the fucking fence.”
  • I’m sure there has been some great national prank since Plimpton’s Sidd Finch story on April Fools’ Day in 1985, but that was one of the last hurrahs of the pre-Information Age, a story that would unravel now on Twitter in minutes. We still get fooled a lot, but by nothing nearly so wonderful.

In a New York Review of Books piece about Plimpton’s sports journalism, Nathaniel Rich acknowledges that sometimes the writer dropped the ball, as he did in underplaying the racial hatred directed at Henry Aaron as the Atlanta slugger closed in on Babe Ruth’s home-run record, but his close proximity to the game often allowed him to digest its small details, including points about class, something not every patrician would appreciate. An excerpt:

Sports memoirs, like humor collections, rarely outlive their authors, but Plimpton’s books have aged gracefully and even matured. Today they have the additional (and unintended) appeal of vivid history, bearing witness to a mythical era that, as Rick Reilly writes in his foreword to The Bogey Man, “historians classify as ‘Before Insurance Lawyers Ruined Everything.’” (Journalists might classify it as Before Fact-Checkers Ruined Everything.) Plimpton writes about baseball locker rooms “heavy with cigarette and cigar smoke,” star players humbled by their off-season jobs (Pro-Bowler Alex Karras fills jelly doughnuts), and teams that cheat by positioning a spy with binoculars on a roof near the opponent’s practice field. He is able to convince major league All-Stars to take part in his scheme by offering, to the players on the team that gets the most hits off him, a reward of $125, the equivalent today of about $1,000. (By comparison, the Detroit Tigers’ slugger Miguel Cabrera earned $19,000 per inning this season.) It was also an age in which the press was powerful enough to convince professional teams to grant full, unfettered access to a journalist. Today a writer for a major national magazine is lucky to be allowed more than one hour with the subject of a cover article. Plimpton spent a full month living in a dormitory with the Lions.

As enjoyable as it is to read about Plimpton being treated roughly by professional gladiators in front of large crowds, the participatory approach also has its journalistic benefits. He understood that within every professional athlete is an amateur who, through some combination of born talent and luck, is surprised to find himself elevated to divine status. As a writer who, after the success of Paper Lion, was a bigger celebrity than most of his subjects, Plimpton had a special sensitivity to the hidden vulnerabilities of giants.

The weigh-in ceremony before Cassius Clay’s first championship fight against Sonny Liston is best remembered for Clay’s rumbling taunts, but Plimpton notes that Clay’s pulse was taken at 180; the doctor concluded that he was “scared to death.” We learn that Roger Maris, after the stress of breaking Babe Ruth’s single-season home run record, changed his batting style the following year to avoid reliving the experience. Plimpton devotes a chapter in One for the Record to the pitchers who allowed the most famous home runs in baseball history. Ralph Branca tells him that, after yielding “The Shot Heard Round the World,” he left the Polo Grounds to find his sobbing fiancée waiting for him in the parking lot with a priest. Branca’s second career, Plimpton notes, was in life insurance.•



How do we reconcile a highly automated economy with a free-market one? If jobs of all skill levels emerge that can’t be outsourced beyond our species, we’ll be fine. But if average truly is over, as Tyler Cowen and others have predicted, we could be in for some turbulence. Everyone and his brother can’t be quickly upskilled, and even if they could, there will be a limit to the number of driverless-car engineers needed. Abundance is great if we can figure out a good distribution system, something America has never perfected. If the macro financial picture is good but the micro is harsh, political solutions may become necessary.

From a PC World interview with The Wealth of Humans author Ryan Avent:

If this abundance of labor is indeed what is sparking global unrest, things will probably get a lot more chaotic before stability returns, unless the world embraces an Amish-style rejection of technology. So how should civilization proceed moving forward? Proposals include universal basic income (UBI), which is quite fashionable in Silicon Valley circles, or shortening the work week to four days.

“Eventually, we’re going to have to change the ways we do things so that people are working less and are also still able to buy the things they need—that would be where redistribution or basic income comes into the picture,” according to Avent. “There’s a couple of things to consider though. People don’t necessarily want to live in a world without work. Even though work is a drag, it creates structure for our day. It creates purpose and meaning. You can imagine that society would be kind of a messy place if nobody ever had to do anything. The other tricky thing is you need to find a way to pay for everything, which means that you have to tax somebody or create common ownership. Something that’s going to require a big political change.”

The big changes may be far in the future. Many of us may be able to wait out the big transformation, but where does that leave the next generation? Are they just completely screwed? As a parent of a young child, I am keenly interested in what skills—if any—will have any value in the decades to come.

“Those with a PhD in computer engineering will probably be okay. I don’t think that’s going to be something that goes away over the next few decades. The skills that will be applicable in a lot of parts of the economy will actually be the softer skills,” Avent says. “The ability to learn from others, to teach yourself new things, to get along in different cultural settings. Basically to be adaptive and be able to pick up new skills. That’s useful now, but in an environment where new sectors and new jobs are constantly be introduced will be critical to being successful.”•


It surprises me that most of us usually think things are worse than they are in the big picture, because we’re awfully good at selective amnesia when it comes to our own lives. Homes in NYC that were demolished by Hurricane Sandy are mostly valued more highly now than right before that disaster, even though they’re located in the exact same lots near the ever-rising sea levels, in the belly of the beast. The buyers are no different than the rest of us who conveniently forget about investment bubbles that went bust and life choices that laid us low. When it comes to our own plans, we can wave away history as a fluke that wouldn’t dare interfere.

When we consider the direction of our nation, however, we often believe hell awaits our handbasket. Why? Maybe because down deep we’re suspicious about the collective, doubting that anything so unwieldy can ever end up well, so we surrender to both recency and confirmation biases, which skew the way we view today and tomorrow.

While I don’t believe the endless flow of information has made us more informed, it is true that by many measures we’re in better shape now than humans ever have been. On that topic, the Economist reviews Johan Norberg’s glass-half-full title, Progress: Ten Reasons to Look Forward to the Future. The opening:

HUMANS are a gloomy species. Some 71% of Britons think the world is getting worse; only 5% think it is improving. Asked whether global poverty had fallen by half, doubled or remained the same in the past 20 years, only 5% of Americans answered correctly that it had fallen by half. This is not simple ignorance, observes Johan Norberg, a Swedish economic historian and the author of a new book called “Progress”. By guessing randomly, a chimpanzee would pick the right answer (out of three choices) far more often.

People are predisposed to think that things are worse than they are, and they overestimate the likelihood of calamity. This is because they rely not on data, but on how easy it is to recall an example. And bad things are more memorable. The media amplify this distortion. Famines, earthquakes and beheadings all make gripping headlines; “40m Planes Landed Safely Last Year” does not. 

Pessimism has political consequences. Voters who think things were better in the past are more likely to demand that governments turn back the clock. A whopping 81% of Donald Trump’s supporters think life has grown worse in the past 50 years. Among Britons who voted to leave the European Union, 61% believe that most children will be worse off than their parents. Those who voted against Brexit tend to believe the opposite.

Mr Norberg unleashes a tornado of evidence that life is, in fact, getting better.

Tags:

Even your average Silicon Valley billionaire would find it difficult to gain and maintain a footing on the Moon and Mars and more if playing by the established economics of Big Space. Fortunately, the increasing power and diminishing costs of components in the supercomputers in our pockets have enabled Little Space to compete, turning out satellites and such for a fraction of what it would cost NASA.

It’s this Space Race 2.0 that’s at the heart of Freeman Dyson’s New York Review of Books piece in which he reviews a raft of recent titles on the topic. The scientist-writer faults NASA and other government bodies for excessive risk-aversion, while acknowledging the cheap-enough-to-fail model may not be bold enough to enable us to fan out among the stars this century. He also analyzes the medium-size methods of deep-pocketed entrepreneurs like Elon Musk, who dream big while trying to trim costs like the little guys.

Ultimately, the reviewer is dissatisfied with all the books because they each focus on engineering to the exclusion of biotechnology, ignoring outré-but-not-impossible visions from the febrile mind of pioneering Russian rocket scientist Konstantin Tsiolkovsky, who prophesied there would be a time centuries in the future when we could alter and create species to enable them to assimilate in space. 

In one passage about more immediate matters, Dyson offers a common-sense retort to NASA’s fear of falling, arguing that mistakes we’ve made on Earth with enclosed habitats like Biosphere 2 aren’t ones we’re destined to repeat. The excerpt:

Charles Wohlforth and Amanda Hendrix’s Beyond Earth describes the prospects for future manned space missions conducted within the Big Space culture. The prospects are generally dismal, for two reasons. The authors suppose that a main motivation for such missions is a desire of humans to escape from catastrophic climate change on Earth. They also suppose any serious risks to the life and health of astronauts to be unacceptable. Under these conditions, few missions are feasible, and most of them are unattractive. Their preferred mission is a human settlement on Titan, the moon of Saturn that most resembles Earth, with a dense atmosphere and a landscape of gentle hills, rivers, and lakes.

But the authors would not permit the humans to grow their own food on Titan. Farming is considered to be impossible because an enclosed habitat with the name Biosphere Two was a failure. It was built in Arizona and occupied in 1991 by eight human volunteers who were supposed to be ecologically self-sufficient, recycling air and water and food in a closed system. The experiment failed because of a number of mistakes in the design. The purpose of such an experiment should be to learn from the failure how to avoid such mistakes in the future. The notion that the failure of a single experiment should cause the abandonment of a whole way of life is an extreme example of the risk-averseness that has come to permeate the Big Space culture.

Farming is an art that achieved success after innumerable failures. So it was in the past and so it will be in the future. Any successful human settlement in space will begin as the Polynesian settlements in the Pacific islands began, with people bringing pigs and chickens and edible plants on their canoes, along with the skills to breed them. The authors of Beyond Earth imagine various possible futures for human settlement in various places, but none of their settlers resemble the Polynesians.

Tags:

Extrapolating current economic trends into the future is a tricky business. Things change.

The American middle class, besieged for decades by tax codes, globalization, automation, Silicon Valley creative destruction, Washington gridlock and the Great Recession, seems more like a dinosaur each day. Men in the U.S. have particularly watched their opportunities crater, with millions more jobs poised to vanish as soon as driverless vehicles take the wheel in trucking, the taxi industry and delivery. (The last of those occupations will also be emptied out by air and ground drones.)

Nicholas Eberstadt’s Men Without Work suggests this story has been seriously under-reported, the subtitle being “America’s Invisible Crisis.” Like Charles Murray, whom I’m not fond of, the author believes misguided social safety nets have played a large role in creating this unintended consequence. I call bullshit on that theory, which seems more driven by ideology than reality.

In a Financial Times review of the book, Lawrence Summers also disagrees with Eberstadt on how we got into this mess, but he sees a potentially even bleaker future for American males than the author does. Maybe that won’t come to pass, but this is exactly the type of possible outcome we should be discussing right now.

An excerpt:

Now comes Nicholas Eberstadt’s persuasive and important monograph Men Without Work, demonstrating that these issues are not just matters of futurology. Eberstadt, a political economist based at the American Enterprise Institute, marshals a vast amount of data to highlight trends that have been noticed but not adequately emphasised before in the work experience of men in the US. The share of the male population who are neither working, looking for work, in school or old enough to retire has more than doubled over the past 50 years, even though the population has become much healthier and more educated. Today, even with a low overall unemployment rate, roughly one in six men between the ages of 25 and 54 is out of work.

Eberstadt goes on to show that, as one might expect, non-work is a larger issue for those with less education, without spouses or dependent children, for African-Americans and for those who have been convicted of crimes. He finds little redeeming in what those without work are doing, noting that the primary contrast in time use between those in and out of work is in time spent watching TV.

Finally, he highlights that men in the US are doing considerably worse than men in the rest of the industrial world, where even countries with notoriously sclerotic labour markets and bloated welfare systems such as France, and even Greece, enjoy higher rates of prime age male labour force participation.

One can cavil with Eberstadt’s emphasis on labour force withdrawal as distinct from unemployment in looking at the data, particularly when it comes to international comparisons, but overall the evidence he marshals that non-work is currently a crisis is entirely persuasive. As he notes, the impact of non-work on economic growth is the least of it. A society where large numbers of adults in the prime of life are without vocation is unlikely to provide opportunity for all its children, to maintain strong communities or have happy, cohesive families. As we are seeing this fall, such a society is prone to embrace toxic populist politics.

Indeed, Eberstadt understates the significance of what he studies by not highlighting the fact that, if current trends continue, a quarter of men between 25 and 54 will be out of work by mid-century. I would expect Eberstadt’s sorry trends to accelerate as IT accelerates job destruction on the one hand, and developments such as virtual reality make non-work more attractive and addictive on the other, so I can imagine scenarios in which a third or more of men in this cohort are out of work in the US by 2050.

Why is this happening?•

Tags: ,

Many dark fictions about technology focus on machines going rogue and running amok, but couldn’t things progress as planned and still lead to trouble if we have poor priorities and make the wrong decisions?

On a 1979 Dick Cavett Show, Ira Levin was asked how he dreamed up the scenario for his chilling novel The Stepford Wives. He answered that after reading about the possibility of robotic domestic servants in Alvin Toffler’s Future Shock, he wondered what would happen if we achieved that goal at a very high level. You know, if everything went according to plan.

Humanoid robots aren’t in our near future, but chatbots and digital assistants will be an increasing part of our lives in the short run. They may eventually get so good that we won’t know sometimes if we’re speaking to a human or not. Perhaps we will be aware, but that won’t stop us from speaking to them as if “they” were people. There will be a relationship. That’s the plan, anyhow.

Some excerpts on that topic from Alvin Toffler’s book:

Whether we grow specialized animals to serve us or develop household robots depends in part on the uneven race between the life sciences and the physical sciences. It may be cheaper to make machines for our purposes, than to raise and train animals. Yet the biological sciences are developing so rapidly that the balance may well tip within our lifetimes. Indeed, the day may even come when we begin to grow our machines. …

We are hurtling toward the time when we will be able to breed both super- and subraces. As Theodore J. Gordon put it in The Future, “Given the ability to tailor the race, I wonder if we would ‘create all men equal,’ or would we choose to manufacture apartheid? Might the races of the future be: a superior group, the DNA controllers; the humble servants; special athletes for the ‘games’; research scientists with 200 IQ and diminutive bodies …” We shall have the power to produce races of morons or of mathematical savants. …

Technicians at Disneyland have created extremely life-like computer-controlled humanoids capable of moving their arms and legs, grimacing, smiling, glowering, simulating fear, joy and a wide range of other emotions. Built of clear plastic that, according to one reporter, “does everything but bleed,” the robots chase girls, play music, fire pistols, and so closely resemble human forms that visitors routinely shriek with fear, flinch and otherwise react as though they were dealing with real human beings. The purposes to which these robots are put may seem trivial, but the technology on which they are based is highly sophisticated. It depends heavily on knowledge acquired from the space program—and this knowledge is accumulating rapidly.

There appears to be no reason, in principle, why we cannot go forward from these present primitive and trivial robots to build humanoid machines capable of extremely varied behavior, capable even of “human” error and seemingly random choice—in short, to make them behaviorally indistinguishable from humans except by means of highly sophisticated or elaborate tests. At that point we shall face the novel sensation of trying to determine whether the smiling, assured humanoid behind the airline reservation counter is a pretty girl or a carefully wired robot.

The likelihood, of course, is that she will be both.

The thrust toward some form of man-machine symbiosis is furthered by our increasing ingenuity in communicating with machines. A great deal of much-publicized work is being done to facilitate the interaction of men and computers. But quite apart from this, Russian and American scientists have both been experimenting with the placement or implantation of detectors that pick up signals from the nerve ends at the stub of an amputated limb. These signals are then amplified and used to activate an artificial limb, thereby making a machine directly and sensitively responsive to the nervous system of a human being. The human need not “think out” his desires; even involuntary impulses are transmittable. The responsive behavior of the machine is as automatic as the behavior of one’s own hand, eye or leg.•

Tags:

The past isn’t necessarily prologue. Sometimes there’s a clean break from history. The Industrial Age transformed Labor, moving us from an agrarian culture to an urban one, providing new jobs that didn’t previously exist: advertising, marketing, car mechanic, etc. That doesn’t mean the Digital Age will follow suit. Much of the work in manufacturing, construction, driving and other fields will eventually fall to automation, probably sooner rather than later, and Udacity won’t be able to rapidly transition everyone into a Self-Driving Car Engineer. That type of upskilling can take generations to complete.

Not every job has to vanish, just enough to push unemployment scarily high and spark social unrest. And those who believe Universal Basic Income is a panacea must beware truly bad versions of such programs, which can end up harming more than helping.

Radical abundance doesn’t have to be a bad thing, of course. It should be a very good one. But we’ve never managed plenty in America very well, and this level would be on an entirely different scale.

Excerpts from two articles on the topic.


From Giles Wilkes’ Economist review of Ryan Avent’s The Wealth of Humans:

What saves this work from overreach is the insistent return to the problem of abundant human labour. The thesis is rather different from the conventional, Malthusian miserabilism about burgeoning humanity doomed to near-starvation, with demand always outpacing supply. Instead, humanity’s growing technical capabilities will render the supply of what workers produce, be that physical products or useful services, ever more abundant and with less and less labour input needed. At first glance, worrying about such abundance seems odd; how typical that an economist should find something dismal in plenty.

But while this may be right when it is a glut of land, clean water, or anything else that is useful, there is a real problem when it is human labour. For the role work plays in the economy is two-sided, responsible both for what we produce, and providing the rights to what is made. Those rights rely on power, and power in the economic system depends on scarcity. Rob human labour of its scarcity, and its position in the economic hierarchy becomes fragile.

A good deal of the Wealth of Humans is a discussion on what is increasingly responsible for creating value in the modern economy, which Mr Avent correctly identifies as “social capital”: that intangible matrix of values, capabilities and cultures that makes a company or nation great. Superlative businesses and nation states with strong institutions provide a secure means of getting well-paid, satisfying work. But access to the fruits of this social capital is limited, often through the political system. Occupational licensing, for example, prevents too great a supply of workers taking certain protected jobs, and border controls achieve the same at a national level. Exceptional companies learn how to erect barriers around their market. The way landholders limit further development provides a telling illustration: during the San Francisco tech boom, it was the owners of scarce housing who benefited from all that feverish innovation. Forget inventing the next Facebook, be a landlord instead.

Not everyone can, of course, which is the core problem the book grapples with. Only a few can work at Google, or gain a Singaporean passport, inherit property in London’s Mayfair or sell $20 cheese to Manhattanites. For the rest, there is a downward spiral: in a sentence, technological progress drives labour abundance, this abundance pushes down wages, and every attempt to fight it will encourage further substitution towards alternatives.•


From Duncan Jefferies’ Guardian article “The Automated City”:

Enfield council is going one step further – and her name is Amelia. She’s an “intelligent personal assistant” capable of analysing natural language, understanding the context of conversations, applying logic, resolving problems and even sensing emotions. She’s designed to help residents locate information and complete application forms, as well as simplify some of the council’s internal processes. Anyone can chat to her 24/7 through the council’s website. If she can’t answer something, she’s programmed to call a human colleague and learn from the situation, enabling her to tackle a similar question unaided in future.

Amelia is due to be deployed later this year, and is supposed to be 60% cheaper than a human employee – useful when you’re facing budget cuts of £56m over the next four years. Nevertheless, the council claims it has no plans to get rid of its 50 call centre workers.

The Singaporean government, in partnership with Microsoft, is also planning to roll out intelligent chatbots in several stages: at first they will answer simple factual questions from the public, then help them complete tasks and transactions, before finally responding to personalised queries.

Robinson says that, while artificially intelligent chatbots could have a role to play in some areas of public service delivery: “I think we overlook the value of a quality personal relationship between two people at our peril, because it’s based on life experience, which is something that technology will never have – certainly not current generations of technology, and not for many decades to come.”

But whether everyone can be “upskilled” to carry out more fulfilling work, and how many staff will actually be needed as robots take on more routine tasks, remains to be seen.•

Tags: , ,

H.G. Wells hoped the people of Earth would someday live in a single world state overseen by a benign central government–if they weren’t first torn apart by yawning wealth inequality abetted by technology. He was correctly sure you couldn’t decouple the health of a society from the machines it depended on, which could have an outsize impact on economics.

In a smart essay for The Conversation, Simon John James argues that the author’s social predictions are as important as his scientific ones. The opening:

No writer is more renowned for his ability to foresee the future than HG Wells. His writing can be seen to have predicted the aeroplane, the tank, space travel, the atomic bomb, satellite television and the worldwide web. His fantastic fiction imagined time travel, alien invasion, flights to the moon and human beings with the powers of gods.

This is what he is generally remembered for today, 150 years after his birth. Yet for all these successes, the futuristic prophecy on which Wells’s heart was most set – the establishment of a world state – remains unfulfilled. He envisioned a Utopian government which would ensure that every individual would be as well educated as possible (especially in science), have work which would satisfy them, and the freedom to enjoy their private life.

His interests in society and technology were closely entwined. Wells’s political vision was closely associated with the fantastic transport technologies that Wells is famous for: from the time machine to the Martian tripods to the moving walkways and aircraft in When the Sleeper Wakes. In Anticipations (1900), Wells prophesied the “abolition of distance” by real-life technologies such as the railway. He stressed that since the inhabitants of different nations could now travel towards each other more quickly and easily, it was all the more important for them to do so peacefully rather than belligerently.•

Tags: ,

David Frost was a jester, then a king. After that, he was somewhere in between but always closer to royalty than risible. The Frost-Nixon interview saw to that.

Below is an excerpt from a more-timely-than-ever interview from Frost’s 1970 book, The Americans, an exchange about privacy the host had with Ramsey Clark, who served as U.S. Attorney General and is still with us, having done a Reddit Ask Me Anything just last year. At the outset of this segment, Clark is commenting on wiretapping, though he broadens his remarks to privacy in general.

Ramsey Clark:

[It’s] an immense waste, an immoral sort of thing.

David Frost:

Immoral in what sense?

Ramsey Clark:

Well, immoral in the sense that government has to be fair. Government has to concede the dignity of its citizens. If the government can’t protect its citizens with fairness, we’re in real trouble, aren’t we? And it’s always ironic to me that those who urge wiretapping strongest won’t give more money for police salaries to bring real professionalism and real excellence to law enforcement, which is so essential to our safety.

They want an easy way, they want a cheap way. They want a way that demeans the integrity of the individual, of all of our citizens. We can’t overlook the capabilities of our technology. We can destroy privacy, we really can. We have techniques now–and we’re only on the threshold of discovery–that can permeate brick walls three feet thick. 

David Frost:

How? What sorts of things?

Ramsey Clark:

You can take a laser beam and you put it on a resonant surface within the room, and you can pick up any vibration in that room, any sound within that room, from half a mile away.

David Frost:

I think that’s terrifying.

Ramsey Clark:

You know, we can do it with sound and lights, in other words, visual-audio invasion of privacy is possible, and if we really worked at it with the technology that we have, in a few years we could destroy privacy as we know it.

Privacy is pretty hard to retain anyway in a mass society, a highly urbanized society, and if we don’t discipline ourselves now to traditions of privacy and to traditions of the integrity of the individual, we can have a generation of youngsters quite soon that won’t know what it meant because it wasn’t here when they came.•

Edward Albee, one of the best playwrights America has ever produced, just died.

At the end of his privileged youth, the future dramatist worked delivering telegrams and selling music albums at Bloomingdale’s, and he didn’t care to advance much technologically beyond the record player and the typewriter. Albee despised Digital Era tools, never wanting to own a smartphone or look at the Internet, haughtily sneering at them the way intelligentsia in an earlier age derided TV as the “idiot box.” His New York Times obituary includes this 2012 quote from the writer: “All of my plays are about people missing the boat, closing down too young, coming to the end of their lives with regret at things not done.” Whether that applies to his defiant technophobia depends on your perspective. At any rate, it worked for him.

From Claudine Ko’s 2010 Vice Q&A:

Question:

Do you have a specific writing space?

Edward Albee:

I do my writing in my head. There are tables around for whenever I feel like writing something down. I don’t care where I do it. It’s called a manuscript, so I write by hand.

Question:

That’s pretty old school.

Edward Albee:

I don’t believe in all those machines.

Question:

And the internet?

Edward Albee:

I know it exists. I don’t use it.

Question:

Do you have a cell phone?

Edward Albee:

No. It’s a waste of time. I might as well watch television. I walk along the streets of New York and I find people bumping into each other, bumping into things, and they have these things in their ears or in their face. They’re not seeing anything of the real world.•

In a Literary Review piece about Kyle Arnold’s new title, The Divine Madness Of Philip K. Dick, Mike Jay, who knows a thing or two about the delusions that bedevil us, writes about the insane inner world of the speed-typing, speed-taking visionary, who spent the latter stages of his life, quite appropriately, near the quasi-totalitarian theme park Disneyland, a land where mice talk and corporate propaganda is endlessly broadcast. Dick was a hypochondriac about the contents of his head, and it’s no surprise his life was littered with amphetamines, anorexia and anxiety, which drove his brilliance and abbreviated it.

The opening:

Across dozens of novels and well over a hundred short stories, Philip K Dick worried away at one theme above all others: the world is not as it seems. He worked through every imaginable scenario: consensus reality was variously a set of implanted memories, a drug-induced hallucination, a time slip, a covert military simulation, an illusion projected by mega-corporations or extraterrestrials, or a test set by God. His typical protagonist was conspired against, drugged, hypnotised, paranoid, schizophrenic – or, possibly, the only person in possession of the truth.

The preoccupation all too clearly reflected the author’s life. Dick was a chronic doubter, tormented, like René Descartes, by the suspicion that the world was the creation of an evil demon ‘who has directed his entire effort to misleading me’. But cogito ergo sum was not enough to rescue someone who in 1972, during one of his frequent bouts of persecution mania, called the police to confess to being an android. Dick took scepticism to a level that he made his own. It became his brand, and since his death it has been franchised across popular culture. He isn’t credited on Hollywood blockbusters such as The Matrix (in which reality is a simulation created by machines from the future) or The Truman Show (about a reality TV programme in which all but the protagonist are complicit), but their mind-bending plot twists are his in all but name.

As Kyle Arnold acknowledges early in his lucid and accessible study, it would be impossible to investigate the roots of Dick’s cosmic doubt more doggedly than he did himself. He was ‘his own best psychobiographer’…

Tags: ,

In an Atlantic Q&A, Derek Thompson has a smart conversation with the Economist’s Ryan Avent, the author of the soon-to-be-published The Wealth of Humans, a book whose sly title suggests abundance may not arrive without a degree of menace. Avent is firmly in the McAfee-Brynjolfsson camp, believing the Digital Age will rival the Industrial one in its spurring of economic and societal disruption. An excerpt:

The Atlantic:

There is an ongoing debate about whether technological growth is accelerating, as economists like Erik Brynjolfsson and Andrew McAfee (the authors of The Second Machine Age) insist, or slowing down, as the national productivity numbers indicate. Where do you come down?

Ryan Avent:

I come down squarely in the Brynjolfsson and McAfee camp and strongly disagree with economists like Robert Gordon, who have said that growth is basically over. I think the digital revolution is probably going to be as important and transformative as the industrial revolution. The main reason is machine intelligence, a general-purpose technology that can be used anywhere, from driving cars to customer service, and it’s getting better very, very quickly. There’s no reason to think that improvement will slow down, whether or not Moore’s Law continues.

I think this transformative revolution will create an abundance of labor. It will create enormous growth in [the supply of workers and machines], automating a lot of industries and boosting productivity. When you have this glut of workers, it plays havoc with existing institutions.

I think we are headed for a really important era in economic history. The Industrial Revolution is a pretty good guide of what that will look like. There will have to be a societal negotiation for how to share the gains from growth. That process will be long and drawn out. It will involve intense ideological conflict, and history suggests that a lot will go wrong.•

Tags: ,

The introduction to Nicholas Carr’s soon-to-be-published essay collection, Utopia Is Creepy, has been excerpted at Aeon, and it’s a beauty. The writer argues (powerfully) that we’ve defined “progress as essentially technological,” even though the Digital Age quickly became corrupted by commercial interests, and the initial thrill of the Internet faded as it became “civilized” in the most derogatory, Twain-ish use of that word. To Carr, what’s gained (access to an avalanche of information) is overwhelmed by what’s lost as we withdraw from reality. The critic applies John Kenneth Galbraith’s term “innocent fraud” to the Silicon Valley marketing of techno-utopianism.

You could extrapolate this thinking to much of our contemporary culture: binge-watching endless content, Pokémon Go, Comic-Con, fake Reality TV shows, reality-altering cable news, etc. Carr suggests we use the tools of Silicon Valley while refusing the ethos. Perhaps that’s possible, but I doubt you can separate such things.

An excerpt:

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.•

Tags: ,
