Excerpts


Following up on the post about technological unemployment in the office-support sector, here’s an excerpt from a 1975 Bloomberg piece that wondered about the office of the future. A business revolution threatened to decrease not only paper but people as well (though this article focused more on the former than the latter). A “collection of these electronic terminals linked to each other and to electronic filing cabinets…will change our daily life,” one analyst promised, and it certainly has.

The opening:

The office is the last corporate holdout to the automation tide that has swept through the factory and the accounting department. It has changed little since the invention of the typewriter 100 years ago. But in almost a matter of months, office automation has emerged as a full-blown systems approach that will revolutionize how offices work.

At least this is the gospel being preached by office equipment makers and the research community. And because the labor-intensive office desperately needs the help of technology, nearly every company with large offices is trying to determine how this onrushing wave of new hardware and procedures can help to improve its office productivity.

Will the office change all that much? Listen to George E. Pake, who heads Xerox Corp.’s Palo Alto (Calif.) Research Center, a new think tank already having a significant impact on the copier giant’s strategies for going after the office systems market: “There is absolutely no question that there will be a revolution in the office over the next 20 years. What we are doing will change the office like the jet plane revolutionized travel and the way that TV has altered family life.”

Pake says that in 1995 his office will be completely different; there will be a TV-display terminal with keyboard sitting on his desk. “I’ll be able to call up documents from my files on the screen, or by pressing a button,” he says. “I can get my mail or any messages. I don’t know how much hard copy [printed paper] I’ll want in this world.”

The Paperless Office

Some believe that the paperless office is not that far off. Vincent E. Giuliano of Arthur D. Little, Inc., figures that the use of paper in business for records and correspondence should be declining by 1980, “and by 1990, most record-handling will be electronic.”

But there seem to be just as many industry experts who feel that the office of the future is not around the corner. “It will be a long time—it always takes longer than we expect to change the way people customarily do their business,” says Evelyn Berezin, president of Redactron Corp., which has the second-largest installed base (after International Business Machines Corp.) of text-editing typewriters. “The EDP [data-processing] industry in the 1950s thought that the whole world would have made the transition to computers by 1960. And it hasn’t happened yet.”•

 

I think, sadly, that Aubrey de Grey will die soon, as will the rest of us. A-mortality is probably theoretically possible, though I think it will be a while. But the SENS radical gerontologist has probably done more than anyone to get people to rethink aging as a disease to be cured rather than an inevitability to be endured. In the scheme of things, that paradigm shift has enormous (and hopefully salubrious) implications. De Grey just did an Ask Me Anything on Reddit. A few exchanges follow.

__________________________

Question:

What is the likelihood that someone who is 40 (30? 20? 10?) today will have their life significantly extended to the point of practical immortality? 

Is it a slow, but rapidly rising collusion of things that are going to cause this, or is it something that is going to kind of snap into effect one day?

Will the technology be accessible to everyone, or will it be reserved for the rich?

What are your thoughts on cryonics?

What is your personal preferred method of achieving practical immortality? Nanotechnology? Cyborgs? Something else?

Aubrey de Grey:

I’d put it at 60, 70, 80, 90% respectively.

Kind of snap, in that we will reach longevity escape velocity.

For everyone, absolutely for certain.

Cryonics (not cryogenics) is a totally reasonable and valid research area and I am signed up with Alcor.

Anything that works! – but I expect SENS to get there first.

__________________________

Question:

Before defeating aging, what if we were to first defeat cardiovascular disease or cancer or Alzheimer’s disease? Do you think this would be enough to make people snap out of their “pro-aging trance” and be more optimistic about the feasibility & desirability of SENS and other rejuvenation therapies?

EDIT: Do you think people would be more convinced by more cosmetic rejuvenation therapies instead (reversal of hair loss/graying, reduction of wrinkles and spots in the skin)?

Aubrey de Grey:

Not a chance. People’s main problem is that they have a microbe in their brains called “aging” that they think means something distinct from diseases. The only way that will change is big life extension in mice.

__________________________

Question:

Long life for our ability to continue to develop ourselves, explore the world, gain knowledge, and create is great. The goal however has different paths, from genetic manipulation to body cyborgization. Some speak of mind uploading but who knows if that’s possible, as mind transfer implies dualism.

Is one preferable over another?

And what is your opinion on the potential of our unstable interconnected world to negatively impact our potential for progress from things like ecological collapse, global warming, etc.? I feel like it’s a race between disaster and scientific progress, can we out run chaos? Or is this a false dichotomy, maybe the future is a world of suffering and a few individuals have military grade cyborg tech.

Aubrey de Grey:

I don’t think mind transfer necessarily implies dualism, and I’m all for exploring all options.

I am quite sure we can outrun chaos.

__________________________

Question:

I’ve been learning more and more lately about the work that you do in the fight to end aging, and fully believe that it is both possible and just over the horizon. How can the general public get involved in the fight other than donating?

Aubrey de Grey:

Money is the bottleneck, I’m afraid, so the next best thing to donating is getting others to donate.

__________________________

Question:

We really appreciate all your work. Some people have expressed concerns that these anti-aging techniques and treatments won’t be available to everyone, but only to the extremely wealthy. Are there strategies to prevent this?

Aubrey de Grey:

Yes – they are called elections. Those in power want to stay there.•


Like most Atlantic readers, I go to the site for the nonstop Shell ads but stay for the articles. 

Jerry Kaplan, author of Humans Need Not Apply, has written a piece for the publication which argues that women will fare much better than men if technological unemployment becomes widespread and entrenched, since the gender biases among jobs and careers favor them. I half agree with him.

Take, for instance, his argument that autonomous cars will decimate America’s three million truck drivers (overwhelmingly men) but not disrupt the nation’s three million secretaries (overwhelmingly women). That’s not exactly right. The trucking industry, when you account for support work, is estimated to provide eight million jobs, including secretarial positions. Truckers spend cash at diners and coffee shops and such, providing jobs that are still more often filled by women. And just because autonomous trucks won’t eliminate secretarial positions, that doesn’t mean other technologies won’t. The effort to displace office-support staff has been a serious goal for at least four decades, and the technology is probably ready to do so now.

This, of course, also doesn’t account for the many women who’ve entered into white-collar professions long dominated by men, many of which are under threat. But I think Kaplan is correct in saying that the middle-class American male is a particularly endangered species if this new reality takes hold, and there won’t likely be any organic solution coming from within our current economic arrangement.

Kaplan’s opening:

Many economists and technologists believe the world is on the brink of a new industrial revolution, in which advances in the field of artificial intelligence will obsolete human labor at an unforgiving pace. Two Oxford researchers recently analyzed the skills required for more than 700 different occupations to determine how many of them would be susceptible to automation in the near future, and the news was not good: They concluded that machines are likely to take over 47 percent of today’s jobs within a few decades.

This is a dire prediction, but one whose consequences will not fall upon society evenly. A close look at the data reveals a surprising pattern: The jobs performed primarily by women are relatively safe, while those typically performed by men are at risk.

It should come as no surprise that despite progress on equality in the labor force, many common professions exhibit a high degree of gender bias. For instance, of the 3 million truck drivers in the U.S., more than 95 percent are men; of the nearly 3 million secretaries and administrative assistants, more than 95 percent are women. Autonomous vehicles are a not-too-distant possibility, and when they arrive, those drivers’ jobs will evaporate; office-support workers suffer no such imminent threat.•

 


In a tweet she published in the uproar after Cecil the lion’s killing, Roxane Gay delivered a line as funny and heartbreaking as anything George Carlin or Richard Pryor could have conjured. It was this:

I’m personally going to start wearing a lion costume when I leave my house so if I get shot, people will care.

It speaks brilliantly to racial injustice and the unequal way our empathy is aroused. Later on, in a separate tweet, the critic said something much less true while defending that great line:

Speciesism is not a thing.

Oh, it is a thing. We’ve based so much of our world on that very thing, and all of us, even those who’ve tried to be somewhat kinder, have benefited from this arrangement.

In a winding Foreign Affairs piece that traces the history of the long struggle against animal cruelty, Humane Society CEO Wayne Pacelle charts the movement’s uneven but significant progress and explains how it, having coalesced around a book written 40 years ago by moral philosopher Peter Singer, gained steam after shifting tactics, replacing moralizing with legislative efforts. An excerpt:

There was forward progress, many setbacks through the decades, and a wave of lawmaking in the early 1970s, but the biggest catalyst for change came with the publication of Peter Singer’s book Animal Liberation in 1975.

Singer’s book spurred advocates to form hundreds more local and national animal protection groups, including People for the Ethical Treatment of Animals in 1980. I was part of that wave, forming an animal protection group as an undergraduate at Yale University in 1985, protesting cruelty, screening films, and generally trying to shine a light on large-scale systemic abuses that few wanted to acknowledge. This grass-roots activism pushed established groups to step up their calls for reform and adopt more campaign-oriented tactics. Nevertheless, throughout the 1980s, the animal protection movement was still fundamentally about protest. The issues that animal advocates raised were unfamiliar and challenging, and their demands were mainly for people to reform their lifestyles. Asking someone to stop eating meat or to buy products not tested on animals was a hard sell, because people don’t like to change their routines, and because practical, affordable, and easily available alternatives were scarce.

It was not until the 1990s that the animal protection movement adopted a legislative strategy and became more widely understood and embraced. A few groups had been doing the political spadework needed to secure meaningful legislative reforms, focusing mostly on the rescue and sheltering of animals. Then organizations such as the HSUS and the Fund for Animals started introducing ballot measures to establish protections, raise awareness, and demonstrate popular support for reform. The ball got rolling with a successful initiative in 1990 to outlaw the trophy hunting of mountain lions in California, which was followed by the 1992 vote in Colorado to protect bears. Other states followed suit with measures to outlaw cockfighting, dove hunting, greyhound racing, captive hunts, the use of steel-jawed traps for killing fur-bearing mammals, and the intensive confinement of animals on factory farms. Today, activists are working to address the inhumane slaughter of chickens and turkeys, the captive display of marine mammals, the hunting of captive wildlife, and the finning of sharks for food. The United States alone now counts more than 20,000 animal protection groups, with perhaps half of them formed in just the last decade. The two largest groups, the HSUS and the ASPCA, together raise and spend nearly $400 million a year and have assets approaching $500 million.•


Even if it’s difficult to believe now, imagine someone as recently as 2000 suggesting that cars powered by internal combustion engines would be out of California showrooms by 2030 and off the state’s highways by 2050, that they’d be replaced by zero-emission vehicles, one-hundred percent of them. Even think about someone hatching that plan ten years ago, when the electric car was all but considered killed. The seemingly impossible dream looks likely to become a reality thanks to great leaps in technology running headlong into the unique politics of a state demanding change. Amusingly enough, it’s something Governor Ronald Reagan tried to jump-start in 1969.

At the heart of the push is Mary Nichols, Chair of the California Air Resources Board. The opening of John Lippert’s Bloomberg article about how she’s ending Big Auto’s business as usual in the Golden State:

Sergio Marchionne had a funny thing to say about the $32,500 battery-powered Fiat 500e that his company markets in California as “eco-chic.” “I hope you don’t buy it,” he told his audience at a think tank in Washington in May 2014. He said he loses $14,000 on every 500e he sells and only produces the cars because state rules re­quire it. Marchionne, who took over the bailed-out Chrysler in 2009 to form Fiat Chrysler Automobiles, warned that if all he could sell were electric vehicles, he would be right back looking for another govern­ment rescue.

So who’s forcing Marchionne and all the other major automakers to sell mostly money-losing electric vehicles? More than any other person, it’s Mary Nichols. She’s run the California Air Resources Board since 2007, championing the state’s zero-emission-vehicle quotas and backing Pres­ident Barack Obama’s national mandate to double average fuel economy to 55 miles per gallon by 2025. She was chairman of the state air regulator once before, a generation ago, and cleaning up the famously smoggy Los Angeles skies is just one accomplish­ment in a four-decade career.

Nichols really does intend to force au­tomakers to eventually sell nothing but electrics. In an interview in June at her agency’s heavy-duty-truck laboratory in downtown Los Angeles, it becomes clear that Nichols, at age 70, is pushing regula­tions today that could by midcentury all but banish the internal combustion engine from California’s famous highways. “If we’re going to get our transportation system off petroleum,” she says, “we’ve got to get people used to a zero-emissions world, not just a little-bit-better version of the world they have now.”•


In his Pacific Standard piece, “The Second Industrious Revolution,” Louis Hyman uses the phrase “precarious work” to describe the new Gig Economy, which would be destabilizing if widespread. With so much of Labor prone to automation and robotization and no suitable replacement work presently in view, it’s time for Americans to start envisioning solutions that don’t impede progress but instead create stability that allows us to thrive within the new realities. Like Andrew McAfee, Erik Brynjolfsson, Martin Ford and numerous others, Hyman believes basic income may become a necessity. An excerpt:

All sorts of exciting technologies will reinforce this industrious revolution, but it is not the technology that deserves our attention. It is the people whose lives will be turned upside down. Scholars and activists are concerned about this rise in precarious work, but instead of fighting the work, we need to understand how to empower workers to take advantage of this revolution—before it is too late.

The first industrious and industrial revolutions inaugurated several centuries of social dislocation, as well as unprecedented economic growth. Not until the mid-20th century, in the heyday of post-war capitalism, did we find a way to create economic security in a wage-work economy: a steady paycheck, health insurance, and home ownership. But, almost as soon as these happened, they began to go away.

We should use this coming crisis as an opportunity to return to our core American values. An older American Dream, the Jeffersonian vision of independent farmers, was promoted by the federal government in the 19th century through the Homestead Act, which provided farm land to our citizens. It was a way to push back against the rise of wage labor, which was seen as dependent and an antithesis to American values.

In today’s digital economy, we need a comparable act that empowers us to make our own way in business. While we often discuss the American Dream in terms of consumption, there is another American Dream that is more visceral: control over one’s work. The longing many Americans feel for owning their own business, the celebration of entrepreneurship in our culture, and our homesteading heritage are not just about money—or buying houses. Yet for several generations we have made it easy to own a home, but hard to own our own businesses.

Workers don’t need land, but they do need other kinds of support—health insurance, skilled education, maybe even a basic income—to take the risks upon which success depends.•


The Wall Street Journal’s Chris Mims, a perfect balance of enthusiast and skeptic, has written an excellent article about Virtual Reality, which might be mostly fun and games presently but was intended from the start for more serious things and may soon realize that potential. Mims believes this “transformative experience,” though still in its infancy, is on the immediate horizon, with education particularly in for a reimagining, the Louvre and the living room to become one.

The opening:

Picture this: You walk into a coffee shop or an office, and half the people around you have their eyes hidden behind opaque goggles. Their heads pivot from one made-up thing to the next as they peer into a world invisible to you. They’re in virtual reality.

This might sound like the far future, but I’m here to tell you that it could be our world within five years.

The reasons are simple: Many of us already have a VR-ready device in our pockets. All that’s left is a compelling reason to slip it into the appropriate holder, something that puts it inches from our face, like Google Cardboard or Samsung’s Gear VR.

Granted, VR on your smartphone isn’t as compelling as what you can achieve with dedicated, consumer-ready headsets from HTC, Facebook and Sony, which arrive late this year and early next. But the engineers I spoke to—the ones actually building this future—assured me it is only a matter of time before phones catch up.

Meanwhile, all the coverage of the birth of VR is about its applications for games and entertainment. This makes sense, because almost all the early demos are games. But VR is going to be much bigger, much more compelling, and much less trivial than what its earliest adopters have so far envisioned.•


I’ve yet to get my grubby, ink-stained hands on a copy of Stephen Petranek’s How We’ll Live on Mars, which argues that we’ll establish a human presence on our neighboring planet within the next 20 years. It’s not just theory but also reportage, featuring interviews with numerous figures at the heart of the new Space Race, a welter of public and private interests. An excerpt from the book via the TED site:

These first explorers, alone on a seemingly lifeless planet as much as 250 million miles away from home, represent the greatest achievement of human intelligence.

Anyone who watched Neil Armstrong set foot on the moon in 1969 can tell you that, for a moment, the Earth stood still. The wonder and awe of that achievement was so incomprehensible that some people still believe it was staged on a Hollywood set. When astronauts stepped onto the moon, people started saying, “If we can get to the moon, we can do anything.” They meant that we could do anything on or near Earth. Getting to Mars will have an entirely different meaning: If we can get to Mars, we can go anywhere.

The achievement will make dreamy science fiction like Star Wars and Star Trek begin to look real. It will make the moons of Saturn and Jupiter seem like reasonable places to explore. It will, for better or worse, create a wave of fortune seekers to rival those of the California gold rush. Most important, it will expand our vision past the bounds of Earth’s gravity. When the first humans set foot on Mars, the moment will be more significant in terms of technology, philosophy, history, and exploration than any that have come before it. We will no longer be a one-planet species.

These explorers are the beginning of an ambitious plan not just to visit Mars and establish a settlement but to reengineer, or terraform, the planet — to make its thin atmosphere of carbon dioxide rich enough in oxygen for humans to breathe, to raise its temperature from an average of –81 degrees Fahrenheit to a more tolerable 20 degrees, to fill its dry stream beds and empty lakes with water again, and to plant foliage that can flourish in its temperate zone on a diet rich in CO2. These astronauts will set in motion a process that might not be complete for a thousand years but will result in a second home for humans, an outpost on the farthest frontier. Like many frontier outposts before it, this one may eventually rival the home planet in resources, standard of living and desirability.


Watson has a way with words and Siri sounds sexy, but Cyc is almost silent. Why so silent, Cyc?

Cycorp’s ambitious project to create the first true AI has been ongoing for 31 years, much of the time in seclusion. A 2014 Business Insider piece by Dylan Love marked the three-decade anniversary of the odd endeavor, summing up the not-so-modest goal this way: to “codify general human knowledge and common sense.” You know, that thing. Every robot and computer could then be fed the system to gain human-level understanding.

The path the company and its CEO Doug Lenat have chosen in pursuit of this goal is to painstakingly teach Cyc every grain of knowledge until the Sahara has been formed. Perhaps, however, it’s all a mirage. Because the work has been conducted largely in quarantine, there’s been little outside review of the “patient.” But even if this artificial-brain operation is a zero rather than a HAL 9000, a dream unfulfilled, it still says something fascinating about human beings.

An excerpt from “The Know-It-All Machine,” Clive Thompson’s really fun 2001 Lingua Franca cover story on the subject: 

SINCE THIS is 2001, [Doug] Lenat has spent the year fielding jokes about HAL 9000, the fiendishly intelligent computer in Arthur C. Clarke’s 2001: A Space Odyssey. On one occasion, when television reporters came to film Cyc, they expected to see a tall, looming structure. But because Cyc doesn’t look like much—it’s just a database of facts and a collection of supporting software that can fit on a laptop—they were more interested in the company’s air conditioner. “It’s big and has all these blinking lights,” Lenat says with a laugh. “Afterwards, we even put a sign on it saying, CYC 2001, BETTER THAN HAL 9000.”

But for all Lenat’s joking, HAL is essentially his starting point for describing the challenges facing the creation of commonsense AI. He points to the moment in the film 2001 when HAL is turned on—and its first statement is “Good morning, Dr. Chandra, this is HAL. I’m ready for my first lesson.”

The problem, Lenat explains, is that for a computer to formulate sentences, it can’t be starting to learn. It needs to already possess a huge corpus of basic, everyday knowledge. It needs to know what a morning is; that a morning might be good or bad; that doctors are typically greeted by title and surname; even that we greet anyone at all. “There is just tons of implied knowledge in those two sentences,” he says.

This is the obstacle to knowledge acquisition: Intelligence isn’t just about how well you can reason; it’s also related to what you already know. In fact, the two are interdependent. “The more you know, the more and faster you can learn,” Lenat argued in his 1989 book, Building Large Knowledge-Based Systems, a sort of midterm report on Cyc. Yet the dismal inverse is also true: “If you don’t know very much to begin with, then you can’t learn much right away, and what you do learn you probably won’t learn quickly.”

This fundamental constraint has been one of the most frustrating hindrances in the history of AI. In the 1950s and 1960s, AI experts doing work on neural networks hoped to build self-organizing programs that would start almost from scratch and eventually grow to learn generalized knowledge. But by the 1970s, most researchers had concluded that learning was a hopelessly difficult problem, and were beginning to give up on the dream of a truly human, HAL-like program. “A lot of people got very discouraged,” admits John McCarthy, a pioneer in early AI. “Many of them just gave up.”

Undeterred, Lenat spent eight years of Ph.D. work—and his first few years as a professor at Stanford in the late 1970s and early 1980s—trying to craft programs that would autonomously “discover” new mathematical concepts, among other things. Meanwhile, most of his colleagues turned their attention to creating limited, task-specific systems that were programmed to “know” everything that was relevant to, say, monitoring and regulating elevator movement. But even the best of these expert systems are prone to what AI theorists call “brittleness”—they fail if they encounter unexpected information. In one famous example, an expert system for handling car loans issued a loan to an eighteen-year-old who claimed that he’d had twenty years of job experience. The software hadn’t been specifically programmed to check for this type of discrepancy and didn’t have the common sense to notice it on its own. “People kept banging their heads against this same brick wall of not having this common sense,” Lenat says.

By 1983, however, Lenat had become convinced that commonsense AI was possible—but only if someone were willing to bite the bullet and codify all common knowledge by brute force: sitting down and writing it out, fact by fact by fact. After conferring with MIT’s AI maven Marvin Minsky and Apple Computer’s high-tech thinker Alan Kay, Lenat estimated the project would take tens of millions of dollars and twenty years to complete.

“All my life, basically,” he admits. He’d be middle-aged by the time he could even figure out if he was going to fail. He estimated he had only between a 10 and 20 percent chance of success. “It was just barely doable,” he says.

But that slim chance was enough to capture the imagination of Admiral Bobby Inman, a former director of the National Security Agency and head of the Microelectronics and Computer Technology Corporation (MCC), an early high-tech consortium. (Inman became a national figure in 1994 when he withdrew as Bill Clinton’s appointee for secretary of defense, alleging a media conspiracy against him.) Inman invited Lenat to work at MCC and develop commonsense AI for the private sector. For Lenat, who had just divorced and whose tenure decision at Stanford had been postponed for a year, the offer was very appealing. He moved immediately to MCC in Austin, Texas, and Cyc was born.•


David McCullough’s latest, The Wright Brothers, details how two bicycle makers with no formal training in aviation became the first to touch the sky. In what might be James Salter’s final piece of journalism, the NYRB has posthumously published the late novelist and journalist’s graceful critique of the new book. Probably best known for his acclaimed fiction, Salter was also a reporter for People magazine in the ’70s, profiling other writers, Vladimir Nabokov and Graham Greene among them. Here he focuses on the recurring theme of the brothers’ distance from the world in everything from their family life to the relative isolation of Kitty Hawk.

An excerpt about the very origins of the Wrights’ fever dream:

Together they opened a bicycle business in 1893, selling and repairing bicycles. It was soon a success, and they were able to move to a corner building where they had two floors, the upper one for the manufacturing of their own line of bicycles. Then late in the summer of 1896 Orville fell seriously ill with typhoid fever. His father was away at the time, and he lay for days in a delirium while Wilbur and Katharine nursed him. During the convalescence Wilbur read aloud to his brother about Otto Lilienthal, a famous German glider enthusiast who had just been killed in an accident.

Lilienthal was a German mining engineer who, starting with only a pair of birdlike wings, designed and flew a series of gliders—eighteen in all—and made more than two thousand flights in them to become the first true aviator. He held on to a connecting bar with his legs dangling free so they could be used in running or jumping and also in the air for balance. He took off by jumping from a building or escarpment or running down a man-made forty-five-foot hill, and he wrote ecstatically of the sensation of flying. Articles and photographs of him in the air were published widely. Icarus-like he fell fifty-five feet and was fatally injured, not when his wings fell off but when a gust of wind tilted him upward so that his glider stalled. Opfer müssen gebracht werden were his final words, “sacrifices must be made.”

Reading about Lilienthal aroused a deep and long-held interest in Wilbur that his brother, when he had recovered, shared. They began to read intensively about birds and flying.•


We’ll likely be richer and healthier in the long run because of the Digital Revolution, but before the abundance, there will probably be turbulence.

A major reorganization of Labor among hundreds of millions promises to be bumpy, a situation requiring deft political solutions in a time not known for them. It’s great if Weak AI can handle the rote work and free our hands, but what will we do with them then? And how will we balance a free-market society that’s also a highly automated one?

In a Washington Post piece, Matt McFarland wisely assesses the positives and negatives of the new order. Two excerpts follow.

_______________________

Just as the agrarian and industrial revolutions made us more efficient and created more value, it follows that the digital revolution will do the same.

[Geoff] Colvin believes as the digital revolution wipes out jobs, new jobs will place a premium on our most human traits. These should be more satisfying than being a cog on an assembly line.

“For a long period, really dating to the beginning of the Industrial Revolution, our jobs became doing machine-like work, that the machines of the age couldn’t do it. The most obvious example being in factories and assembly-line jobs,” Colvin told me. “We are finally achieving an era in which the machines actually can do the machine-like work. They leave us to do the in-person, face-to-face work.”

_______________________

If self-driving cars and automated drone delivery become a reality, what happens to every delivery driver, truck driver and cab driver? Swaths of the population won’t be able to be retrained with skills needed in the new economy. Inequality will rise.

“One way or another it’s going to be kind of brutal,” [Jerry] Kaplan said. “When you start talking about 30 percent of the U.S. population being on the edge of losing their jobs, it’s not going to be a pleasant life and you’re going to get this enormous disparity between the haves and the have nots.”•

 


The good and bad part of decentralization is the same: There is no center. That allows for all sorts of new possibilities, some of them good.

As I’ve argued before, the U.S. government, that reviled and feared thing, will have less and less ability to control it all, despite surveillance. You don’t have to be a paranoid Birther to see this new reality being born. Even the most suspicious among us may someday long for a strong federal presence.

Speaking of the center not holding: David Amsden’s excellent New York Times Magazine article “Who Runs the Streets of New Orleans?” looks at the privatization of some policing in the French Quarter, a remarkable square mile that’s been marred by mayhem since the destabilizing tragedy of Hurricane Katrina. In response, a single wealthy New Orleans citizen, Sidney Torres, who made his treasure hauling trash, entered into a tech-forward joint effort with the city to fight crime. It may ultimately make things safer, but, of course, there are many dangers in privatizing policing, in having an unelected individual with money dictate policy based on personal beliefs or even whims. There can be a mission creep that doesn’t just target criminals, but also the impoverished and minorities, creating a tale of two cities. While that may not sound too different than current public policing in America, at least elected officials have to answer to those issues.

An excerpt:

In the United States, private police officers currently outnumber their publicly funded counterparts by a ratio of roughly three to one. Whereas in past decades the distinction was often clear — the rent-a-cop vs. the real cop — today the boundary between the two has become ‘‘messy and complex,’’ according to a study last year by Harvard’s Kennedy School of Government. Torres’s task force is best understood in this context, one where the larger merging of private and public security has resulted in an extensive retooling of the nation’s policing as a whole. As municipal budgets have stagnated or plummeted, state and local governments have taken to outsourcing police work to the private sector, resulting in changes that have gone largely unnoticed by the public they’re tasked with protecting.

A recent report by the Justice Department, which has become one of the most prominent advocates of such collaborative efforts, identified 450 partnerships in the country between law enforcement and the private sector. Nationwide, there are now more than 1,200 ‘‘business improvement districts’’ in which businesses pay self-imposed taxes to fund improved services, including security. In many cases, officers covered by corporate entities have become indistinguishable from those paid for by taxpayers. Last year, Facebook entered into a three-year partnership with the Menlo Park, Calif., Police Department in which the social-media giant agreed to pay the $194,000 salary of a police officer whose job was going to be cut. One of the largest private security forces in the nation today is the University of Chicago Police, which has full jurisdiction over 65,000 residents, only 15,000 of whom are students. More than 100 public housing projects in Boston are patrolled by private security, including one company that has been authorized to arrest suspects under certain circumstances.•


In a great Matter piece about the nightmare of climate change, Margaret Atwood revisits a 2009 Die Zeit article she wrote about possible outcomes for a future in which the world is no longer based on oil: one of accommodation, one of ruin and another in which some states are more capable of managing a post-peak tomorrow than others, a planet still inhabited by haves and have-nots, though one rewritten according to new realities.

Atwood asks these questions, among others: “Can we change our energy system? Can we change it fast enough to avoid being destroyed by it?” Despite it all, the novelist holds out hope that we can master an “everything change,” as she terms it.

An excerpt:

Then there’s Picture Two. Suppose the future without oil arrives very quickly. Suppose a bad fairy waves his wand, and poof! Suddenly there’s no oil, anywhere, at all.

Everything would immediately come to a halt. No cars, no planes; a few trains still running on hydroelectric, and some bicycles, but that wouldn’t take very many people very far. Food would cease to flow into the cities, water would cease to flow out of the taps. Within hours, panic would set in.

The first result would be the disappearance of the word “we”: except in areas with exceptional organization and leadership, the word “I” would replace it, as the war of all against all sets in. There would be a run on the supermarkets, followed immediately by food riots and looting. There would also be a run on the banks — people would want their money out for black market purchasing, although all currencies would quickly lose value, replaced by bartering. In any case the banks would close: their electronic systems would shut down, and they’d run out of cash.

Having looted and hoarded some food and filled their bathtubs with water, people would hunker down in their houses, creeping out into the backyards if they dared because their toilets would no longer flush. The lights would go out. Communication systems would break down. What next? Open a can of dog food, eat it, then eat the dog, then wait for the authorities to restore order. But the authorities — lacking transport — would be unable to do this.•


Ted Cruz, who’s trailing my left testicle in the race for the GOP Presidential nomination (“Vote Ball ’16!”), must be flummoxed, feeling he should be the rightful leader of the Cliven Bundy wing of the Republican Party. Did he not engineer a shutdown of the entire government for no good reason? Hasn’t this Canadian immigrant shown adequate disdain for “foreigners”? Has he not tirelessly opposed gay marriage, even though he demands government otherwise not encroach on personal liberties? Has he not made every effort to dismantle Obamacare (when not busy signing up for it)? This man has bona fides.

Unfortunately for him and others, Donald Trump, a craps table with a combover, has won over the “crazies,” as John “Complete the danged fence!” McCain has called them. It’s difficult for President Trump to lose support because he doesn’t particularly stand for anything, apart from a vicious brand of entitlement stoked by prejudice. If you’re on board with that, mere facts won’t deter you.

From Megan Murphy at the Financial Times:

Absent a catastrophic implosion, Mr Trump has a lock on one of the coveted spots in the first primetime Republican debate on August 6. Given the sheer size of the party field, Fox News, the event’s host, has said only the top 10 candidates will appear on stage, as determined by an average of five as yet undisclosed national polls.

As lesser-known figures scramble to make the cut, top-tier contenders such as Mr Bush are grappling with how to avoid getting trumped by a man who is a master of publicity and self-promotion.

“Debates are still gladiatorial battles,” said Alex Castellanos, a veteran Republican strategist. “It is the coliseum, and we do it to see who emerges as the victor.”

A stage with Mr Trump on it creates a challenge for candidates who have so far chosen to focus mostly on their own messages as opposed to attacking a man who kicked off his campaign by labelling Mexican immigrants “rapists” and “criminals” and has since struck a chord with voters who fret the US is in decline.

“Imagine a NASCAR driver mentally preparing for a race knowing one of the drivers will be drunk. That’s what prepping for this debate is like.”•


There’s some question about how much futurists actually frame tomorrow and how much it reveals itself despite their input, but since we don’t quite know, we certainly need a strong representation of women and minorities in the mix, and we don’t have that. Maybe the underrepresented could suggest something other than jetpacks and trillionaires, which we don’t fucking need.

In her Atlantic piece “Why Aren’t There More Women Futurists?” Rose Eveleth points out that the media’s go-to talking heads in this unlicensed, nebulous discipline are the male figures who dominate science and tech. Their dreams of utopia are often homogenous, corporate and patriarchal. It’ll be difficult to diversify futurism without attacking society’s underlying sexism.

Eveleth’s opening:

In the future, everyone’s going to have a robot assistant. That’s the story, at least. And as part of that long-running narrative, Facebook just launched its virtual assistant. They’re calling it Moneypenny—the secretary from the James Bond films. Which means the symbol of our march forward, once again, ends up being a nod back. In this case, Moneypenny is a send-up to an age when Bond’s womanizing was a symbol of manliness and many women were, no matter what they wanted to be doing, secretaries.

Why can’t people imagine a future without falling into the sexist past? Why does the road ahead keep leading us back to a place that looks like the Tomorrowland of the 1950s? Well, when it comes to Moneypenny, here’s a relevant datapoint: More than two thirds of Facebook employees are men. That’s a ratio reflected among another key group: futurists.

Both the World Future Society and the Association of Professional Futurists are headed by women right now. And both of those women talked to me about their desire to bring more women to the field. Cindy Frewen, the head of the Association of Professional Futurists, estimates that about a third of their members are women. Amy Zalman, the CEO of the World Future Society, says that 23 percent of her group’s members identify as female. But most lists of “top futurists” perhaps include one female name. Often, that woman is no longer working in the field.•

 


Predictive medicine powered by Big Data can alert you if you’re unwittingly headed downhill–and we all are to varying degrees–but what if this information isn’t just between you and your doctor?

CVS is phasing in IBM’s Watson to track trends in customer wellness. Seems great, provided you aren’t required to surrender your “health score” to get certain jobs the way you now must sometimes submit to drug tests. What if you needed a certain number to get a position the way you’re required to have a good credit score to get a loan? Seems unlikely, but tools are only as wise as the people who govern them in any particular era. Like most of the new normal, this innovation has the potential for great good–and otherwise.

From Ariana Eunjung Cha at the Washington Post:

[CVS Chief Medical Officer Troyen A.] Brennan said he could imagine the creation of mobile apps that would integrate information from fitness trackers and allow Watson to identify when a person’s activity level drops substantially and flag that as an indicator of something else going on. Or perhaps act as a virtual adviser for pharmacy or clinic staff that could help them identify “early signals” for when interventions may not be working and additional measures should be considered.

“Basically, if you can identify places to intervene and intervene early, you help people be healthier and avoid costly outcomes,” he said.

He added that the key to making these types of systems work will be to open lines of communication between a pharmacist, clinic staff and a patient’s physician, and that technology can help facilitate this dialogue.•

 


In the latest excellent Sue Halpern NYRB piece, this one about Ashlee Vance’s Elon Musk bio, the critic characterizes the technologist as equal parts Iron Man and Tin Man, a person of otherworldly accomplishment who lacks a heart, his globe-saving goals having seemingly liberated him from a sense of empathy.

As Halpern notes, even Steve Jobs, given to auto-hagiography of stunning proportion, had ambitions dwarfed by Musk’s, who aims to not just save the planet but to also take us to a new one, engaging in a Space Race to Mars with NASA (while simultaneously doing business with the agency). The founder of Space X, Tesla, etc., may be parasitic on existing technologies, but he’s intent on revitalizing, not damaging, his hosts, doing so by bending giant corporations, entire industries and even governments to meet his will. An excerpt:

Two years after the creation of SpaceX, President George W. Bush announced an ambitious plan for manned space exploration called the Vision for Space Exploration. Three years later, NASA chief Michael Griffin suggested that the space agency could have a Mars mission off the ground in thirty years. (Just a few weeks ago, six NASA scientists emerged from an eight-month stint in a thirty-six-foot isolation dome on the side of Mauna Loa meant to mimic conditions on Mars.) Musk, ever the competitor, says he will get people to Mars by 2026. The race is on.

How are those Mars colonizers going to communicate with friends and family back on earth? Musk is working on that. He has applied to the Federal Communications Commission for permission to test a satellite-beamed Internet service that, he says, “would be like rebuilding the Internet in space.” The system would consist of four thousand small, low-orbiting satellites that would ring the earth, handing off services as they traveled through space. Though satellite Internet has been tried before, Musk thinks that his system, relying as it does on SpaceX’s own rockets and relatively inexpensive and small satellites, might actually work. Google and Fidelity apparently think so too. They recently invested $1 billion in SpaceX, in part, according to The Washington Post, to support Musk’s satellite Internet project.

While SpaceX’s four thousand circling satellites have the potential to create a whole new meaning for the World Wide Web, since they will beam down the Internet to every corner of the earth, the system holds additional interest for Musk. “Mars is going to need a global communications system, too,” he apparently told a group of engineers he was hoping to recruit at an event last January in Redmond, Washington. “A lot of what we do developing Earth-based communications can be leveraged for Mars as well, as crazy as that may sound.”


In a Good magazine piece, Jordan E. Rosenfeld argues that some Americans already born may live to 150 years old. He also tries to conjure economic solutions should that wonderful, challenging thing come to pass. 

There really are no answers, however, to supporting a population that gray, at least not according to current standards. Of course, it’s always questionable to apply modern arrangements to a future scenario, as if everything will remain the same except for one significant thing. That’s a sure way to make bad prognostications.

Perhaps if aging increases radically, we’ll also see by then mass automation and 3D printing becoming cheap and ubiquitous, leading to unprecedented abundance. Or maybe not. But Labor would definitely have to change significantly if the average person gets 15 decades, and it will be dramatically altered in the coming years even if we don’t.

From Rosenfeld:

The first humans expected to live to age 150 are already alive, according to experts on aging and longevity. …

Astonishing or not, longer life will force people to rethink how (and how long) they work, and focus more on increasing the quality of these longer lives rather than rushing to retirement in their relatively spry 60s.

“We are at this huge historical event where people are living longer than they have ever lived, and our lifespans have practically doubled,” says Tamara Sims, a research psychologist at Stanford’s Lifespan Development Lab. “My mentor Laura Carstensen talks about redesigning the model and expanding our definition of middle age. It requires a cultural change, no easy task.”

One such way to redesign the model, Sims suggests, is rather than working furiously until the “magic age” of 62.5 (the earliest you can access social security benefits without penalties), people could “borrow time from their golden years.” This means people would work less in the early years—maybe part time—to raise families, pursue creative goals, and stay healthy, with the awareness that they’ll work longer than their parents and grandparents.•

 


Should I say a short story about climate-change apocalypse is fun? Choire Sicha’s brief, new Matter fiction, “Table of Contents,” certainly is, though it’s suitably sobering as well. The author imagines a scenario in which the seas have risen in a bad mood, and the narrator tries to aid the survivors by printing key entries from our modern Library of Alexandria, Wikipedia, before the plug is pulled. The opening:

I don’t know which will last longer, the paper or the ink. Eventually the paper will burn or the ink will fade, so read this all as fast as you can.

But of Wikipedia’s five-million articles, these 40,000 seemed to be the most super-important.

They’re crammed in these eight plastic-bagged boxes, because I printed it all single-side. That way you can make notes on the back! For instance, definitely keep track of who has babies with whom. (See the page for Incest, then check Consanguinity.) I put in some Bics, they should last a few… years? No idea.

After that, you can look up the Pen page.

I also put in Pen (Enclosure) in case you domesticate animals later.

In any event, please do not leave the entirety of portable human knowledge out in the rain.•


Matthew Hahn interviewed Hunter S. Thompson for The Atlantic in 1997, discussing the impact of the Internet on journalism and culture, among other matters. Thompson didn’t fully grasp that the Internet was going to become the preeminent medium, thinking it merely a consolation prize for the masses, but he was particularly prescient about the ego-feeding nature of the newly decentralized landscape. An excerpt:

Matthew Hahn:

The Internet has been touted as a new mode of journalism — some even go so far as to say it might democratize journalism. Do you see a future for the Internet as a journalistic medium?

Hunter S. Thompson:

Well, I don’t know. There is a line somewhere between democratizing journalism and every man a journalist. You can’t really believe what you read in the papers anyway, but there is at least some spectrum of reliability. Maybe it’s becoming like the TV talk shows or the tabloids where anything’s acceptable as long as it’s interesting.

I believe that the major operating ethic in American society right now, the most universal want and need is to be on TV. I’ve been on TV. I could be on TV all the time if I wanted to. But most people will never get on TV. It has to be a real breakthrough for them. And trouble is, people will do almost anything to get on it. You know, confess to crimes they haven’t committed. You don’t exist unless you’re on TV. Yeah, it’s a validation process. Faulkner said that American troops wrote ‘Kilroy was here’ on the walls of Europe in World War II in order to prove that somebody had been there — ‘I was here’ — and that the whole history of man is just an effort by people, writers, to just write your name on the great wall.

You can get on [the Internet] and all of a sudden you can write a story about me, or you can put it on top of my name. You can have your picture on there too. I don’t know the percentage of the Internet that’s valid, do you? Jesus, it’s scary. I don’t surf the Internet. I did for a while. I thought I’d have a little fun and learn something. I have an e-mail address. No one knows it. But I wouldn’t check it anyway, because it’s just too fucking much. You know, it’s the volume. The Internet is probably the first wave of people who have figured out a different way to catch up with TV — if you can’t be on TV, well at least you can reach 45 million people [on the Internet].•

Algorithms may be biased, but people certainly are. 

Financial-services companies are using non-traditional data cues to separate signal from noise in determining who should receive loans. I’d think in the short term such code, if written well, may be fairer. It certainly has the potential, though, to drift in the wrong direction over time. If our economic well-being is based on real-time judgements of our every step, then we could begin mimicking behaviors that corporations desire, and, no, corporations still aren’t people.

From Quentin Hardy at the New York Times:

Douglas Merrill, the founder and chief executive of ZestFinance, is a former Google executive whose company writes loans to subprime borrowers through nonstandard data signals.

One signal is whether someone has ever given up a prepaid wireless phone number. Where housing is often uncertain, those numbers are a more reliable way to find you than addresses; giving one up may indicate you are willing (or have been forced) to disappear from family or potential employers. That is a bad sign. …

Mr. Merrill, who also has a Ph.D. in psychology…thinks that data-driven analysis of personality is ultimately fairer than standard measures.

“We’re always judging people in all sorts of ways, but without data we do it with a selection bias,” he said. “We base it on stuff we know about people, but that usually means favoring people who are most like ourselves.” Familiarity is a crude form of risk management, since we know what to expect. But that doesn’t make it fair.

Character (though it is usually called something more neutral-sounding) is now judged by many other algorithms. Workday, a company offering cloud-based personnel software, has released a product that looks at 45 employee performance factors, including how long a person has held a position and how well the person has done. It predicts whether a person is likely to quit and suggests appropriate things, like a new job or a transfer, that could make this kind of person stay.


The excellent Wesley Morris writes for Grantland about the new Brando documentary, Listen to Me Marlon, which uses a digital version of the actor’s head–a decapitation of sorts, as the writer notes–an apt metaphor for an actor who spent his later years trying to tear his flesh from fame and the burden of his own talent–a self-induced sparagmos in Greek-tragedy terms, and one that seemed to rob him of his sanity.

Brando created his 3-D doppelganger because he dreamed of completely detaching himself from his work. He was often barely there in his later performances, even great ones–reading cue cards from Robert Duvall’s chest in The Godfather, clearly showing up solely for the paycheck in Superman. As Morris notes, the performer was making a mockery of the process and himself. Was that because his excellence hadn’t made him happy? Or was he a deconstructionist child, breaking to pieces a formerly favorite toy to understand what it had been? Maybe both.

Morris’ opening:

Maybe you’ve already heard, but in the future, actors will all just be holograms that directors will use as they see fit. That’s what Marlon Brando thought, anyway. In the 1980s, he went ahead and made a digital version of his face and head at a place called Cyberware. At the time, it was a state-of-the-art rendering. That 3-D head haunts Listen to Me Marlon, a documentary by Stevan Riley that opens Wednesday in New York. The film is guided by Brando’s ruminative regret — about his fame, his talent, his worth as a father, about a life he felt he wasted. It combines news and on-set footage with material from Brando’s private archive, including the many hours of audio recordings Brando made before he died in 2004. The recordings were attempts at therapy. More than once the movie cuts to the spinning gears of cassette tapes with titles like “Self-Hypnosis #7” and so on.

Listen to Me’s wacky, spiritual power seems to emanate from that floating, rotating, mathematical arrangement of digital lasers that form Brando’s visage, which an effects team has re-created from the Cyberware scans. It’s a ghostly effect, intentionally incomplete — dated but hypnotically so.•


The world is richer and smarter and healthier than ever, which is great, except that many of the forces that enabled these wonderful advances may also bring about the end of the species or at least cause an unprecedented die-off and severely diminish life for the “lucky” survivors. That’s the catch. 

In an Economist essay, Christoph Rheinberger and Nicolas Treich write of the ineffectiveness of traditional tools in assessing the cost of a potential climate-change disaster, which hampers attempts to mitigate risks, the unimaginable being incalculable. An excerpt:

Interestingly, the Pope’s letter recognises that “decisions must be made based on a comparison of the risks and benefits foreseen for the various possible alternatives”.

However, estimating these benefits means that we need to determine the value of a reduction in preventing a possible future catastrophic risk. This is a thorny task. Martin Weitzman, an economist at Harvard University, argues that the expected loss to society because of catastrophic climate change is so large that it cannot be reliably estimated. A cost-benefit analysis—economists’ standard tool for assessing policies—cannot be applied here as reducing an infinite loss is infinitely profitable. Other economists, including Kenneth Arrow of Stanford University and William Nordhaus of Yale University, have examined the technical limits of Mr Weitzman’s argument. As the interpretation of infinity in economic climate models is essentially a debate about how to deal with the threat of extinction, Mr Weitzman’s argument depends heavily on a judgement about the value of life.

Economists estimate this value based on people’s personal choices: we purchase bicycle helmets, pay more for a safer car, and receive compensation for risky occupations. The observed trade-offs between safety and money tell us about society’s willingness to pay for a reduction in mortality risk. Hundreds of studies indicate that people in developed countries are collectively willing to pay a few million dollars to avoid an additional statistical death. For example, America’s Environmental Protection Agency recommends using a value of around $8m per fatality avoided. Similar values are used to evaluate vaccination programmes and prevention of traffic accidents or airborne diseases.

Mr Posner multiplies the value of life by an estimate of Earth’s future population and obtains an illustrative figure of $336m billion as the cost of human extinction. Nick Bostrom, a philosopher at Oxford University, argues that this approach ignores the value of life of unborn generations and that the tentative figure should be much larger—perhaps infinitely so.•

 


Even many economists who’ve made their bones doing macro often encourage the next generation to do micro (while continuing to do macro themselves), but something tells me the attraction of being a “big-picture thinker,” no matter how fraught that is, won’t simply dissolve. 

In his Foreign Policy piece, “Requiem for the Macrosaurus,” David Rothkopf argues the opposite, believing that macroeconomics is entering into obsolescence, that the sweep of history can’t be framed in blunt terms as it’s occurring. He feels now is the moment to move beyond this “medieval” state, with real-time Big Data leading the way. Rothkopf’s contention would certainly explain why traditional measurements seem inept at understanding the new normal of the Digital Age–why nobody knows what’s happening. An excerpt:

Being wrong has long been a special curse of economists. You might not think this would be the case in a so-called “science.” But, of course, all sciences struggle in those early years before scientists have enough data to support theories that can reflect and predict what actually happens in nature. Scientists from Galileo to Einstein have offered great discoveries but, due to the limits of their age, have labored under gross misconceptions. And in economics we are hardly in the era of Galileo quite yet. It is more like we are somewhere in the Middle Ages, where, based on some careful observation of the universe and a really inadequate view of the scope and nature of that universe, we have produced proto-science—also known today as crackpottery. (See long-standing views that the Earth was the center of the solar system or the belief that bleeding patients would cure them by ridding them of their “bad humors.”)

Modern economic approaches, theories, and techniques, the ones that policymakers fret over and to which newspapers devote barrels of ink, will someday be seen as similarly primitive. For example, economic policymakers regularly use gross estimates of national and international economic performances—largely aggregated measures based on data and models that are somewhere between profoundly flawed and crazy wrong—to assess society’s economic health, before determining whether to bleed the economic body politic by reducing the money supply or to warm it up by pumping new money into its system. Between these steps and regulating just how much the government spends and takes in taxes, we have just run through most of the commonly utilized and discussed economic policy tools—the big blunt instruments of macroeconomics.

I remember that when I was in government, those of us who dealt with trade policy or commercial issues were seen as pipsqueaks in the economic scheme of things by all the macrosauruses beneath whose feet the earth trembled, whose pronouncements echoed within the canyons of financial capitals, and who felt everything we and anyone else did was playing at the margins.

But think of the data on which those decisions were based. GDP, as it is calculated today, has roughly the same relationship to the size of the economy as estimates of the number of angels that can dance on the head of a pin do to the size of heaven. It misses vast amounts of economic activity and counts some things as value creation that aren’t at all.•


I argued yesterday that Indian sprinter Dutee Chand should be able to compete against other women, despite a high testosterone level, because all elite athletes, not just her, have considerable natural advantages of one kind or another. (And we’re not even sure that a high T-level is an athletic edge.) Some basketball players have longer wingspans, some swimmers superior lung capacity. No one penalizes them. I wonder if the protest against Chand is caused by ignorance of biology or if it’s provoked by an unwitting bigotry over a challenge posed to traditional sex roles.

Such boundaries aren’t always clear, especially in an age when sexual identity is in flux, but a total absence of them would result in a single competition for both sexes, something that might be devastating to women in sports like basketball, where size matters greatly.

In a really smart New York Times piece, Juliet Macur takes a deeper look at the complex issue. She doesn’t believe there’s an easy solution, but that all roads forward should be crossed delicately, with respect for the athletes. An excerpt:

The arbitration panel in the Chand case is at least trying to inch closer to a solution. But as [Dr. Eric] Vilain suggested, there might be no solution. 

He believes that track’s governing body won’t be able to prove that women with hyperandrogenism have a great advantage over other women because it is impossible to determine that high testosterone equals a big advantage. Too many other factors go into an athlete’s success, like nutrition and training, he said. Yet the court seems to require a clear cause and effect to consider the I.A.A.F.’s rule fair.

“Looking at this does not compute for me,” said Vilain, who is an expert on the biology of intersexuality and helped formulate the International Olympic Committee’s rules on hyperandrogenism.

And that leads us back to something the arbitration panel said in its decision: “Nature is not neat.”

So the way sports officials handle this issue won’t be neat, either, because maybe it can’t be neat.•

