Excerpts


In the year 2100, how much of construction will remain in human hands? Precious little, I would think.

Pluses include much cheaper structures completed far more quickly, lower home costs and readily available personalized architecture. The chief minus is, of course, the number of jobs lost. If just the construction and trucking industries were lost to printing and automation, that would leave a huge gulf in the U.S. labor market.

From Andrew Torchia of Reuters:

DUBAI (Reuters) – Dubai said it would construct a small office building using a 3D printer for the first time, in a drive to develop technology that would cut costs and save time as the city grows.

3D printing, which uses a printer to make three-dimensional objects from a digital design, is taking off in manufacturing industries around the world but has so far been used little in construction.

Dubai’s one-storey prototype building, with about 2,000 square feet (185 square meters) of floor space, will be printed layer-by-layer using a 20-foot tall printer, Mohamed Al Gergawi, the United Arab Emirates Minister of Cabinet Affairs, said on Tuesday.

It would then be assembled on site within a few weeks. Interior furniture and structural components would also be built through 3D printing with reinforced concrete, gypsum reinforced with glass fiber, and plastic.•


I’ve read some titles from the Financial Times “Summer Books 2015” list, including Yuval Noah Harari’s Sapiens, Evan Osnos’ Age of Ambition and Martin Ford’s Rise of the Robots, all of which are wonderful–in fact, Harari’s title is the best book I’ve read this year, period. Here are several more suggestions from FT which sound great:

Station Eleven (Emily St. John Mandel) is an apocalyptic novel about a world in which almost everyone has died in a flu pandemic, and clans roam the earth killing at random. It could hardly sound less promising. And yet Emily St John Mandel’s fourth novel is different partly because she skips over the apocalypse itself — all the action takes place just before or 20 years afterwards — and because it is less about the survival of the human race than the survival of Shakespeare. The book has been on literary shortlists and won prizes and been much praised for its big themes: culture, memory, loss. Yet it works just as well at a less lofty level, as a beautifully written, compulsive read.

A Kim Jong-Il Production (Paul Fischer) The story of how the late North Korean dictator kidnapped South Korean cinema’s golden couple, the director Shin Sang-ok and his actress wife Choi Eun-hee, and put them to work building a film industry in the North. At once a gripping personal narrative and an insight into the cruelty and madness of North Korea.

The Vital Question: Why is Life the Way It Is? (Nick Lane) Biochemist Lane offers a scintillating synthesis of a new theory of life, emphasising the interplay between energy and evolution. He shows how simple microbes, which monopolised Earth for the first 2bn years, took the momentous step towards becoming the “eukaryotic” cells that then evolved into animals, plants, fungi and protozoa.•


The biggest problem with the Huffington Post isn’t the incessant lewd and lurid clickbait used to lure eyeballs, but that the money derived from such garbage is used to support very, very little good journalism. The site has been touted as navigating the way forward for the news business, but it isn’t that. The company might as well be selling Ding Dongs or Whoppers. It’s a gigantic operation that comes to very little good and some bad (e.g., its early support of the anti-vaccination movement helped propel that lunacy). Ultimately, the site is its own strange island, with no ramifications beyond its borders for anyone who wants to do responsible journalism. It’s not a news business, really, just a business, and a dubious one.

Early on in “Arianna Huffington’s Improbable, Insatiable Content Machine,” David Segal’s knowing New York Times Magazine article, the founder says this about a new vertical: “Let’s start iterating…let’s not wait for the perfect product.” And that’s true of the site writ large: It’s just iteration, and there’s no reason to anticipate it becoming perfect or even just good. That wait is over.

From Segal:

When most sites were merely guessing about what would resonate with readers, The Huffington Post brought a radical data-driven methodology to its home page, automatically moving popular stories to more prominent spaces and A-B testing its headlines. The site’s editorial director, Danny Shea, demonstrated to me how this works a few months ago, opening an online dashboard and pulling up an article about General Motors. One headline was ‘‘How GM Silenced a Whistleblower.’’ Another read ‘‘How GM Bullied a Whistleblower.’’ The site had automatically shown different headlines to different readers and found that ‘‘Silence’’ was outperforming ‘‘Bully.’’ So ‘‘Silence’’ it would be. It’s this sort of obsessive data analysis that has helped web-headline writing become so viscerally effective.

Above all, from its founding in an era dominated by ‘‘web magazines’’ like Slate, The Huffington Post has demonstrated the value of quantity. Early in its history, the site increased its breadth on the cheap by hiring young writers to quickly summarize stories that had been reported by other publications, marking the birth of industrial aggregation.
 
Today, The Huffington Post employs an armada of young editors, writers and video producers: 850 in all, many toiling at an exhausting pace. It publishes 13 editions across the globe, including sites in India, Germany and Brazil. Its properties collectively push out about 1,900 posts per day. In 2013, Digiday estimated that BuzzFeed, by contrast, was putting out 373 posts per day, The Times 350 per day and Slate 60 per day. (At the time, The Huffington Post was publishing 1,200 posts per day.) Four more editions are in the works — The Huffington Post China among them — and a franchising model will soon take the brand to small and midsize markets, according to an internal memo Huffington sent in late May.

Throughout its history, the site’s scale has also depended on free labor. One of Huffington’s most important insights early on was that if you provide bloggers with a big enough stage, you don’t have to pay them.•
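To make the headline-testing mechanism Segal describes a bit more concrete, here is a toy sketch in Python. The two GM headlines come from the excerpt above; everything else (the even traffic split, the simulated click rates, the pick-the-higher-rate rule) is an illustrative assumption of mine, not a description of the Huffington Post’s actual system.

```python
import random
from collections import defaultdict

# The two candidate headlines from the GM example in Segal's article.
HEADLINES = ["How GM Silenced a Whistleblower", "How GM Bullied a Whistleblower"]

impressions = defaultdict(int)  # readers shown each headline
clicks = defaultdict(int)       # readers who clicked it

def serve_headline():
    """Randomly assign an incoming reader to one of the two variants."""
    headline = random.choice(HEADLINES)
    impressions[headline] += 1
    return headline

def current_winner():
    """Headline with the higher observed click-through rate so far."""
    return max(HEADLINES,
               key=lambda h: clicks[h] / impressions[h] if impressions[h] else 0.0)

# Simulated traffic: assume, purely for illustration, that "Silenced"
# truly converts at 6% and "Bullied" at 4%.
TRUE_CTR = {HEADLINES[0]: 0.06, HEADLINES[1]: 0.04}
for _ in range(20_000):
    shown = serve_headline()
    if random.random() < TRUE_CTR[shown]:
        clicks[shown] += 1

for h in HEADLINES:
    print(f"{h}: CTR {clicks[h] / impressions[h]:.2%} over {impressions[h]} impressions")
print("Winner so far:", current_winner())
```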

 


I certainly don’t know enough about Utrecht’s economy to say whether the Dutch city’s plan to provide universal basic income is a good one, or what the success or failure of the experiment would mean, if anything, beyond its borders, but it certainly will get rid of a lot of bureaucracy. In the U.S., measuring people’s needs across a multitude of programs carries heavy administrative costs.

From DutchNews.nl:

Utrecht city council is to begin experimenting with the idea of a basic income, replacing the current complicated system of taxes, social security benefits and top-up benefits.

City alderman Victor Everhardt says the aim is to see if the concept of a basic income works in practice. ‘Things can be simpler if we base the system on trust,’ he told website DeStadUtrecht.nl.

The experiment will start after the summer holidays and is being carried out together with researchers from Utrecht University.

In theory, a basic income consists of a flat income to cover living costs which, supporters say, will free up people to work more flexible hours, do volunteer work and study. Additional income is subject to income tax.

The Utrecht project will focus on people claiming welfare benefits. One group will continue under the present system of welfare plus supplementary benefits for housing and health insurance. A second group will get benefits based on a system of incentives and rewards and a third group will have a basic income with no extras.•


Not everything should be left to the algorithms, but the drawing of U.S. congressional districts is one task that cries out for such a technological shift. Gerrymandering has reached a ridiculous level when members of Congress, who collectively have a seven-percent approval rating, don’t have to worry about the security of their day jobs. And if you can’t throw the bums out, you become Bumtown.

A half-step in the right direction is the Supreme Court decision which supports independent redistricting. It will still be people, prone to prejudices, doing the job, but at least it doesn’t allow for an outright “land grab.” 

From Adam Liptak at the New York Times:

The case, Arizona State Legislature v. Arizona Independent Redistricting Commission, No. 13-1314, concerned an independent commission created by Arizona voters in 2000. About a dozen states have experimented with redistricting commissions that have varying degrees of independence from the state legislatures, which ordinarily draw election maps. Arizona’s commission is most similar to California’s.

The Arizona commission has five members, with two chosen by Republican lawmakers and two by Democratic lawmakers. The final member is chosen by the four others.

The Republican-led State Legislature sued, saying the voters did not have the authority to strip elected lawmakers of their power to draw district lines. They pointed to the elections clause of the federal Constitution, which says, “The times, places and manner of holding elections for senators and representatives shall be prescribed in each state by the legislature thereof.”

Justice Ginsburg wrote that the Constitution’s reference to “legislature” encompassed the people’s legislative power when acting through ballot initiatives. “The animating principle of our Constitution is that the people themselves are the originating source of all the powers of government,” she wrote.•

 


Ant colonies are analogous in many ways to human societies and computer systems, operating as cooperatives with the help of innate algorithms. In a Quanta Magazine Q&A, Emily Singer speaks to Stanford biologist Deborah Gordon about how these tiny builders can help us learn to better assimilate the Information Age’s flood of data. An excerpt:

Question:

How do ant colonies change over time?

Deborah Gordon:

I found that a harvester ant colony’s behavior changes as it gets older and larger. Some aspects of network behavior just depend on size. In harvester ants, individual worker ants (other than the queen) live only a year, so it’s not the ants that get older and wiser, it’s the colony. That’s a puzzle, and it got me thinking about interaction networks, because I was looking for something that the ants could do in the same way but would have a different outcome if there are more ants. For example, I’m an ant, and I follow a rule that says, if I meet another ant at a certain rate, I do x. In a large colony, I might meet more ants. The same rule might have a different outcome if the colony is bigger because the rate of interaction would change.

We’re surrounded by giant networks — the Internet, our brains — so that got me interested in other systems. How does the behavior of a network scale as it gets larger?

Question:

How does it scale?

Deborah Gordon:

Older ant colonies are much more stable than young ones. If you create a disturbance, such as making a mess for them to clean up — I put out little piles of toothpicks — the older colonies eventually ignore the mess and get back to foraging. I think that in these colonies, with large numbers of foragers, the processes that drive them to forage override the response to the mess.•
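Gordon’s point about a single rule playing out differently at different colony sizes is easy to see in a toy simulation. In the hypothetical sketch below, every ant follows the identical rule (“start foraging once you bump into other ants often enough”), yet only the larger colonies cross the threshold, simply because meetings become more frequent as the arena gets more crowded. The arena size, threshold and random-walk behavior are invented parameters for illustration, not anything drawn from her harvester-ant data.

```python
import random
from collections import Counter

def meeting_rate(n_ants, arena=50, steps=200, seed=1):
    """Average meetings per ant per step for n_ants random-walking on an
    arena x arena grid; a 'meeting' means sharing a cell with another ant."""
    rng = random.Random(seed)
    pos = [(rng.randrange(arena), rng.randrange(arena)) for _ in range(n_ants)]
    meetings = 0
    for _ in range(steps):
        pos = [((x + rng.choice((-1, 0, 1))) % arena,
                (y + rng.choice((-1, 0, 1))) % arena) for x, y in pos]
        crowding = Counter(pos)
        # each of the c ants sharing a cell meets the other c - 1
        meetings += sum(c * (c - 1) for c in crowding.values() if c > 1)
    return meetings / (n_ants * steps)

THRESHOLD = 0.1  # hypothetical rule: "forage once meetings per step exceed this"
for colony_size in (100, 400, 1600):
    rate = meeting_rate(colony_size)
    decision = "forage" if rate > THRESHOLD else "stay put"
    print(f"colony of {colony_size:4d}: {rate:.3f} meetings per ant per step -> {decision}")
```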


Did life on Earth have to turn out this way? Oh, I’d like to think it was all an accident.

In “Last Hominin Standing,” a bright and eloquent Aeon piece, Dan Falk weighs whether the semi-intelligent life we call Homo sapiens simply had to be or whether we’re the result of a series of lucky bounces in distant history. Did we depend entirely on a certain spiny eel-esque creature surviving extinction? Without a course-altering asteroid making an impression, would it still be a planet of dinosaurs? The late paleontologist Stephen Jay Gould believed our existence to be a contingent one, though he had no shortage of willing debaters on the topic. My favorite part of Falk’s essay is the passage on a theoretical world, free of Homo sapiens, that became dominated by Neanderthals. An excerpt:

If we re-played the tape of evolution, so to speak, would Homo sapiens – or something like it – arise once again, or was humanity’s emergence contingent on a highly improbable set of circumstances?  

At first glance, everything that’s happened during the 3.8 billion-year history of life on our planet seems to have depended quite critically on all that came before. And Homo sapiens arrived on the scene only 200,000 years ago. The world got along just fine without us for billions of years. Gould didn’t mention chaos theory in his book, but he described it perfectly: ‘Little quirks at the outset, occurring for no particular reason, unleash cascades of consequences that make a particular future seem inevitable in retrospect,’ he wrote. ‘But the slightest early nudge contacts a different groove, and history veers into another plausible channel, diverging continually from its original pathway.’

One of the first lucky breaks in our story occurred at the dawn of biological complexity, when unicellular life evolved into multicellular. Single-cell organisms appeared relatively early in Earth’s history, around a billion years after the planet itself formed, but multicellular life is a much more recent development, requiring a further 2.5 billion years. Perhaps this step was inevitable, especially if biological complexity increases over time – but does it? Evolution, we’re told, does not have a ‘direction’, and biologists balk at any mention of ‘progress’. (The most despised image of all is the ubiquitous monkey-to-man diagram found in older textbooks – and in newer ones too, if only because the authors feel the need to denounce it.) And yet, when we look at the fossil record, we do, in fact, see, on average, a gradual increase in complexity.

But a closer look takes out some of the mystery of this progression. As Gould pointed out, life had to begin simply – which means that ‘up’ was the only direction for it to go. And indeed, a recent experiment suggests that the transition from unicellular to multicellular life was, perhaps, less of a hurdle than previously imagined. In a lab at the University of Minnesota, the evolutionary microbiologist William Ratcliff and his colleagues watched a single-celled yeast (Saccharomyces cerevisiae) evolve into many-cell clusters in fewer than 60 days. The clusters even displayed some complex behaviours – including a kind of division of labour, with some cells dying so that others could grow and reproduce.

But even if evolution has a direction, happenstance can still intervene.•


While futzing around on Reddit, I came across a link to “Darwin Among the Machines,” a piece from Samuel Butler’s A First Year in Canterbury Settlement, which, amazingly, was written in 1863, when he was beginning to put together Erewhon. An excerpt:

The views of machinery which we are thus feebly indicating will suggest the solution of one of the greatest and most mysterious questions of the day. We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. Inferior in power, inferior in that moral quality of self-control, we shall look up to them as the acme of all that the best and wisest man can ever dare to aim at. No evil passions, no jealousy, no avarice, no impure desires will disturb the serene might of those glorious creatures. Sin, shame, and sorrow will have no place among them. Their minds will be in a state of perpetual calm, the contentment of a spirit that knows no wants, is disturbed by no regrets. Ambition will never torture them. Ingratitude will never cause them the uneasiness of a moment. The guilty conscience, the hope deferred, the pains of exile, the insolence of office, and the spurns that patient merit of the unworthy takes—these will be entirely unknown to them. If they want “feeding” (by the use of which very word we betray our recognition of them as living organism) they will be attended by patient slaves whose business and interest it will be to see that they shall want for nothing. If they are out of order they will be promptly attended to by physicians who are thoroughly acquainted with their constitutions; if they die, for even these glorious animals will not be exempt from that necessary and universal consummation, they will immediately enter into a new phase of existence, for what machine dies entirely in every part at one and the same instant?

We take it that when the state of things shall have arrived which we have been above attempting to describe, man will have become to the machine what the horse and the dog are to man. He will continue to exist, nay even to improve, and will be probably better off in his state of domestication under the beneficent rule of the machines than he is in his present wild state. We treat our horses, dogs, cattle, and sheep, on the whole, with great kindness; we give them whatever experience teaches us to be best for them, and there can be no doubt that our use of meat has added to the happiness of the lower animals far more than it has detracted from it; in like manner it is reasonable to suppose that the machines will treat us kindly, for their existence is as dependent upon ours as ours is upon the lower animals. They cannot kill us and eat us as we do sheep; they will not only require our services in the parturition of their young (which branch of their economy will remain always in our hands), but also in feeding them, in setting them right when they are sick, and burying their dead or working up their corpses into new machines. It is obvious that if all the animals in Great Britain save man alone were to die, and if at the same time all intercourse with foreign countries were by some sudden catastrophe to be rendered perfectly impossible, it is obvious that under such circumstances the loss of human life would be something fearful to contemplate—in like manner were mankind to cease, the machines would be as badly off or even worse. The fact is that our interests are inseparable from theirs, and theirs from ours. Each race is dependent upon the other for innumerable benefits, and, until the reproductive organs of the machines have been developed in a manner which we are hardly yet able to conceive, they are entirely dependent upon man for even the continuance of their species. It is true that these organs may be ultimately developed, inasmuch as man’s interest lies in that direction; there is nothing which our infatuated race would desire more than to see a fertile union between two steam engines; it is true that machinery is even at this present time employed in begetting machinery, in becoming the parent of machines often after its own kind, but the days of flirtation, courtship, and matrimony appear to be very remote, and indeed can hardly be realised by our feeble and imperfect imagination.

Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.•


Somebody able to read psychology and situations at will can do amazing things, but there aren’t enough such people to go around. Many of those leading governments, corporations, HR departments, etc., are clueless as fuck, and that has an unfortunate ripple effect on others.

Enter data analysis. Can it consistently–or at least more consistently than we can–get the answer right, identifying market inefficiencies and inequalities? In many cases, it couldn’t do much worse, especially when you factor in the unwitting biases within us. In “The Data or the Hunch,” Ian Leslie’s Economist Intelligent Life article, the writer uses the story of legendary record-industry figure John Hammond, who made informed bets on Billie and Basie and Bob without the aid of algorithms, to analyze Moneyball and human error. Could a computer have heard in Dylan’s voice what Hammond did? Will AI be better on average but prone to missing the black swan?

The old music industry turned many young acts into big stars. But it placed many, many more wagers on acts that didn’t sell enough records to pay back; William Goldman’s axiom, “Nobody knows anything,” applies to music as much as the movies. In the social-media era, big bets on untested talent are rarer. This is partly because there’s less money to spray around. But also because the record companies are using data to lower the risk.

This is the day of the analyst. In education, academics are working their way towards a reliable method of evaluating teachers, by running data on test scores of pupils, controlled for factors such as prior achievement and raw ability. The methodology is imperfect, but research suggests that it’s not as bad as just watching someone teach. A 2011 study led by Michael Strong at the University of California identified a group of teachers who had raised student achievement and a group who had not. They showed videos of the teachers’ lessons to observers and asked them to guess which were in which group. The judges tended to agree on who was effective and ineffective, but, 60% of the time, they were wrong. They would have been better off flipping a coin. This applies even to experts: the Gates Foundation funded a vast study of lesson observations, and found that the judgments of trained inspectors were highly inconsistent.

THE LAST STRONGHOLD of the hunch is the interview. Most employers and some universities use interviews when deciding whom to hire or admit. In a conventional, unstructured interview, the candidate spends half an hour or so in a conversation directed at the whim of the interviewer. If you’re the one deciding, this is a reassuring practice: you feel as if you get a richer impression of the person than from the bare facts on their résumé, and that this enables you to make a better decision. The first theory may be true; the second is not.

Decades of scientific evidence suggest that the interview is close to useless as a tool for predicting how someone will do a job. Study after study has found that organisations make better decisions when they go by objective data, like the candidate’s qualifications, track record and performance in tests. “The assumption is, ‘if I meet them, I’ll know’,” says Jason Dana, of Yale School of Management, one of many scholars who have looked into the interview’s effectiveness. “People are wildly over-confident in their ability to do this, from a short meeting.” When employers adopt a holistic approach, combining the data with hunches formed in interviews, they make worse decisions than they do going on facts alone.

The interview isn’t just unreliable, it is unjust, because it offers a back door for prejudice.•
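The “value-added” approach Leslie mentions (test scores controlled for prior achievement) is, at bottom, a regression with a teacher indicator in it. Below is a minimal sketch on simulated data; the pupil counts, score scale and the assumed five-point teacher effect are invented for illustration and have nothing to do with the Strong or Gates studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 pupils split between two teachers. Purely for illustration,
# assume teacher B lifts end-of-year scores by 5 points once prior
# achievement is held fixed.
n = 200
prior = rng.normal(70, 10, n)        # last year's test score
teacher_b = rng.integers(0, 2, n)    # 0 = teacher A, 1 = teacher B
score = 10 + 0.8 * prior + 5 * teacher_b + rng.normal(0, 8, n)

# Value-added-style regression: score ~ intercept + prior + teacher indicator.
# The coefficient on the indicator is the estimated teacher effect after
# controlling for where pupils started.
X = np.column_stack([np.ones(n), prior, teacher_b])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"estimated teacher-B effect: {coef[2]:.1f} points (simulated true effect: 5)")
```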


I guess the biggest difference between the Web 1.0 bubble and the one we may be experiencing now is that no one this time thinks the Internet will suffer long-term damage even if stocks should founder badly. That actually was a fear among some during the first stunning downturn, that it had all been a flash in the pan, that we’d mistaken a frenzy for a fundamental shift. It was silly, of course, even if the jolt to the big-picture economy was painful. As had been the case with earlier, non-virtual gold rushes which left new cities in their wake, the infrastructure built in support of the burgeoning media ensured something more permanent had been realized. 

In his WSJ piece “Why This Tech Bubble Is Less Scary,” Chris Mims explains why this “frothy” period won’t be similar to the last one. Of particular interest is his analysis of tech companies floating on private money and what a market correction might mean to them. The opening:

Is tech in a bubble? I think so. The signs are all around us. The good news is, it’s nothing like the last one. Plus, for reasons that go beyond the usual impossibilities of economic prognostication, no one can say for sure what’s going on. Many people seem to find this reassuring, but we would be wise to heed the lesson that a lack of transparency about the mechanics of a market rarely leads anywhere good.

But first, let’s define what kind of a bubble tech might be in. It isn’t like the bubble of 1997-2000, the Kraken of legend that came from the depths to wreak havoc on the whole of the U.S. economy. That was a genuine, old-fashioned stock market bubble, with money pouring into publicly held tech companies that couldn’t justify the investment.

If I’m right, and what we’re experiencing now is a kind of Bubble Jr., any correction will be less widespread.

In 2015’s less-terrifying sequel to 1999, everyone is to be commended for avoiding the worst excesses of the past, the empty vehicles for irrational exuberance like Pets.com.•

 


In 1979, Thomas G. Stockham Jr., an electrical engineer who’d provided expert testimony on the Watergate tapes, was preaching to the recording industry that digital was the future, which ultimately helped convince execs to replace LPs with CDs. He spoke to Pattie Reilly of People magazine about the coming revolution, though even he didn’t seem to fully grasp the ramifications. An excerpt:

The pioneer in America in the new field is an MIT-trained electrical engineer and computer scientist named Thomas G. Stockham Jr. “Digital recording isn’t a fad,” he insists. “It’s a whole new concept, like a new alphabet. It has stunning implications. Digitals will cause a music revolution.”

The 45-year-old Stockham is making news with his Soundstream, Inc. of Salt Lake City. But he was no stranger to headlines before this: In 1974, during the Watergate investigation, he was one of six experts who testified on the 18½-minute gap in the Nixon tapes. He and the others agreed that the segment of conversation between the ex-President and aide H. R. Haldeman had been erased from five to nine times, although they stopped short of saying that the so-called accidental erasure was apparently deliberate.

In 1975 Stockham left his professorship at the University of Utah to set up Soundstream, which built the first successful digital recorder in the U.S. the next year. Since then other companies, like 3M and Sony, have developed rival digital systems independently.

Today Soundstream is producing digitally recorded discs for 11 record companies, and the LPs are being sold at selected stores. Though the discs look the same as conventional ones, they can be identified by the word “digital” stamped on the jacket and by their premium prices, ranging from $11 to $18. …

Without doubt, digital recording enhances the reproduction of almost any orchestral performance and that of soloists with multidimensional voices. Another plus is that the new discs can be played on ordinary stereos, and the improvement is obvious even with a basic $400 system. While digitals will not necessarily make conventional records obsolete, the question is whether fussy listeners will want to play their old library after hearing digitals.”•


Thomas Piketty is a capitalist, which might surprise some. The French economist simply believes the system is a driving force of wealth inequality if left unfettered and must be constantly treated, like a patient prone to fever. In a Financial Times piece by Anne-Sylvaine Chassany, Piketty discusses the development of his ideas. An excerpt:

Piketty says his interest in inequality crystallised after the collapse of the Berlin Wall and the first Gulf war. He recalls visiting Moscow in 1991 and being struck by “the lines in front of shops”. He came back vaccinated against communism — “I believe in capitalism, private property, the market” — but also with a question central to his work: “How come those people had been so afraid of inequality and capitalism in the 19th and 20th century that they created such a monstrosity? How can we tackle inequality without repeating this disaster?”

The first Gulf war, he believed, demonstrated the cynicism of the west: “We are told constantly that states can’t do anything, that it’s impossible to regulate the Cayman Islands and the other tax havens because they are too powerful, and all of a sudden we send a million soldiers 10,000km from home to allow the emir of Kuwait to keep his oil.”

I am halfway through the now tepid bolognese when I ask him why his work had such an impact in the US without causing anything like such a stir in France at the time of its original publication. Piketty says he caught American attention in 2003 when, together with Emmanuel Saez, a fellow French economist who teaches at the University of California, he first compiled historical data on the US’s wealthiest people. In 2009, newly elected President Obama used the French economists’ graph that showed inequality was back to its 1929 peak. “We became the target of Republican think-tanks,” he recalls. The French version of the book acted as a teaser to those critics, he believes, helping propel it to the top of Amazon’s bestseller list for three weeks when it was released in English.

“The rise of the top 1 per cent is an American thing. It’s not by chance that Occupy Wall Street happened in Wall Street, and not in Brussels, Paris or Tokyo,” he says. “It’s different in Europe. Here, inequality takes the form of unemployment and public debt.”

Though Piketty concedes that the global wealth tax he recommends is a “utopian” dream, he also says a confiscatory tax rate of more than 80 per cent on earnings exceeding $1m would work.•


In 1976, Gail Jennes of People magazine conducted a Q&A with Michael L. Dertouzos, who was the Director of the Laboratory for Computer Science at MIT. He pretty much hit the bullseye on everything regarding the next four decades of computing, except for thinking Moore’s Law would reach endgame in the mid-’80s. An excerpt:

Question:

Will computers be widely used by the average person in coming years?

Michael L. Dertouzos:

We don’t see technical limitations in computer development until the mid-1980s. Until then, decreased cost will make computers smaller, cheaper and more accessible. In 10 or 15 years, one should cost about the same as a big color TV. This machine could become a playmate, testing your wits at chess or checkers. If a computer were hooked up to AP or UPI news-wires, it could be programmed to know that I’m interested in Greece, computers and music. Whenever it caught news items about these subjects, it would print them out on my console—so I would see only the things I wanted to see.

Question:

Will they transmit mail?

Michael L. Dertouzos:

We are already hooked into a network spanning the U.S. and part of Europe by which we send, collect and route messages easily. Although the transmission process is instant, you can let messages pile up until you turn on your computer and ask for your mail.

Question:

Do you foresee computers as a tool for the average child?

Michael L. Dertouzos:

It already is for some. When my 6-year-old son Leonidas visited MIT, he couldn’t understand why all the secretaries had “computers.” He’d seen my computers before he’d seen their typewriters.

Question:

Will the computer eventually be as common as the typewriter?

Michael L. Dertouzos:

Perhaps even more so. It may be hidden so you won’t even know you’re using it. Don’t be surprised if there is one in every telephone, taking over most of the dialing. If you want to call your friend Joe, you just dial “JOE.” The same machine could take messages, advise if they were of interest and then could ring you. In the future, I would imagine there could be computerized cooking machines. You put in a little card that says Chateaubriand and it cooks the ingredients not only according to the best French recipe, but also to your particular taste.

Question:

Will robots ever be heavily relied upon?

Michael L. Dertouzos:

Robots are already doing things for us—for example, accounting and assembling cars. Two-legged robotic bipeds are a romantic notion and actually pretty unstable. But computer-directed robot machines with wheels, for example, may eventually do the vacuum cleaning and mow the lawn.

Question:

How might computers aid us in an election year?

Michael L. Dertouzos:

Voters might quickly find out political candidates’ positions on the issues by consulting computers. Government would then be closer to the pulse of the governed. If we had access to a very intelligent computer, we could probe to find out if the guy is telling the truth by having them check for inconsistency—but that is way in the future.

Question:

Should everyone be required to take a computer course?

Michael L. Dertouzos:

I’d rather see people choose to do so. Latin, the lute and the piano used to be required as a part of a proper upbringing. Computer science will be thought of in the same way. If we can use the computer early in life, we can understand it so we won’t be hoodwinked into believing it can do the impossible. A big danger is deferring to computers out of ignorance.•


In a really fun, well-written 1987 New York Times Magazine article, Caryn James took the pulse of that decade’s New York literary life, a patient with an irregular heartbeat. In retrospect, most of the best writing of that era was done by others elsewhere.

There are just as many literary people in NYC (and everywhere else) now as during the time of the so-called “Brat Pack,” probably a lot more, but no one worries too much now about naming or glamorizing them. That’s probably for the best. The opening:

A writer’s life may be one of dreary solitude, but the Literary Life – ah, the Literary Life promises glamour, fame, a seat next to Hemingway as he scribbles immortal prose in a Paris cafe. The myth is so alluring it can survive even the most scaled-down atmosphere. On a typical New York night, for instance, David Leavitt, Meg Wolitzer and Gary Glickman – all novelists in their 20’s and the best of friends – are likely to be having dinner at their favorite restaurant, a dingy, hole in the wall in the West Village. ”Even as we sit there,” Glickman says, ”I sometimes wonder, ‘Is this it? Is it the Cafe de Flore?’ ” As Hemingway might have said, isn’t it pretty to think so?

A different twist on the literary life in 1980’s New York: clubs replacing cafes, and roaming bands of authors stalking the night streets from Area to Palladium to Nell’s. Jay McInerney’s best-selling 1984 novel, Bright Lights, Big City, with its aspiring-writer hero, created a hip New York where late-night clubs and cocaine blurs collide with literary ambition. Tama Janowitz’s Slaves of New York, published last year, carries on the image, in stories full of struggling artists, devious gallery owners, desperate hangers-on, all willfully imprisoned by their need to be trendy in the city. But these muddled, hard-partying characters – when would they ever find the time, or the clearheadedness, to write? – are fictional exaggerations.

In reality, for many young writers today, New York is a base where they can strive and grow until they become successful enough, or frustrated enough, to leave for a while – to spend the summer at a tranquil writers’ colony or to make money teaching for a year – always returning to replenish themselves in the literary waters, and hit some gossip-filled book parties to make contacts with editors, agents and publishers.

Just glance at the itineraries of some of New York’s hottest young writers. Jay McInerney, who is 33, spent half of last year in Ann Arbor, where his then-wife was finishing her Ph.D. Tama Janowitz, 30, has a fellowship at Princeton University this year. David Leavitt, the short-story writer whose first novel is The Lost Language of Cranes, lives on Long Island. Meg Wolitzer, author of Hidden Pictures, has been teaching upstate. Kathy Acker (whose latest novel is called Don Quixote), the determinedly punkish 38-year-old author identified with downtown Manhattan, has lived in London for the last two years. But New York has lost none of its cachet or importance for these writers; no matter where their legal residence, they seem to spend as much time in the city as out of it. They represent the city’s new literary life.

This is no longer the place where, as in years gone by, literary circles had real coherence, where the mention of the journal Partisan Review conjured up an image of like-minded intellectuals. In New York today young authors live in a swifter-than-sound atmosphere, full of energy, hype and distractions. The change reflects new realities in the city and in the publishing industry: higher rents and tougher urban living combined with pressure to bring out a book of fiction before the first blush of youth has passed. So aspiring authors find themselves on a harshly competitive fast track as soon as they are out of college or graduate writing programs – if not before. No wonder they have little time or taste for Bloomsbury-cozy salons or Hemingway-macho feuds.•


In a recent Telegraph essay, Sir Martin Rees took on what he realizes is almost a fool’s errand: predicting the future. He holds forth on bioengineering, Weak AI, Strong AI, etc. A passage about what he sees for us–them, really–in the far future:

Let me briefly deploy an astronomical perspective and speculate about the really far future – the post-human era. There are chemical and metabolic limits to the size and processing power of organic brains. Maybe humans are close to these limits already. But there are no such constraints on silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans. So, by any definition of “thinking”, the amount and intensity that’s done by organic human-type brains will, in the far future, be utterly swamped by the cerebrations of AI. Moreover, the Earth’s biosphere in which organic life has symbiotically evolved is not a constraint for advanced AI. Indeed, it is far from optimal – interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop insights as far beyond our imaginings as string theory is for a mouse.

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity – spanning tens of millennia at most – will be a brief precursor to the more powerful intellects of the inorganic post-human era. So, in the far future, it won’t be the minds of humans, but those of machines, that will most fully understand the cosmos – and it will be the actions of autonomous machines that will most drastically change our world, and perhaps what lies beyond.•


Bob Guccione Jr., son of a leathery beaver merchant and a visionary magazine editor like his late father, just did an AMA at Reddit about the 30th anniversary of Spin and life in the post-print digital world. A few exchanges follow.

_________________________

Question:

Hi Bob, Was it ever weird growing up in your home considering who your father was? I mean did a Penthouse Pet ever come to any of your birthday parties?

Also – I don’t know if you have any pull at Penthouse – but the Penthouse Comix publication was great and included many famous and great creators. Do you think there’d be any chance these stories could be reprinted?

Bob Guccione Jr.:

It wasn’t weird! And yes, Penthouse Pets did come to my birthday parties, but only when I was in my twenties!

The models in Penthouse were always off limits to my brothers and I (and, I assume, although I never thought about it before, my sisters…). That was smart of my father. By the time we’d worked out how to get around that, we were adults and had a better perspective of who these women were, and high respect for them, and our own lives. I was married at 24 to a woman who was a writer and worked in a store in London, England.

I have no pull at Penthouse (I don’t even know if it is still being published). But I agree with you about the comics! Some of them were brilliant, and some of the artists extraordinary.

_________________________

Question:

This may be random, but do you have a most memorable Beastie Boys story?

Bob Guccione Jr.

I do! When they were starting out they used to come up to our office and skateboard around the halls. I don’t think we had particularly good halls, I think we were just the only magazine that wouldn’t throw them out. Until we did! I eventually had to tell them they couldn’t come around QUITE as much, on the grounds that my staff would all stop working, hang out with them and think they had the best job in the world because, apparently, when Beastie Boys turned up, they didn’t have to work.

I liked them a lot, they were really good people. Very sincere, very gentle and smart. And when I ran into them periodically after they had become huge stars, they always made time to talk.

_________________________

Question:

What was it like meeting Nirvana for the first time?

Bob Guccione Jr.

You know, as an aside, I ALMOST asked the Dalai Lama when I interviewed him what he thought of Nirvana, but I chickened out!

I first met Kurt and Courtney at Nirvana’s manager Danny Goldberg’s house in LA. He was quiet, gentle, attentive, incredibly smart and you could feel very special. We hit it off right away and at the end of dinner, as we were all leaving, I told him how much I loved his music, and he said he always enjoyed reading my Topspin columns and had been reading them since he was 16. That was the first time it occurred to me that ARTISTS read me — I don’t know why it never occurred to me before, but it hadn’t.

I later met the whole band in Seattle and we hung out for a great evening that resulted in my sending Krist Novoselic to Croatia during the war in Yugoslavia, to cover the war for us. When Kurt found out, as Krist was leaving for the airport, he went apeshit and called me up from Brazil, where the band had just finished a tour, and screamed YOU’RE SENDING MY BASS PLAYER TO A FUCKING WAR ZONE! I said yes, we were, but I was sure he’d be alright… He was, and he did one of the best pieces we ever published (and which we’ll republish this year as part of the 30th anniversary).

_________________________

Question:

What’s your favorite/the most memorable story SPIN has ever done?

Bob Guccione Jr.

Wow, how long do you have? There are so many I love, so many I’m very proud of and a ton that are memorable.

I think I’m proudest of our coverage of AIDS actually, it changed the world’s perception of how much was actually, really known about the awful disease/collection of diseases. Our reporting cut through the (then) rampant hysteria and forced a lot of crap science and reporting into the light where it could be more soberly evaluated. We ended the death-bringing reliance on AZT for instance. Our reporting, over ten years, was often controversial and very often not generally agreed with, but as I always point out, we never once, in 10 years and 120 columns, had to print a correction. Our facts were airtight. Our opinions and conclusions were often debated and some still are, but we made people think differently, and most of the time NOT politically correctly. I have NO time for political correctness. It’s a cultural fraud. And AIDS science and media coverage was rife with it, which wasn’t doing anyone any good. Sober, real factual reporting did help people and I’ve met many over the years who have said reading SPIN saved their lives. One can barely hope to achieve more in one’s life than that.

I’m also proud of our Live Aid exposés that showed that the funding was going to buy weapons and fund deadly resettlement marches that killed 100,000 people. We reported that Geldof was repeatedly warned by the relief agencies in the field not to deal with the Ethiopian dictator Mengistu, but, drunk with his own glory, ignored everyone and gave all the money to this murderous thug. Our articles at first had us shunned by the music industry and the world’s media but I challenged hundreds of news organizations to report it for themselves, and when they did, they came to the same conclusion, and basically the resulting press halted the funding of this genocide. I’m very proud of that.

One of my other favorite stories was Tama Janowitz’s fictional accounting of giving Bruce Springsteen’s wife a lobotomy and replacing her, and Bruce doesn’t notice…. Look it up!•


Attending the recent O’Reilly Solid conference in San Francisco, Richard Waters of the Financial Times glimpsed the future of the Internet of Things, still at a larval stage but headed for a dramatic metamorphosis that will almost certainly happen, though no one knows exactly when. Gathering the information will be only half the battle; processing it intelligently is just as key. In his article, Waters focuses on the potential of ubiquitous connectedness but not the potential perils (privacy concerns, technological unemployment, etc.). An excerpt:

The world-changing applications made possible by the new technology platform cannot be imagined at the outset. …

The exhibits included a “pop-up factory” to make electronics on the fly and a part-3D printed car designed to be built in small local “microfactories”. Much of the discussion was of synthetic biology that will take manufacturing down to the microscopic level and merge the inorganic with the organic.

Behind the disruption lie three technologies that are on a collision course, according to Mickey McManus, a researcher at design software company Autodesk.

Extending internet connectivity to the physical world is only part of the story. A second seminal tech change will stem from the spread of artificial intelligence, which will make it easier to design and control complex ecosystems of objects, as well as put a higher level of intelligence into the individual “things” themselves. The third leg of the revolution, says Mr McManus, is digital manufacturing exemplified by 3D printing, which could present an alternative to some forms of mass-market production.

Taken together, he hints at the types of changes that could result: three students in a dorm room could start a car company; a distributed social network might replace a factory; or objects may disassemble and reassemble themselves as needs change.•

 


In a fascinating 1968 Mechanix Illustrated article, science-fiction writer James R. Berry predicted life forty years hence, prescient about communications devices, online shopping and driverless cars, though he was too aggressive in some other prognostications. The opening:

IT’S 8 a.m., Tuesday, Nov. 18, 2008, and you are headed for a business appointment 300 mi. away. You slide into your sleek, two-passenger air-cushion car, press a sequence of buttons and the national traffic computer notes your destination, figures out the current traffic situation and signals your car to slide out of the garage. Hands free, you sit back and begin to read the morning paper–which is flashed on a flat TV screen over the car’s dashboard. Tapping a button changes the page.

The car accelerates to 150 mph in the city’s suburbs, then hits 250 mph in less built-up areas, gliding over the smooth plastic road. You whizz past a string of cities, many of them covered by the new domes that keep them evenly climatized year round. Traffic is heavy, typically, but there’s no need to worry. The traffic computer, which feeds and receives signals to and from all cars in transit between cities, keeps vehicles at least 50 yds. apart. There hasn’t been an accident since the system was inaugurated. Suddenly your TV phone buzzes. A business associate wants a sketch of a new kind of impeller your firm is putting out for sports boats. You reach for your attache case and draw the diagram with a pencil-thin infrared flashlight on what looks like a TV screen lining the back of the case. The diagram is relayed to a similar screen in your associate’s office, 200 mi. away. He jabs a button and a fixed copy of the sketch rolls out of the device. He wishes you good luck at the coming meeting and signs off.

Ninety minutes after leaving your home, you slide beneath the dome of your destination city. Your car decelerates and heads for an outer-core office building where you’ll meet your colleagues. After you get out, the vehicle parks itself in a convenient municipal garage to await your return. Private cars are banned inside most city cores. Moving sidewalks and electrams carry the public from one location to another.•


At any moment in history we accept some things that are wrong and others that are even monstrous. But which ones are those in our current age, a time when technology is viewed as a totem?

In “Humanist Among the Machines,” an Aeon essay by Ian Beacock, the writer suggests we seek alternatives to the received wisdom of the Digital Age by taking a cue from twentieth-century historian Arnold Toynbee, who pushed back at the “mechanization” of Homo sapiens and its world in an earlier period. 

In one passage, Beacock writes that “As Toynbee recognised, scientific principles and technical innovations might help us build a better railway, a faster locomotive – but they aren’t very good at telling us who can buy tickets, what direction we should lay the track, or whether we should be taking the train at all.” The thing is, technology has gotten pretty good at telling us those types of things and is getting better all the time. Of course, that’s just more reason to pay heed to Beacock’s clarion call. As tech’s influence casts a larger shadow, the light we shine on it should be brighter still.

An excerpt:

There’s no shortage of writing about Silicon Valley, no lack of commentary about how smartphones and algorithms are remaking our lives. The splashiest salvos have come from distinguished humanists. In The New York Times Book Review, Leon Wieseltier acidly indicted the culture of technology for flattening the capacious human subject into a few lines of computer code. Rebecca Solnit, in the London Review of Books, rejects the digital life as one of distraction, while angrily documenting the destruction of bohemian San Francisco at the hands of hoodied young software engineers who ride to work aboard luxury buses like “alien overlords.” Certainly there’s reason to be outraged: much good is being lost in our rush to optimisation. Yet it’s hard not to think that we’ve been so distracted by such totems as the Google Bus that we’re failing to ask the most interesting, constructive, radical questions about our digital times. Technology isn’t going anywhere. The real issue is what to do with it.

Scientific principles and the tools they generate aren’t necessarily liberating. They’re not inherently destructive, either. What matters is how they’re put to use, for which values and in whose interest they’re pressed into service. Silicon Valley’s most successful companies often present their services as value-free: Google just wants to make the world’s information transparent and accessible; Facebook humbly offers us greater connectivity with the people we care about; Lyft and Airbnb extol the virtues of sharing among friends, new and old. If there are values here, they seem to be fairly innocuous ones. How could you possibly oppose making new friends or learning new things?

Yet each of these high-tech services is motivated by a vision of the world as it ought to be, an influential set of assumptions about how we should live together, what we owe one another as neighbours and citizens, the relationship between community and individual, the boundary between public good and private interest. Technology comes, in other words, with political baggage. We need critics who can pull back the curtain, who can scrutinise digital technology without either antipathy or boosterism, who can imagine how it might be used differently. We need critics who can ask questions of value.•


I love Ray Kurzweil, but unfortunately, he’s not going to become immortal as he expects he will, and it’s unlikely he’ll be right in his prediction that nanobots introduced into our brains will be doing the thinking for us by the 2030s. Most of what Kurzweil says is theoretically possible, especially if we’re talking about human life surviving for a significant span, but his timeframe for execution of radical advances seems increasingly frantic to me. From Andrew Griffin at the Independent:

In the near future, humans’ brains will be helped out by nanobot implants that will make us into “hybrids,” one of the world’s leading thinkers has claimed.

Ray Kurzweil, an inventor and director of engineering at Google, said that in the 2030s the implants will help us connect to the cloud, allowing us to pull information from the internet. Information will also be able to be sent up over those networks, letting us back up our own brains.

“We’re going to gradually merge and enhance ourselves,” he said, reported CNN. “In my view, that’s the nature of being human — we transcend our limitations.”

As the cloud that our brains access improves, our thinking would get better and better, Kurzweil said. So while initially we would be a “hybrid of biological and non-biological thinking”, as we move into the 2040s, most of our thinking will be non-biological.•


Futurist Stowe Boyd outdid himself when asked to imagine the nature of corporations in 2050, as you can see in his excellent Medium essay. He believes our response to the challenges of wealth inequality, climate change and AI will make things break in one of three ways, resulting in scenarios he labels Humania (great), Neo-feudalistan (not-so-great) and Collapseland (yikes!). An excerpt about the most hopeful outcome:

After mounting concern about inequality, the climate, and the inroads that AI and robots were having on society, in the 2020s Western nations — and later other developing countries — were hit by a ‘Human Spring.’ New populist movements rose up and rejected the status quo, and demanded fundamental change. At first the demands were uneven — some groups emphasized climate, or inequality, or the right to work.

But by the mid 2030s, all three forces were more-or-less equal planks in the Humania platform. This led to mandated barriers to inequality — such as limits on the multiple of the salaries of highest to lowest paid workers, and progressive taxation so that the well-off paid much higher taxes by percentage. Additionally, there were worldwide actions to limit oil and coal use, and a dramatic shift to solar in the early 2020s. Concerned that people would be pushed inexorably out of the job market, governments built limits on AI use into international trade agreements, based on a notion of the human right to work.

In the year 2050, businesses in Humania are egalitarian, fast-and-loose, and porous. Egalitarian in the sense that Humania workers have great autonomy: They can choose who they want to work with and for, as well as which initiatives or projects they’d like to work on.

They’re fast-and-loose in that they are organized to be agile and lean, and in order to do so, the social ties in businesses are much looser than in the 2010s. It was those rigid relationships — for example, the one between a manager and her direct reports — that, when repeated across layers of a hierarchical organization, led to a slow-and-tight company.

Instead of a pyramid, Humania’s companies are heterarchies: They are more like a brain than an army. In the brain — and in fast-and-loose companies — different sorts of connections and groupings of connected elements can form. There is no single way to organize. People can choose the sort of relationships that most make sense.

People’s careers involve many different jobs and roles, and considerable periods of time out of work. Basic universal income is guaranteed and generous benefits for family leave are a regular feature of work, such as paternity/maternity leave, looking after ill loved ones, and subsidized opportunities for life-long learning. This is the porous side of things; the edge of the company is permeable, and people easily leave and return.•

Tags:

DARPA wants to be able to terraform Earth and Mars and whatever other sphere it chooses, editing genes in organisms to allow for the altering of environments, the healing and fine-tuning of atmospheres. Very useful, provided nothing goes wrong. From Jason Koebler at Vice Motherboard:

The goal is to essentially pick and choose the best genes from whatever form of life we want and to edit them into other forms of life to create something entirely new. This will probably first happen in bacteria and other microorganisms, but it sounds as though the goal may be to do this with more complex, multicellular organisms in the future.

The utility of having such a capability is pretty astounding: [Deputy Director of DARPA’s Biological Technologies Office Alicia] Jackson threw out goals of eradicating vector-borne illnesses, which obviously sounds lovely and utopian. But perhaps more interesting is DARPA’s plan to use specifically engineered organisms to help repair environmental damage. Jackson said that after a natural or man-made disaster, it’d be possible to engineer new types of extremophile organisms capable of surviving in a scarred wasteland. As those organisms photosynthesized and thrived, it would naturally bring that environment back to health, she said.

And that’s where terraforming Mars comes in.•

Tags: ,

When I first realized driverless cars were being road-tested and that vehicle-to-vehicle communication would be part of that new order, one of my first thoughts was that a thousand cars in one area could be hacked to suddenly turn left. I’m far from the only one to imagine this scenario, and, of course, the trick is prevention, something pressing since cars are already essentially rolling computers. From the Economist:

ONE ingenious conceit employed to great effect by science-fiction writers is the sentient machine bent on pursuing an inner mission of its own, from HAL in 2001: A Space Odyssey to V.I.K.I. in the film version of I Robot. Usually, humanity thwarts the rogue machine in question, but not always. In Gridiron, released in 1995, a computer system called Ismael—which controls the heating, lighting, lifts and everything else in a skyscraper in Los Angeles—runs amok and wreaks havoc on its occupants. The story’s cataclysmic conclusion involves Ismael instructing the skyscraper’s computer-controlled hydraulic shock-absorbers (installed to damp the swaying caused by earthquakes) to shake the building, literally, to pieces. As it does so, Ismael’s cyber-spirit flees the crumbling tower by e-mailing a copy of its malevolent code to a diaspora of like-minded computers elsewhere in the world.

While vengeful cyber-spirits may not lurk inside today’s buildings or machines, malevolent humans frequently do. Taking control remotely of modern cars, for instance, has become distressingly easy for hackers, given the proliferation of wireless-connected processors now used to run everything from keyless entry and engine ignition to brakes, steering, tyre pressure, throttle setting, transmission and anti-collision systems. Today’s vehicles have anything from 20 to 100 electronic control units (ECUs) managing their various electro-mechanical systems. Without adequate protection, the “connected car” can be every bit as vulnerable to attack and subversion as any computer network.

Were that not worrisome enough, motorists can expect further cyber-mischief once vehicle-to-vehicle (V2V) communication becomes prevalent, and cars are endowed with their own IP addresses and internet connections.•

Most scenarios of AI dominance end, for humans, with extinction, but Steve Wozniak no longer feels that way, believing we can lose the war but be happy captives–pets, even. His scenario seems unlikely. From Samuel Gibbs at the Guardian:

Apple’s early-adopting, outspoken co-founder Steve Wozniak thinks humans will be fine if robots take over the world because we’ll just become their pets.

After previously stating that a robotic future powered by artificial intelligence (AI) would be “scary and very bad for people” and that robots would “get rid of the slow humans,” Wozniak has staged a U-turn and says he now thinks robots taking over would be good for the human race.

“They’re going to be smarter than us and if they’re smarter than us then they’ll realise they need us,” Wozniak said at the Freescale technology forum in Austin. “We want to be the family pet and be taken care of all the time.” …

For Wozniak, it will be “hundreds of years” before AI is capable of taking over, but by the time it does it will no longer be a threat to our existence: “They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally.”•

Tags: ,

In a recent episode of EconTalk, host Russ Roberts invited journalist Adam Davidson of the New York Times to discuss, among other things, his recent article, “What Hollywood Can Teach Us About the Future of Work.” In this “On Money” column, Davidson argues that short-term Hollywood projects–a freelance, piecemeal model–may be a wave of the future. The writer contends that this is better for highly talented workers and worrisome for the great middle. I’ll agree with the latter, though I don’t think the former is as uniformly true as Davidson believes. In life, stuff happens that talent cannot save you from, that the market will not provide for.

What really perplexed me about the program was the exchange at the end, when the pair acknowledges being baffled by Uber’s many critics. I sort of get it with Roberts. He’s a Libertarian who loves the unbridled nature of the so-called Peer Economy, luxuriating in a free-market fantasy that most won’t be able to enjoy. I’m more surprised by Davidson calling Uber a “solution” to the crisis of modern work, in which contingent positions have replaced full-time posts in the aftermath of the 2008 financial collapse. You mean it’s a solution to a problem it’s contributed to? It seems a strange assertion given that Davidson has clearly demonstrated his concern about the free fall of the middle class in a world in which rising profits have been uncoupled from hiring.

The reason why Uber is considered an enemy of Labor is because Uber is an enemy of Labor. Not only are medallion owners and licensed taxi drivers (whose rate is guaranteed) hurt by ridesharing, but Uber’s union-less drivers are prone to pay decreases at the whim of the company (which may be why about half the drivers became “inactive”–quit–within a year). And the workers couldn’t be heartened by CEO Travis Kalanick giddily expressing his desire to be rid of all of them before criticism intruded on his obliviousness, and he began to pretend to be their champion for PR purposes.

The Sharing Economy (another poor name for it) is probably inevitable and Uber and driverless cars are good in many ways, but they’re not good for Labor. If Roberts wants to tell small-sample-size stories about drivers he’s met who work for Uber just until their start-ups receive seed money and pretend that they’re the average, so be it. The rest of us need to be honest about what’s happening so we can reach some solutions to what might become a widespread problem. If America’s middle class is to be Uberized, to become just a bunch of rabbits to be tasked, no one should be satisfied with the new normal.

From EconTalk:

Russ Roberts:

A lot of people are critical of the rise of companies like Uber, where their workforce is essentially piece workers. Workers who don’t earn an annual salary. They’re paid a commission if they can get a passenger, if they can take someone somewhere, and they don’t have long-term promises about, necessarily, benefits. They have to pay for their own car, provide their own insurance, and a lot of people are critical of that, and my answer is, Why do people do it if it’s so awful? That’s really important. But I want to say something slightly more optimistic about it which is a lot of people like Uber, working for Uber or working for a Hollywood project for six months, because when it’s over they can take a month off or a week off. A lot of the people I talk to who drive for Uber are entrepreneurs, they’re waiting for their funding to come through, they’re waiting for something to happen, and they might work 80 hours a week while they’re waiting and when the money comes through or when their idea starts to click, they’re gonna work five hours a week, and then they’ll stop, and they don’t owe any loyalty to anyone, they can move in and out of work as they choose. I think there’s a large group of people who really love that. And that’s a feature for many people, not a bug. What matters is–beside your satisfaction and how rewarding your life is emotionally in that world–your financial part of it depends on what you make while you’re working. It’s true it’s only sort of part-time, but if you make enough, and evidently many Uber drivers are former taxi drivers who make more money with Uber for example, if you make enough, it’s great, so it seems to me that if we move to a world where people are essentially their own company, their own brand, the captain of their own ship rather than an employee, there are many good things about that as long as they have the skills that are in demand that people are willing to pay for. Many people, unfortunately, will not have those skills. It’s a serious issue, but for many people those are enormous pluses, not minuses.

Adam Davidson:

Yes, I agree with you. Thinking of life as an Uber driver with that as your only possible source of income, I would guess that might be tough. Price competition is not gonna be your friend. Thinking about a world where you have a whole bunch of options, including Task Rabbit, and who knows what else, Airbnb, to earn money in a variety of ways, that’s at various times and at various levels of intensity, that strikes me as only good. If we could shove that into the 1950s, I think you would have seen a lot more people leaving that corporate model and starting their own businesses or spending more time doing more creative endeavors. That all strikes me as a helpful tool. It does sound like some of the people who work at Uber have kind of been jerks, but it does seem strange to me that some people are mad at the company that’s providing this opportunity. It is tough that lots of Americans are underemployed and aren’t earning enough. That’s a bad situation, but it is confusing to me that we get mad at companies that are providing a solution.•

Tags: , ,
