Urban Studies


It must have been grand in 1963, what with families living in glass or rubber houses, driving cars at 140 miles per hour, owning their own airplanes and feasting on “pill dinners.”

None of that actually happened, of course, but those were the futuristic predictions in Part One of a two-part article by Alexander R. George in the March 22, 1938 Brooklyn Daily Eagle about what was to come in just 25 years. The idea of newspapers being delivered directly into the home by some sort of wire facsimile is impressive, however, even if its timeline is a little too bold.


Unlike the newspaper and music industries, which were upended by the Internet, the traditional TV model is doing just fine–or really, really not. 

Michael Wolff, the least beloved of all the Muppets, has written a book about the triumph of this hoary medium in the Computer Age, one of two new titles on which Jacob Weisberg bases his wonderfully written NYRB piece “TV vs. the Internet: Who Will Win?” Weisberg notes: “Most commercials are directed at young people, based on the advertising industry’s belief in establishing brand loyalty early. That’s why so much ad-supported programming caters to the tastes of teenagers.”

That’s an interesting companion for this snippet from “Where Did Everybody Go?” an Advertising Age article published today about the paucity of viewers greeting the new season, those remaining on the couch now grayer than Japan: “The most disconcerting PUT (people using television) data concerns younger viewers, who are ditching traditional TV faster than anyone could have anticipated.”

Weisberg is admiring of aspects of Wolff’s book but ultimately thinks “his analysis is too categorical and in places simply wrong.” An excerpt:

Wolff contends that television learned a useful lesson from the gutting of the music industry. The record companies were at first lackadaisical in protecting their intellectual property, then went after their own customers, filing lawsuits against dorm-room downloaders. Under the Digital Millennium Copyright Act, passed in 1998, sites hosting videos such as YouTube appeared to be within their rights to wait for takedown notices before removing pirated material. But Viacom, led by the octogenarian Sumner Redstone, sued YouTube anyway. Its 2007 lawsuit forced Google, which had bought YouTube the previous year, to abandon copyright infringement as a business model. Thanks to the challenge from Viacom, YouTube became a venue for low-value content generated by users (“Charlie Bit My Finger”) and acceded to paying media owners, such as Comedy Central, a share of its advertising revenue in exchange for its use of material. “Instead of a common carrier they had become, in a major transformation, licensors,” Wolff writes. Where it might have been subsumed by a new distribution model, the television business instead subsumed its disruptor.

Wolff is dismissive of newer threats to the business. He regards cord cutting—customers dropping premium cable bundles in favor of Internet services such as Netflix—as an insignificant phenomenon. But even if it gathers steam, as recent evidence suggests may be happening, cord cutting leaves Comcast and Time Warner Cable, the largest cable companies, in a win-win position, since they provide the fiber optic cables that deliver broadband Internet to the home as well as those that bring TV. Even if you decide not to pay for hundreds of channels you don’t watch, you’ll pay the same monopoly to stream House of Cards. (This won’t provide much comfort, however, to companies that own the shows, which stand to lose revenue from both cable subscribers and commercials priced according to ratings.)•



I admire the London Review of Books, but I was a little surprised when its longtime editor Mary-Kay Wilmers recently told the Financial Times that the periodical has had to lean more heavily on political content because there aren’t enough worthy books to fill its pages with critiques.

That isn’t true, I don’t think, even if it’s a frequent refrain: Our digital culture has developed in such a way as to diminish literature, people now won’t read more than 140 characters, the quality of the written word is in steep decline.

Except I truly believe we’re living in a golden age for books, with so many great titles that it’s impossible to keep up with them. Certain corners of the publishing world have been destabilized, particularly by Amazon’s business practices, but in the big picture it seems we have rich and varied contributions from a much wider array of writers. 

Maybe future generations raised on smartphones won’t be as accepting of literature (though I don’t think so), and perhaps books will become so cheap that writing won’t attract great talent (not likely), but for now at least, it’s a wonderful time to be a reader.

Another thing that’s often said is that in the near future all books will be read on screens and none at all on dead trees. This transition wouldn’t mean the death of books, of course; only the medium would change, and whatever influence the new instrument had on books in the long run would be far from lethal, perhaps even beneficial. The changeover certainly would, however, sink bookstores. That shift may still materialize, but for now, the tide has receded.

From Alexandra Alter of the New York Times:

“E-books were this rocket ship going straight up,” said Len Vlahos, a former executive director of the Book Industry Study Group, a nonprofit research group that tracks the publishing industry. “Just about everybody you talked to thought we were going the way of digital music.”

But the digital apocalypse never arrived, or at least not on schedule. While analysts once predicted that e-books would overtake print by 2015, digital sales have instead slowed sharply.

Now, there are signs that some e-book adopters are returning to print, or becoming hybrid readers, who juggle devices and paper. E-book sales fell by 10 percent in the first five months of this year, according to the Association of American Publishers, which collects data from nearly 1,200 publishers. Digital books accounted last year for around 20 percent of the market, roughly the same as they did a few years ago.

E-books’ declining popularity may signal that publishing, while not immune to technological upheaval, will weather the tidal wave of digital technology better than other forms of media, like music and television.•



From the April 8, 1928 Brooklyn Daily Eagle:


I was reading a 1908 Brooklyn Daily Eagle article about Red Cloud, and it reminded me of a passage from the opening chapter of Ian Frazier’s excellent 2000 book, On the Rez. In telling about the Oglala Lakota chief’s visit to the White House in 1870, Frazier examined our age and came to some troubling conclusions, all of which seem even truer 15 years on. Real freedom in our corporatocracy is more expensive than ever, but it’s cheap and easy to be discarded. The excerpt:

    In 1608, the newly arrived Englishmen at Jamestown colony in Virginia proposed to give the most powerful Indian in the vicinity, Chief Powhatan, a crown. Their idea was to coronate him a sub-emperor of Indians, and vassal to the English King. Powhatan found the offer insulting. “I also am a King,” he said, “and this is my land.” Joseph Brant, a Mohawk of the Iroquois Confederacy between eastern New York and the Great Lakes, was received as a celebrity when he went to England with a delegation from his tribe in 1785. Taken to St. James’s Palace for a royal audience, he refused to kneel and kiss the hand of George III; he told the King that he would, however, gladly kiss the hand of the Queen. Almost a century later, the U.S. government gave Red Cloud, victorious war leader of the Oglala, the fanciest reception it knew how, with a dinner party at the White House featuring lighted chandeliers and wine and a dessert of strawberries and ice cream. The next day Red Cloud parleyed with the government officials just as he was accustomed to on the prairie—sitting on the floor. To a member of a Senate select committee who had delivered a tirade against Sitting Bull, the Hunkpapa Sioux leader carelessly replied, “I have grown to be a very independent man, and consider myself a very great man.”

     That self-possessed sense of freedom is closer to what I want; I want to be an uncaught Indian like them.

    Another remark which non-Indians often make on the subject of Indians is “Why can’t they get with the program?” Anyone who talks about Indians in public will be asked that question, or variations on it, over and over: Why don’t Indians forget all this tribal nonsense and become ordinary Americans like the rest of us? Why do they insist on living in the past? Why don’t they accept the fact that we won and they lost? Why won’t they stop, finally, being Indians and join the modern world? I have a variety of answers handy. Sometimes I say that in former days “the program” called for the eradication of Indian languages, and children in Indian boarding schools were beaten for speaking them and forced to speak English, so they would fit in; time passed, cultural fashions changed, and Hollywood made a feature film about Indians in which for the sake of authenticity the Sioux characters spoke Sioux (with English subtitles), and the movie became a hit, and lots of people decided they wanted to learn Sioux, and those who still knew the language, those who had somehow managed to avoid “the program” in the first place, were suddenly the ones in demand. Now, I think it’s better not to answer the question but to ask a question in return: What program, exactly, do you have in mind?

    We live in a craven time. I am not the first to point out that capitalism, having defeated Communism, now seems to be about to do the same to democracy. The market is doing splendidly, yet we are not, somehow. Americans today no longer work mostly in manufacturing or agriculture but in the newly risen service economy. That means that most of us make our living by being nice. And if we can’t be nice, we’d better at least be neutral. In the service economy, anyone who sat where he pleased in the presence of power or who expatiated on his own greatness would soon be out the door. “Who does he think he is?” is how the dismissal is usually framed. The dream of many of us is that someday we might miraculously have enough money that we could quit being nice, and everybody would then have to be nice to us, and niceness would surround us like a warm dome. Certain speeches we would love to make accompany this dream, glorious, blistering tellings-off of those to whom we usually hold our tongue. The eleven people who actually have enough money to do that are icons to us. What we read in newsprint and see on television always reminds us how great they are, and we can’t disagree. Unlike the rest of us, they can deliver those speeches with no fear. The freedom that inhered in Powhatan, that Red Cloud carried with him from the plains to Washington as easily as air—freedom to be and to say, whenever, regardless of disapproval—has become a luxury most of us can’t afford.•



Peter Diamandis is privy to much more cutting-edge technological information than I am, but he’s also more prone to irrational exuberance. I have little doubt driverless cars will be perfected for all climates and conditions at some point in the future, but will there really be more than 50 million autonomous cars on the road by 2035? Well, it is the kind of technology likely to spread rapidly when completed. From a Diamandis Singularity Hub post about the future of transportation, agriculture, and healthcare/elder care:

By 2035 there will be more than 54 million autonomous cars on the road, and this will change everything:

  • Saved Lives: Autonomous cars don’t drive drunk, don’t text and don’t fall asleep at the wheel.
  • Reclaiming Land: You can fit eight times more autonomous cars on our roads, plus you no longer need parking spaces. Today, in the U.S. we devote 10% of the urban land to ~600 million parking spaces, and countless more to our paved highways and roads. In Los Angeles, it’s estimated that more than half of the land in the city belongs to cars in the form of garages, driveways, roads, and parking lots.
  • Saved Energy: Today we give close to 25 percent of all of our energy to personal transportation, and 25 percent of our greenhouse gases are going to the car. If cars don’t crash, you don’t need a 5,000-lb SUV driving around a 100-lb passenger (where 2% of the energy is moving the person, and 98% is to move the metal womb wrapped around them).
  • Saved Money/Higher Productivity: Get rid of needing to own a car, paying for insurance and parking, trade out 4,000-lb. cars for lighter electric cars that don’t crash, and you can expect to save 90% on your local automotive transportation bill. Plus regain 1 to 2 hours of productivity in your life (work as you are driven around), reclaiming hundreds of billions of dollars in the US economy.

Best of all, you can call any kind of car you need. Need a nap? Order a car with a bed. Want to party? Order one with a fully-stocked bar. Need a business meeting? Up drives a conference room on wheels.•


“Feng shui for geeks, a way of engineering that turns a house into a fine-tuned machine” is how Sara Solovitch, in her Politico Magazine feature, describes “Passive Housing,” a building method developed in Germany that uses very thick insulation and high-quality windows and doors to exploit solar gain, creating amazingly green, inexpensive and comfortable spaces. Disconcertingly if unsurprisingly, America lags behind in such environment-friendly structures, though Portland (also unsurprisingly) has become Ground Zero in the U.S. for what would be a very welcome architectural revolution.

An excerpt:

It’s what you don’t see that makes it so unique. The Orchards is a “Passive House,” currently the largest one in North America. It’s a high performing energy-efficient complex whose 57 apartments stay cool on the hottest days and can be comfortably heated with a hand-held hair dryer on the coldest. Its windows are triple-paned. Its walls and floors are stuffed 11 inches deep with insulation. The ventilation system in the attic acts as the building’s lungs—continually pulling exhaust from every kitchen and bathroom, sucking stale air through a heat exchanger before carrying it to the outside and returning with fresh air.

“Every day I find a new reason to love it,” gushes Georgye Hamlin, whose one-bedroom apartment is as noiseless as a recording studio. “It’s cool, it’s quiet, and I don’t even hear the train. During the heat wave, my girlfriend came over to sleep because it was so cool. Yay for German engineering!”

Passivhaus, a building method developed in Germany in the early 1990s, relies on an airtight envelope—the roof, exterior walls and floors, literally, the physical barrier that separates in from out—to create a building that consumes 80 percent less energy than a standard house.

As translated into English, the term is almost a misnomer. It implies single-family housing, when in fact the approach can be applied to any size building. In Europe, supermarkets, schools, churches, factories and hospitals have been built to passive house standards. The number of certified buildings there exceeds 25,000.•


In Japan, Pepper is a dear, adorable thing, but the robot is being reprogrammed to be sort of a jerk for use in America. While that cultural reimagining is telling in a small way, the greater takeaway from Will Knight’s smart Technology Review piece about this particular machine is that truly flexible and multifaceted robot assistants still need a lot of work. Of course, Weak AI can do a lot of good (and wreak a lot of havoc on the economy) all by itself. An excerpt:

Brian Scassellati, a professor at Yale University who studies how people and robots can interact, says significant progress has been made in the area in the last 10 years. “Human-robot interaction has really started to home in on the kinds of behaviors that give you that feeling of presence,” he says. “A lot of these are small, subtle things.” For example, Pepper can crudely read your emotions by using software that analyzes facial expressions. I found the robot to be pretty good at telling whether I was smiling or frowning.

However, Scassellati does not believe robots are ready to become constant companions or even effective salespeople. The robots that succeed “are going to be for very limited use,” he suggests. “They’re going to be for targeted use, and probably not with the general population.”

My short time with Pepper makes me think that targeting limited applications is a sensible move.•



Google is a great company, but that doesn’t mean it’s a good one.

When CEO Larry Page urges us to trust the “good corporations” like his, no one should comply, for two reasons: 1) If the search giant is going to remain a powerhouse, it will need to ride information-rich moonshots into all areas of the world, turning every last object and body into a data-producing system. That will be a ferocious war among Google and all its competitors, and ethics may become collateral damage. 2) Even if Page & co. were spotlessly noble, they won’t be here forever (not unless Calico is really successful), and those replacing them and inheriting our information may not be so benign.

In a Scientific American podcast hosted by Seth Fletcher about privacy in the Digital Age, Jaron Lanier speaks to the corporate-succession issue and many others, including users being paid for their info. Listen here.


From the November 18, 1898 Brooklyn Daily Eagle:


I’m just old enough to have been dragged as a small child to the final Automat in Manhattan during its last, sad days, before it was sacrificed at the feet of Ray Kroc, as if it were just one more cow.

In its heyday, despite the lack of servers and cashiers, the Automat was an amazingly social experience and an especially democratic one, with people of all classes and kinds rubbing elbows over cheap turkey sandwiches and cheaper coffee. You could sit there forever. Al Pacino has spoken many times of how he misses the welter of people, the strands of surprising conversations. You don’t get that at McDonald’s or Starbucks. It’s just a different vibe (and in the latter case, price point).

The Automat was the past…and, perhaps, prelude. Well, to some degree. I’ve blogged before about Eatsa (here and here), the so-called “digital Automat” which recently opened in San Francisco. It’s disappeared all workers from the front of the restaurant and probably, in time, from back-room preparation. It certainly doesn’t bode well for fast-casual employees, but what of the social dynamic? 

In his latest insightful Financial Times blog post about how life is changing in the Second Machine Age, Andrew McAfee thinks about the meaning of Eatsa in our “interesting and uncertain” era. He’s not concerned about wait staff being disappeared from his conversations but acknowledges the restaurant, whose model will likely spread, is not a good sign for Labor. An excerpt:

So is this just an updated version of the old automats, with iPads replacing coin slots, or is it something more? There are indications that Eatsa’s founders want it to be the start of something legitimately new: a close to 100 per cent automated restaurant. Food preparation there is already highly optimised and standardised, and it’s probably not a coincidence that the location’s first general manager had a background in robotics.

But the fact that the restaurant’s “front of house” (ie the dining area and customer interactions) are already virtually 100 per cent automated is more interesting to me than the question of whether the “back of house” (the kitchen) ever will be. Interesting because as front of the house automation spreads, it’s going to put to the test one of the most widely held notions about work in the second machine age: that there will always be lots of service jobs because we desire a lot of human interaction.

I agree with the second half of that statement, but I’m not so sure about the first. We are a deeply social species, and even an introvert like me enjoys spending time with friends and loved ones in the physical world. I’ve also learnt to value business lunches and dinners (even though I’d rather be off by myself reading or writing) because they’re an important part of how work advances.

But in the great majority of cases, when I’m out I don’t value the interactions with the waiting staff and other service workers. They’re not unpleasant or terribly burdensome, but they do get in the way of what I want from the restaurant experience: to eat well, and to talk to my tablemates.•



The best argument for our insane, incessant foodie culture is that it ultimately pays off in a way that greatly reduces or even eradicates hunger, through some hybrid of avant kitchen experimentation and science-lab processes. Because if we’re just stuffing the faces of people who already have more than enough to eat, how decadent is that?

One cutting-edge culinary expert who aims not just to tempt palates but to combat hunger is Hervé This, who thinks note-by-note cooking may be the answer, though such progress will probably come from an amalgamation of solutions.

In a T Magazine article by Aimee Lee Ball, the chemist acknowledges that for his plan to work at all, he must first overcome “food neophobia.” That won’t be a simple matter as Ball writes that “the results sometimes seem like parcels delivered from Mars.” An excerpt:

This’s big idea is nothing less than the eradication of world hunger, which he plans to accomplish not with any new economic overhaul, but through a culinary innovation that he calls note-by-note cooking, or NbN. Molecular cuisine — the deconstruction of food into a series of highly alchemized individual textures, flavors and compounds, often in the form of foams, gels and other matter not immediately recognizable as food — is associated with intellectual-culinary concept art of the sort practiced by Heston Blumenthal of the Fat Duck and René Redzepi of Noma. But This’s ambitions for his new cuisine are far from fanciful — indeed, the 60-year-old chemist, an impish and rumpled Dumbledore without the facial hair, often sounds more like a political radical than a food scientist. ‘‘I work for the public,’’ he says. ‘‘I hate rich people. NbN is a new art for chefs, and art is important. But are we going to feed humankind — or just make something for foodies?’’

ACCORDING TO THIS, one of the reasons there isn’t enough food to go around is because when we transport it, what we’re really transporting is water, which makes food spoil. A carrot is mostly water. Same for a tomato, an apple, an eggplant and many other fruits and vegetables. Unless they’re refrigerated, which is expensive and has a nasty impact on the environment, their moist nutrients provide an optimal environment for microorganisms.

This proposes that we stop shipping ‘‘wet’’ foods across countries or continents and instead break them down into their parts: separating their nutrients and flavors into a wide variety of powders and liquids that are theoretically shelf-stable in perpetuity, and can be used as ingredients. Many of the basic components of food have unwieldy names but familiar tastes or smells. Allyl isothiocyanate, a compound obtained from mustard seeds, suggests wasabi; 1-octen-3-ol evokes wild mushrooms. Depending on its concentration, benzyl mercaptan may call to mind garlic, horseradish, mint or coffee; decanal hints at something between an orange and an apricot. ‘‘Nobody knows why the same compound in different strengths may taste like curry or maple syrup,’’ This says. ‘‘The physiology of taste is an exciting field — my colleagues are discovering new things every month.’’•


It’s no small irony that Sigmund Freud died against the backdrop of one of the worst explosions of repressed rage the world has ever known. The Jewish “Father of Psychoanalysis” was hectored and hounded in his dying years by Nazis, who desperately needed the very inspection of self he encouraged. Freud ultimately fled Austria in a weakened state and died in London. Three Brooklyn Daily Eagle articles below tell part of the story.


From March 22, 1938:


From June 4, 1938:


From September 24, 1939:


Uber should, of course, not be prevented from becoming a company of driverless taxis when innovation makes that possible. But CEO Travis Kalanick’s part-time pose as a champion of Labor is an infuriatingly dishonest stance. Autonomous vehicles and Uber’s business model may both be great in many ways, but they’re not good for workers. Not in the short and medium term, at least, and likely never.

Kalanick, who recently discussed his company’s robotic tomorrow with Marc Benioff, sees the transition to AI coming in 10 or 15 years or so. In commentary on Bloomberg, Forrester analyst James McQuivey thinks the future is just around the bend and Kalanick too conservative in his estimation of the driverless ETA. He also believes Kalanick’s job itself will likely be a casualty of the autonomous revolution (and other factors).


Should millions of jobs, entire industries, be taken over by AI in the near future, without other ones emerging to replace them, political and economic systems would need to quickly adapt and adjust to manage the new reality. One way to prepare would be to experiment with universal basic income, which may or may not prove a panacea.

From Federico Pistono in New Scientist:

How would the millions of telemarketers and taxi drivers, for example – whose jobs are at high risk of being automated – survive in this new landscape? One of the most interesting proposals, and one that does not live in the fanciful world of “the market will figure it out,” is the creation of an unconditional basic income (UBI).

It’s a simple idea with far-reaching consequences. The state would give a monthly stipend to every citizen, regardless of income or employment status. This would simplify bureaucracy, get rid of outdated and inefficient means-based benefits, and provide support for people to live with dignity and find new meaning.

No incentive-killer

The biggest UBI experiments, involving a whole town in Canada and 20 villages in India, have confounded a key criticism – that it would kill the incentive to work. Not only did people not stop working, but they were more likely to start new businesses or perform socially beneficial activities compared with controls. In addition, there was an increase in general well-being, and no increase in public bads such as alcohol and drug use, and gambling.

These early results are promising but not conclusive. We don’t know what would happen in other countries, and whether the same results would apply if millions of people were involved. Forthcoming experiments may give us a clearer picture.•


The spirit of any age must be addressed, even when inconvenient. I doubt Bill Clinton entered politics to be the tough-on-crime President whose policies helped turn the nation into a penal colony within a colony, but there he was in the 1990s, not realizing that crime was about to mysteriously and precipitously decline, waving a badge and a billy club. Clinton likely never dreamed of a scenario in which he would be chastening “welfare queens,” yet there he was doing a better job of it than Ronald Reagan, who coined that odious term. It was no different than when Richard Nixon, having at long last won the White House, argued in favor of universal healthcare and a basic-guaranteed-income tax plan, something he certainly wasn’t considering before the Sixties happened.

Chief among the prevailing winds of our time is wealth inequality, the enduring gift of an Occupy movement that framed a single election and otherwise sputtered out, at least for now. So GOP candidates must, at minimum, pay lip service to the concern. Donald Trump is suddenly a reformer taking aim at hedge-fund managers. Jeb Bush has spoken about how income disparity has threatened the American Dream (without mentioning, of course, that his proposed tax cuts would only exacerbate the situation). Rick Santorum and the sweater-vest wing of the party want to raise the minimum wage. 

What’s true of politicians is also so of economists, and academics have descended on the problem, which makes this moment ideal for Joseph Stiglitz, who’s spent much of his career, from thesis forward, on the topic. In a NYRB piece, James Surowiecki analyzes the economist’s most recent slate of books, finding fault with Stiglitz’s identification of the twin devils of the contemporary financial arrangement: rent-seeking and a lack of corporate oversight. Surowiecki doesn’t believe these issues explain our 99-and-1 predicament. He doesn’t dismiss Stiglitz’s suggestions and likewise sees no reason why CEOs should be earning so much, but he believes a remedy is more complicated.

An excerpt: 

It’s possible, of course, that further reform of corporate governance (like giving shareholders the ability to cast a binding vote on CEO pay packages) will change this dynamic, but it seems unlikely. After all, companies with private owners—who have total control over how much to pay their executives—pay their CEOs absurd salaries, too. And CEOs who come into a company from outside—meaning that they have no sway at all over the board—actually get paid more than inside candidates, not less. Since 2010, shareholders have been able to show their approval or disapproval of CEO pay packages by casting nonbinding “say on pay” votes. Almost all of those packages have been approved by large margins. (This year, for instance, these packages were supported, on average, by 95 percent of the votes cast.)

Similarly, while money managers do reap the benefits of opaque and overpriced fees for their advice and management of portfolios, particularly when dealing with ordinary investors (who sometimes don’t understand what they’re paying for), it’s hard to make the case that this is why they’re so much richer than they used to be. In the first place, opaque as they are, fees are actually easier to understand than they once were, and money managers face considerably more competition than before, particularly from low-cost index funds. And when it comes to hedge fund managers, their fee structure hasn’t changed much over the years, and their clients are typically reasonably sophisticated investors. It seems improbable that hedge fund managers have somehow gotten better at fooling their clients with “uncompetitive and often undisclosed fees.”

So what’s really going on? Something much simpler: asset managers are just managing much more money than they used to, because there’s much more capital in the markets than there once was.•



Think how strange it seems now: Until recently, one or several musicians would disappear for many months into an expensive recording studio and try to conjure something that would fill our ears, and, perhaps, blow our minds. They threw away the vast majority of the results and delivered a few dozen minutes of entertainment. There was a distribution system which not only supported this process but was even marvelously lucrative.

It was all a dream. The decentralization of the media not only usurped the record store but also the records, squeezing the financial value from these commodities, rendering them a mere promotional tool for the few touring acts that can fill arenas. Good luck with that.

During the halcyon music-business decade of the 1970s, one economist and theorist knew the precariousness of the arrangement, how this fraught ecosystem, still spectacularly profitable, was actually endangered. He was Jacques Attali, a Mitterrand adviser who 39 years ago coined the term “crisis of proliferation” in his book Noise: The Political Economy of Music, which foretold the coming perfect storm that would soak the industry.

In “The Pop Star and the Prophet,” a new BBC Magazine article, singer-songwriter Sam York sought out Attali, wanting to know where the philosopher thought the future was heading. The quick answer is that while he maintains some hope for musicians, Attali thinks what happened to the recording biz is merely prelude. An excerpt:

Attali also had another big idea. He said that music – and the music industry – forged a path which the rest of the economy would follow. What’s happening in music can actually predict the future.

When musicians in the 18th Century – like the composer Handel – started selling tickets for concerts, rather than seeking royal patronage, they were breaking new economic ground, Attali wrote. They were signalling the end of feudalism and the beginning of a new order of capitalism.

In every period of history, Attali said, musicians have been at the cutting edge of economic developments. Because music is very important to us but also highly adaptable it’s one of the first places we can see new trends appearing.

He was right about the “crisis of proliferation”… but if music really does predict the future for the rest of the economy, what does he think it is telling us will happen next?

Attali says manufacturing will be hit by an identical crisis to the music industry, and this time it will be caused by 3D printing.

“With 3D printing, people will print their own cups, furniture,” he says. “Everyone will make their own objects, in the same way they are making their own music.”•


Tags: ,

From the February 4, 1947 Brooklyn Daily Eagle



Tags: ,

Because ultimately there’s nothing, and everything we and our loved ones once were cruelly goes away, many of us spend time desperately trying to prove that there’s something beyond. This can’t be it. While it’s not a self-fulfilling prophecy, it sure does pass the time.

At Vice’s “Broadly” section, Zing Tsjeng interviews British paranormal investigator Jayne Harris, who specializes in haunted dolls, which look almost like people, almost alive. The opening:


Can you tell me a little about what brought you to this industry?

Jayne Harris:

My parents were very interested in the paranormal so I heard lots of stories from them. When I was about 16, I started going to various spiritualist churches and seeing psychic mediums. Then my cousin died in a car accident when I was 17. That was the catalyst for me. It sparked more than a curiosity— I really wanted to reach out and maybe find some evidence that there was something more.


How often do you genuinely find a haunted object or doll in someone’s house?

Jayne Harris:

In our experience, when we get called to people’s homes, we can explain the activity through normal explanations in about seven out of 10 cases. If someone feels that they’re hearing knocking noises, we’ll examine everything from pipework to spaces in the attic where you might get mice. If the lights are flickering, it often needs rewiring. You’re trying to rule out all of the possibilities so that what you’re left with is the true case of what is going on.•


Tags: ,

Dr. Anders Sandberg of the Future of Humanity Institute at Oxford just did one of the best Reddit AMAs I’ve ever read, a brilliant back-and-forth with readers on existential risks, Transhumanism, economics, space travel, future technologies, etc. He speaks wisely of trying to predict the next global crisis: “It will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.”

The whole piece is recommended, and some exchanges are embedded below.



Will we start creating new species of animals (and plants, fungi, and microbes) any time soon?

What about fertilizing the oceans? Will we turn vast areas of ocean into monoculture like a corn field or a wood-pulp plantation?

When will substantial numbers of people live anywhere other than Earth? Where will it be?

What will we do about climate change?

Dr. Anders Sandberg:

I think we are already making new species, although releasing them into nature is frowned upon.

Ocean fertilization might be a way of binding carbon and getting good “ocean agriculture”, but the ecological price might be pretty big. Just consider how land monocultures squeeze out biodiversity. But if we needed to (say to feed a trillion population), we could.

I think we need to really lower the cost to orbit (beanstalks, anyone?) for mass emigration. Otherwise I expect the first real space colonists to be more uploads and robots than biological humans.

I think we will muddle through climate: technological innovations make us more green, but not before a lot of change will happen – which people will also get used to.



What augmentations, if any, do you plan on getting?

Dr. Anders Sandberg:

I have long wanted to get a magnetic implant to sense magnetic fields, but since I want to be able to get close to MRI machines I have held off.

I think the first augmentations will be health related or sensory enhancement gene therapy – I would love to see ultraviolet and infrared. But life extension is likely the key area, which might involve gene therapy and implanting modified stem cells.

Further down the line I want to have implants in my hypothalamus so I can access my body’s “preferences menu” and change things like weight setpoint or manage pain. I am a bit scared of implants in the motivation system to help me manage my behavior, but it might be useful. And of course, a good neural link to my exoself of computers and gadgets would be useful – especially if it could allow me to run software supported simulations in my mental workspace.

In the long run I hope to just make my body as flexible and modifiable as possible, although no doubt it would tend to normally be set to something like “idealized standard self”.

It is hard to tell which augmentations will arrive when. But I think going for general purpose goods – health, intelligence, the ability to control oneself – is a good heuristic for what to aim for.



What major crisis can we expect in the next few years? What will the world be like by 2025?

Dr. Anders Sandberg:

I am more of a long term guy, so it might be better to ask the people at the World Economic Forum risk report (where I am on the advisory board): http://www.weforum.org/reports/global-risks-report-2015

One group of things are economic troubles – they are safe bets before 2025 since they happen every few years, but most are not major crises. Expect some asset bubbles or deflation in a major economy, energy price shocks, failure of a major financial mechanism or institution, fiscal crises, and/or some critical infrastructure failures.

Similarly there will be at least some extreme weather or natural disaster events that cause a nasty surprise (think Katrina or the Tohoku earthquake) – such things happen all the time, but the amount of valuable or critical stuff in the world is going up, and we are affected more and more systemically (think hard drive prices after the Thai floods – all the companies were located on the same flood plain). I would be more surprised by any major biodiversity loss or ecosystem collapse, but the oceans are certainly not looking good. Even with the scariest climate scenarios things in 2025 are not that different from now.

What to look out for is interstate conflicts that get global consequences. We have never seen a “real” cyber war: maybe it is overhyped, maybe we underestimate the consequences (think something like the DARPA cyber challenge as persistent, adapting malware everywhere). Big conflicts are unfortunately not impossible, and we still have lots of nukes in the world. WMD proliferation looks worryingly doable.

If I were to make a scenario for a major crisis it would be something like a systemic global issue like the oil price causing widespread trouble in some unstable regions (think of past oil-food interactions triggering unrest leading to the Arab Spring, or Russia being under pressure now due to cheap oil), which spills over into some actual conflict that has long-range effects getting out of hand (say the release of nasty bio- or cyberweapons). But it will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.

And then we will dust ourselves off, swear to never let that happen again, and half forget it.



As I understand it, regarding existential risk and our survival as a species, most if not all discussion has to happen under the umbrella of ‘if we don’t kill ourselves off first.’ Surely, as a man who thinks so far ahead, you must have some hope that catastrophic self-inflicted harm won’t spell the end of our race, or at least that it won’t put us back irrevocably far technologically. In your estimation, what are the immediate self-inflicted harms we face, and will we have the capacity to face them when their destructive effects manifest? Will the climate change to the point of poisoning our planet, will uncontrolled pollution destroy our global ecology in some other way, will nuclear blasts destroy all but the cockroaches and bacteria on the planet? It seems to me that we needn’t think too far to see one of these scenarios come to pass if we don’t present a globally concerted effort to intervene.

Dr. Anders Sandberg:

I think climate change, like ecological depletion or poisons, is unlikely to spell radical disaster (still, there is enough of a tail to the climate change distribution to care about the extreme cases). But they can make the world much worse to live in, and cause strains in the global social fabric that make other risks more likely.

Nuclear war is still a risk with us. And nuclear winters are potential giga-killers; we just don’t know whether they are very likely or not, because of model uncertainty. I think the probability is way higher than most people think (because of both Bayesian estimation and observer selection effects).

I think bioengineered pandemics are also a potential stumbling block. There may not be many omnicidal maniacs, but the gain-of-function experiments show that well-meaning researchers can make potentially lethal pathogens, and the recent distribution of anthrax by the US military shows that amazingly stupid mistakes do happen with alarming regularity.

See also: https://theconversation.com/the-five-biggest-threats-to-human-existence-27053



I have trouble imagining how our current economic structure could cope with all the tens of millions of driver/taxi/delivery jobs going.

The economic domino effect of inability to pay debts/mortgages, loss of secondary jobs they were supporting, fall in demand for goods, etc, etc

It seems like the world has never really got back to “normal” (whatever that is anymore in the 21st century) after the 2008 financial crisis & never will.

I’m an optimist by nature, I’m sure we will segue & transition into something we probably haven’t even imagined yet.

But it’s very hard to imagine our current hands-off laissez-faire style of economy functioning in the 2020’s in the face of so much unemployment.

Dr. Anders Sandberg:

Back in the 19th century it would have seemed absurd that the economy could absorb all those farmers. But historical examples may be misleading: the structure of the economy changes.

In many ways laissez faire economics work perfectly fine in the super-unemployed scenario: we just form an internal economy, less effective than the official one sailing off into the stratosphere, and repeat the process (the problem might be if property rights make it impossible to freely set up a side economy). But clearly there is a lot of human capital wasted in this scenario.

Some people almost reflexively suggest a basic income guarantee as the remedy to an increasingly automated economy. I think we need to think much more creatively about other solutions, the BIG is just one possibility (and might not even be feasible in many nations).



What is the most defining characteristic of transhumanism as an idea in the 10s compared with the 00s?

Dr. Anders Sandberg:

Back when I started in the 90s we were all early-Wired style tech enthusiasts. The future was coming, and it was all full of cyber! Very optimistic, very much based on the idea that if we could just organise better and convince society that transhumanism was a good idea, then we would win.

By the 00s we had learned that just having organisations does not mean your ideas get taken seriously. Although they were actually taken seriously to a far greater extent: the criticism from Fukuyama and others actually forced a very healthy debate about the ethics and feasibility of transhumanism. Also, the optimism had become tempered post-dotcom, post-9/11: progress is happening, but much more uneven and slow than we may have hoped for. It was at this point that the existential risk and AI safety strands came into their own.

Transhumanism in the 10s? Right now I think the cool thing is the posttranshumanist movements like the rationalists and the effective altruists: in many ways full of transhumanist ideas, yet not beholden to always proclaiming their transhumanism. We have also become part of institutions, and there are people that grew up with transhumanism who are now senior enough to fund things, make startups or become philanthropists.



Which do you think is more important for the future of humanity: the exploration of outer space (planets, stars, galaxies, etc.) or the exploration of inner space (consciousness, intelligence, self, etc.)?

Dr. Anders Sandberg:

Both, but in different ways. Exploration of outer space is necessary for long term survival. Exploration of inner space is what may improve us.


What step would you take first? Would you first discover “everything” or as much as possible about inner space, or outer space?

Dr. Anders Sandberg:

I suspect safety first: getting off-planet is a good start. But one approach does not preclude working on the other at the same time.•


The MIT economist David Autor doesn’t believe it’s different this time; he doesn’t think automation will lead to widespread technological unemployment any more than it did during the Industrial Revolution or the AI scares of the 1960s and 1970s. Autor feels that robots may come for some of our jobs, but there will still be enough old and new ones to busy human hands, because our machine brethren will probably never be our equal in common sense, adaptability and creativity. Technology’s new tools may be fueling wealth inequality, he acknowledges, but the fear of AI soon eliminating Labor is unfounded. 

Well, perhaps. But if you’re a truck or bus or taxi or limo or delivery driver, a hotel clerk or bellhop, a lawyer or paralegal, a waiter or fast-casual food preparer, or one of the many other workers whose gigs will probably disappear, you may be in for some serious economic pain before abundance emerges at the other side of the new arrangement.

Autor certainly is right in arguing that the main economic problem caused by mass automation would be “one of distribution, not of scarcity.” But that’s an issue requiring some political consensus to solve, and reaching a majority isn’t easy these days in our polarized society.

From Autor’s smart article in the Journal of Economic Perspectives, “Why Are There Still So Many Jobs?”:

Polanyi’s Paradox: Will It Be Overcome?

Automation, complemented in recent decades by the exponentially increasing power of information technology, has driven changes in productivity that have disrupted labor markets. This essay has emphasized that jobs are made up of many tasks and that while automation and computerization can substitute for some of them, understanding the interaction between technology and employment requires thinking about more than just substitution. It requires thinking about the range of tasks involved in jobs, and how human labor can often complement new technology. It also requires thinking about price and income elasticities for different kinds of output, and about labor supply responses.

The tasks that have proved most vexing to automate are those demanding flexibility, judgment, and common sense—skills that we understand only tacitly. I referred to this constraint above as Polanyi’s paradox. In the past decade, computerization and robotics have progressed into spheres of human activity that were considered off limits only a few years earlier—driving vehicles, parsing legal documents, even performing agricultural field labor. Is Polanyi’s paradox soon to be at least mostly overcome, in the sense that the vast majority of tasks will soon be automated?

My reading of the evidence suggests otherwise. Indeed, Polanyi’s paradox helps to explain what has not yet been accomplished, and further illuminates the paths by which more will ultimately be accomplished. Specifically, I see two distinct paths that engineering and computer science can seek to traverse to automate tasks for which we “do not know the rules”: environmental control and machine learning. The first path circumvents Polanyi’s paradox by regularizing the environment, so that comparatively inflexible machines can function semi-autonomously. The second approach inverts Polanyi’s paradox: rather than teach machines rules that we do not understand, engineers develop machines that attempt to infer tacit rules from context, abundant data, and applied statistics.•


Inhabitants on Wrangel Island, before Semenchuk’s mad reign.


Konstantin Semenchuk, the scientist who ruled for two years in the 1930s over the Soviet station on the remote Wrangel Island, is so forgotten today he doesn’t even merit his own dedicated Wikipedia page, but it’s unlikely those he governed ever forgot what Time magazine described as the madman’s “shifty-eyed” visage.

Perhaps there’s a scholar somewhere who can explain what exactly provoked Semenchuk’s seemingly insane criminality and the tragedies it brought about, but there’s no easily accessible record that spells out anything beyond the charges and result of his trial. The facts as we know them: He was appointed as Governor of Wrangel Island in 1934 by Stalin’s Soviet Union and was accused of starving, extorting, poisoning, raping and murdering the native people and his own rival coworkers. At the conclusion of a short and sensational Moscow trial, Semenchuk was sentenced to death along with his accomplice and dogsled driver, S.P. Startsev, for, among other crimes, having killed N.A. Wulfson, a doctor whom he sent out on a fake mission through a snowstorm. What follows are a succession of 1936 articles from the Brooklyn Daily Eagle which paint pieces of a ghastly portrait.


From May 19:


From May 20:


From May 24:

Tags: , , ,

Ilya Somin of the Volokh Conspiracy blog at the Washington Post has capsules of two new books with a Libertarian bent, one of which is Markets Without Limits: Moral Virtues and Commercial Interests by Jason Brennan and Peter Jaworski. The main premise seems to be that activities deemed legal if done for no financial gain should also be permitted if there is a charge. Selling kidneys and sex are two chief examples. On the face of it, that makes a lot of sense, except…

What if placing a financial value on a kidney reduces the number of organs donated for free, making them unaffordable except to people who could bid the highest? Would we want the market regulating such a thing?

Prostitution would seem like an easier problem: It’s always existed, so let’s stop being silly and just legalize it. One argument against: If it were okay to have group-sex clubs (like the infamous Plato’s Retreat), wouldn’t that create a ground zero for STDs that could go beyond the participants? Couldn’t it be a public-health threat? Sure, people can arrange for such risky group behaviors for free now, but legalization would commodify and encourage them.

It always seems exciting to strip away regulations, but there are hardly ever simple solutions. At any rate, I look forward to reading the book. 

From Somin:

In Markets Without Limits, [Jason] Brennan and [Peter] Jaworski argue that anything you should be allowed to do for free, you should also be allowed to do for money. They do not claim that markets should be completely unconstrained, merely that we should not ban any otherwise permissible transaction solely because money has been exchanged. Thus, for example, they agree that murder for hire should be illegal. But only because it should also be illegal to commit murder for free. Their thesis is also potentially compatible with a wide range of regulations of various markets to prevent fraud, deception, and the like. Nonetheless, their thesis is both radical and important. The world is filled with policies that ban selling of goods and services that can nonetheless be given away for free. Consider such cases as bans on organ markets, prostitution, and ticket-scalping.•

Tags: , ,


In 1979, John Z. DeLorean was poised for greatness or disaster, having left behind the big automakers to create his own car from scratch, a gigantic gambit in the Industrial Age that required huge talent and hubris. Esquire writer William Flanagan profiled DeLorean that year, capturing the gambler in mid-deal, still bluffing, soon to be folding. The opening:

For a man who looks like Tyrone Power, is married to the stunning young model in the Virginia Slims and Clairol ads, and earns six figures a year, John Zachary DeLorean certainly doesn’t smile much. He can’t. Not just yet, anyway. The reason is simple: The most important project in his life has yet to be accomplished. DeLorean wants to make a monkey out of General Motors. He is on the verge of doing it, but he has a way to go.

There will be no rest for DeLorean until he finishes doing what no one else in the history of modern business has dared attempt–to design, build, and sell his very own automobile from scratch, an automobile the world’s largest car company wouldn’t, couldn’t, and probably shouldn’t build.•

In 1988, his dreams dashed and reputation destroyed, DeLorean was living in Manhattan, now a born-again Christian, still believing he would get another chance. He granted a rare interview to a local TV station from his old stomping grounds in Detroit. Funny to see him strolling through Central Park.


Ayn Rand, author of Objectivist claptrap, had very specific taste in fellow writers, which she revealed to interlocutor Alvin Toffler in a 1964 Playboy interview:

Playboy: Are there any novelists whom you admire?

Ayn Rand: Yes. Victor Hugo.

Playboy: What about modern novelists?

Ayn Rand: No, there is no one that I could say I admire among the so-called serious writers. I prefer the popular literature of today, which is today’s remnant of Romanticism. My favorite is Mickey Spillane.

Playboy: Why do you like him?

Ayn Rand: Because he is primarily a moralist. In a primitive form, the form of a detective novel, he presents the conflict of good and evil, in terms of black and white. He does not present a nasty gray mixture of indistinguishable scoundrels on both sides. He presents an uncompromising conflict. As a writer, he is brilliantly expert at the aspect of literature which I consider most important: plot structure.

Playboy: What do you think of Faulkner?

Ayn Rand: Not very much. He is a good stylist, but practically unreadable in content–so I’ve read very little of him.

Playboy: What about Nabokov?

Ayn Rand: I have read only one book of his and a half–the half was Lolita, which I couldn’t finish. He is a brilliant stylist, he writes beautifully, but his subjects, his sense of life, his view of man, are so evil that no amount of artistic skill can justify them.•

Here she is at Madison Square Garden with Phil Donahue in 1979, explaining why she wouldn’t vote for any woman to be President of the United States.


The Bobby Fischer of the recording studio, Phil Spector, ultimately more unhinged than unorthodox, couldn’t permanently muffle the dark voices within a Wall of Sound. Mad even back in 1965, he “amused” Merv Griffin, Richard Pryor, et al.

If you’re a homogeneous culture with a lack of fervor for immigration and a graying population, as Japan is, robots are a necessity, an elegant solution even, as Yoshiaki Nohara writes in a Financial Review article. For a country like America that embraces immigration (well, some of us still do) and has thrived on youthful demographics, it’s more complicated.

From Nohara:

The rise of the machines in the workplace has US and European experts predicting massive unemployment and tumbling wages.

Not in Japan, where robots are welcomed by Prime Minister Shinzo Abe’s government as an elegant way to handle the country’s aging populace, shrinking workforce and public aversion to immigration.

Japan is already a robotics powerhouse. Abe wants more and has called for a “robotics revolution.” His government launched a five-year push to deepen the use of intelligent machines in manufacturing, supply chains, construction and health care, while expanding the robotics markets from 660 billion yen ($US5.5 billion) to 2.4 trillion yen by 2020.

“The labour shortage is such an acute issue that companies have no choice but to boost efficiency,” says Hajime Shoji, the head of the Asia-Pacific technology practice at Boston Consulting Group. “Growth potential is huge.” By 2025, robots could shave 25 percent off of factory labour costs in Japan, according to the consulting firm.•

Tags: ,

« Older entries § Newer entries »