Urban Studies


If ever there were two writers whose hearts beat as one despite a generational divide, they would have been Henry Miller and Hunter S. Thompson. When I tweeted on Saturday about a 1965 Thompson article regarding Big Sur becoming too big for its own good, it reminded me of an earlier piece the Gonzo journalist had written about the community, a 1961 Rogue article which centered on Miller’s life there. Big Sur was a place the novelist went for peace and solitude, which worked out well until aspiring orgiasts located it on a map and became his uninvited cult. Despite Miller’s larger-than-life presence, Thompson focuses mostly on the eccentricities of the singular region. I found the piece at Totallygonzo.org.

 



From the November 6, 1898 Brooklyn Daily Eagle:

 

We will be measured, and often we won’t measure up.

Connected technologies will not just assess us in the aftermath, but during processes, even before they begin. Data, we’re told, can predict who’ll falter or fail or even become a felon. Like standardized testing, algorithms may aim for meritocracy, but there are likely to be unwitting biases.

And then, of course, there’s the intrusiveness. Those of us who didn’t grow up in such a scenario won’t ever get used to it, and those who do won’t know any other way.

Such processes are being experimented with in the classroom. They’re meant to improve the experience of the student, but they’re fraught with all sorts of issues.

From Helen Warrell at the Financial Times:

A week after students begin their distance learning courses at the UK’s Open University this October, a computer program will have predicted their final grade. An algorithm monitoring how much the new recruits have read of their online textbooks, and how keenly they have engaged with web learning forums, will cross-reference this information against data on each person’s socio-economic background. It will identify those likely to founder and pinpoint when they will start struggling. Throughout the course, the university will know how hard students are working by continuing to scrutinise their online reading habits and test scores.

Behind the innovation is Peter Scott, a cognitive scientist whose “knowledge media institute” on the OU’s Milton Keynes campus is reminiscent of Q’s gadget laboratory in the James Bond films. His workspace is surrounded by robotic figurines and prototypes for new learning aids. But his real enthusiasm is for the use of data to improve a student’s experience. Scott, 53, who wears a vivid purple shirt with his suit, says retailers already analyse customer information in order to tempt buyers with future deals, and argues this is no different. “At a university, we can do some of those same things — not so much to sell our students something but to help them head in the right direction.”

Made possible by the increasing digitisation of education on to tablets and smartphones, such intensive surveillance is on the rise.•
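To make the FT’s description a bit more concrete, here is a minimal sketch, in Python, of the kind of risk score such a system might compute from reading progress, forum activity and a socio-economic indicator. It is not the Open University’s actual system; every field name, weight and threshold below is invented for illustration.

```python
# A minimal, hypothetical sketch of the kind of "likely to struggle" score the
# FT excerpt describes -- not the Open University's actual system. All field
# names and weights are invented for illustration; a real model would be fit
# to historical student outcomes.

from dataclasses import dataclass


@dataclass
class StudentActivity:
    textbook_read_pct: float     # share of the online textbook read so far (0.0-1.0)
    forum_posts_per_week: float  # engagement with web learning forums
    deprivation_index: float     # hypothetical socio-economic proxy (0.0 least, 1.0 most deprived)


def predicted_risk(s: StudentActivity) -> float:
    """Return a 0-1 score; higher means more likely to founder."""
    reading_term = 0.5 * (1.0 - s.textbook_read_pct)                 # little reading -> higher risk
    forum_term = 0.3 * max(0.0, 1.0 - s.forum_posts_per_week / 3.0)  # low forum engagement -> higher risk
    background_term = 0.2 * s.deprivation_index                      # socio-economic cross-reference
    return reading_term + forum_term + background_term


# Example: flag a student one week into the course for tutor outreach.
student = StudentActivity(textbook_read_pct=0.1, forum_posts_per_week=0.5, deprivation_index=0.6)
if predicted_risk(student) > 0.7:
    print("flag for tutor outreach:", round(predicted_risk(student), 2))
```

Note where the socio-economic term sits: that cross-reference is exactly the place the unwitting biases mentioned above can creep in, since weights fit to historical data can end up penalizing students for their background rather than their behavior.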


In Jerry Kaplan’s excellent WSJ essay about ethical robots, which is adapted from his forthcoming book, Humans Need Not Apply, the author demonstrates it will be difficult to come up with consistent standards for our silicon sisters, and even if we do, machines following rules 100% of the time will not make for a perfect world. The opening:

As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?

Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.

How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?•

 


Some things in America are certainly accelerating. That includes things technological (think how quickly autonomous vehicles have progressed since 2004’s “Debacle in the Desert”) and sociological (gay marriage becoming legal in the U.S. just seven years after every Presidential candidate opposed it). Things just often seem to move more rapidly to a conclusion of one sort or another in a wired, connected world, though the end won’t always be good, and legislation will be more and more feckless in the face of the new normal.

Of course, this sort of progress sets up a trap, convincing many technologists that everything is possible in the near-term future. When I read how some think robot nannies will be caring for children within 15 years, stuff like that, I think perfectly well-intentioned people are getting ahead of themselves. 

In a Forbes piece, Steven Kotler certainly thinks acceleration is the word of the moment, though to his credit he acknowledges that the tantalizing future can be frustrating to realize. An excerpt:

You really have to stop and think about this for a moment. For the first time in history, the world’s leading experts on accelerating technology are consistently finding themselves too conservative in their predictions about the future of technology.

This is more than a little peculiar. It tells us that the accelerating change we’re seeing in the world is itself accelerating. And this tells us something deep and wild and important about the future that’s coming for us.

So important, in fact, that I asked Ken [Goffman of Mondo 2000] to write up his experience with this phenomenon. In his always lucid and always funny own words, here’s his take on the dizzying vertigo that is tomorrow showing up today:

In the early ‘90s, the great science fiction author William Gibson famously remarked, “The Future is here. It’s just not very evenly distributed.” While this was a lovely bit of phraseology, it may have been a bit glib at the time. Nanotechnology was not even a commercial industry yet. The hype around virtual reality went bust. There were no seriously funded brain emulation projects pointing towards the eventuality of artificial general intelligence (now there are several). There were no longevity projects funded by major corporations (now we have the Google-funded Calico). You couldn’t play computer games with your brain. People weren’t winning track meets and mountain climbing on their prosthetic legs. Hell, you couldn’t even talk to your cell phone, if you were among the relatively few who had one.

Over the last few years, the tantalizing promises of radical technological changes that can alter humans and their circumstances have really started to come into their own. Truly, the future is now actually here, but still largely undistributed (never mind, evenly distributed).•


The Japanese robot hotel that I’ve blogged about before (here and here) was the focus of a CBS This Morning report by Seth Doane. It’s sort of a curious piece. It focuses mostly on the novelty and the hotel’s lack of a human touch, with the correspondent stating that no one should worry about losing their job to a robot in the near future because there are still software issues to be worked out. Except that even before a total robotization of basic hotels, many positions currently handled by humans will be lost. The same goes for airports, hospitals, warehouses and restaurants.

On the face of it, what a wonderful thing if AI could handle all the drudgery and bullshit jobs, even if it could take some quality jobs and do much better work than we do. Of course, the problem is, economies in America and many other states aren’t arranged for that radical reorganization. It’s a serious cultural problem and a political one.


Bad Smell Inside My Nose

I am periodically smelling a foul odor that appears to be coming from my nasal like rotting flesh! Of course, I am concerned and I hoped that someone could offer some guidance in rectifying this matter. I also would like to add that I have been a frequent user of Afrin nasal spray for many, many years. Any assistance that you may offer would be greatly appreciated.

From the October 28, 1911 Brooklyn Daily Eagle:


I’m not in favor of capping Uber in NYC, since I don’t think it makes much economic sense to suppress innovation, but as I’ve said many times before, we should be honest about what ridesharing, and more broadly the Gig Economy, truly is: good for consumers, convenience and the environment, and really bad for Labor.

Not only will ridesharing obliterate taxi jobs that guarantee a minimum wage, but Travis Kalanick’s outfit has no interest in treating its drivers well or even in retaining them in the longer term and is only intent on using workers–even exploiting military vets–for publicity purposes. Yet community leaders and politicians, including New York’s Governor Cuomo, either keep buying this nonsense or are being dishonest.

From Glenn Blain in the New York Daily News:

ALBANY — Gov. Cuomo is siding with Uber in its battle with Mayor de Blasio.

In the latest eruption of bad blood between Cuomo and his one-time frenemy, the governor hailed the ride-sharing company as a job producer and scoffed at a City Council bill — backed by the mayor — that would cap Uber’s growth.

“Uber is one of these great inventions,” Cuomo said during an interview on public radio’s “The Capitol Pressroom” Wednesday.

“It is taking off like fire through dry grass and it’s offering a great service for people and it’s giving people jobs,” Cuomo said. “I don’t think government should be in the business of trying to restrict job growth.”•


In an attempt to rename cars in the potential driverless future, technologists and pundits have offered numerous alternatives: robocars, driverless cars, autonomous cars. Funny thing is, the term “auto” (“by oneself or spontaneous” and “by itself or automatic”), one we already use, would be particularly apt. The word doesn’t need to change–the definition will.

In a similar vein, Adrienne LaFrance of the Atlantic has written a smart piece about the word “computer,” which is entering its 3.0 stage, an era that may call back in some ways to the original definition. An excerpt:

Now, leading computer scientists and technologists say the definition of “computer” is again changing. The topic came up repeatedly at a brain-inspired computing panel I moderated at the U.S. Capitol last week. The panelists—neuroscientists, computer scientists, engineers, and academics—agreed: We have reached a profound moment of convergence and acceleration in computing technology, one that will reverberate in the way we talk about computers, and specifically with regard to the word “computer,” from now on.

“It’s like the move from simple adding machines to automated computing,” said James Brase, the deputy associate director for data science at Lawrence Livermore National Laboratory. “Because we’re making an architectural change, not just a technology change. The new kinds of capabilities—it won’t be a linear scale—this will be a major leap.”

The architectural change he’s talking about has to do with efforts to build a computer that can act—and, crucially, learn—the way a human brain does. Which means focusing on capabilities like pattern recognition and juiced-up processing power—building machines that can perceive their surroundings by using sensors, as well as distill meaning from deep oceans of data. “We are at a time where what a computer means will be redefined. These words change. A ‘computer,’ to my grandchildren, will be definitely different,” said Vijay Narayanan, a professor of computer science at Pennsylvania State University.•


David Brooks’ recent op-ed about Ta-Nehisi Coates, the one in which he approached the critic and his skin color cautiously and with some surprise, as if he’d happened upon a strange creature in a forest, was most troubling to me for two reasons, both of which were demonstrated in the same passage.

This one:

You are illustrating the perspective born of the rage “that burned in me then, animates me now, and will likely leave me on fire for the rest of my days.”

I read this all like a slap and a revelation. I suppose the first obligation is to sit with it, to make sure the testimony is respected and sinks in. But I have to ask, Am I displaying my privilege if I disagree? Is my job just to respect your experience and accept your conclusions? Does a white person have standing to respond?

If I do have standing, I find the causation between the legacy of lynching and some guy’s decision to commit a crime inadequate to the complexity of most individual choices.

I think you distort American history.•

  • The line “Does a white person have standing to respond?” is a galling transference of burden from people who’ve experienced actual oppression to people who’ve experienced none, a time-tested trick in this country, and one used repeatedly in discussions about Affirmative Action and other issues. The faux victimhood is appalling.
  • Even worse is this doozy: “I find the causation between the legacy of lynching and some guy’s decision to commit a crime inadequate to the complexity of most individual choices.” Here’s a negation of history and culpability that’s jaw-dropping. Does Brooks believe the inordinate poverty, imprisonment and shorter lifespans of African-Americans stem solely from their decisions? Does he believe Native Americans, the targets of another American holocaust, have such pronounced social problems because of poor choices they made? From Jim Crow to George Zimmerman, American law and justice have often been designed to reduce former slaves and their descendants, not to keep the peace but to maintain power. If Brooks wants to point to the Civil Rights Act as a remedy for Jim Crow, that’s fine, but you don’t get to do a victory lap for offering basic decency, and a sensible person knows that improvement in our country, often a slow and grueling and bloody thing, still leaves deep scars.•


Apart from E.L. Doctorow, no one was able to conjure the late Harry Houdini, not even his widow.

But she certainly tried. A famed debunker of spiritualists, Houdini made a pact with his wife, Bess, that if the dead could speak to the living, he would deliver to her a special coded message from the beyond. Nobody but the two knew what the special message was. When a poorly received punch to the abdomen in 1926 made it impossible for the entertainer to escape death, his widow attempted annually to contact him through séances. No words were reportedly ever exchanged. The following are a couple of Brooklyn Daily Eagle articles about her attempts to continue the marital conversation.

_________________________

From April 24, 1936:

From February 12, 1943:


From the October 28, 1911 Brooklyn Daily Eagle:

 


Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to utilize computing power to augment human intelligence and those who hope to create actual intelligent machines that have no particular stake in the condition of carbon-based life. 

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•


The Internet of Things has potential for more good and bad than the regular Internet because it brings quantification and chaos into the physical world. The largest experiment in anarchy in history will be unloosed in the 3D world, inside our homes and cars and bodies, and sensors will, for better or worse, measure everything. That would be enough of a challenge, but there’s also the specter of hackers and viruses.

A small piece from the new Economist report about IoT security concerns:

Modern cars are becoming like computers with wheels. Diabetics wear computerised insulin pumps that can instantly relay their vital signs to their doctors. Smart thermostats learn their owners’ habits, and warm and chill houses accordingly. And all are connected to the internet, to the benefit of humanity.

But the original internet brought disbenefits, too, as people used it to spread viruses, worms and malware of all sorts. Suppose, sceptics now worry, cars were taken over and crashed deliberately, diabetic patients were murdered by having their pumps disabled remotely, or people were burgled by thieves who knew, from the pattern of their energy use, when they had left their houses empty. An insecure internet of things might bring dystopia.  

Networking opportunities

All this may sound improbably apocalyptic. But hackers and security researchers have already shown it is possible.•

An uncommonly thoughtful technology entrepreneur, Vivek Wadhwa doesn’t focus solely on the benefits of disruption but on its costs as well. He believes we’re headed for a jobless future and has debated the point with Marc Andreessen, who thinks such worries are so much needless hand-wringing.

Here’s the most important distinction: If time proves Wadhwa wrong, his due diligence in the matter will not have hurt anyone. But if Andreessen is incorrect, his carefree manner will seem particularly ugly.

No one need suggest we inhibit progress, but we’d better have political solutions ready should entrenched technological unemployment become the new normal. Somehow we’ll have to work our way through the dissonance of a largely free-market economy meeting a highly automated one.

In a new Washington Post piece on the topic, Wadhwa considers some solutions, including the Carlos Slim idea of a three-day workweek and the oft-suggested universal basic income. The opening:

“There are more net jobs in the world today than ever before, after hundreds of years of technological innovation and hundreds of years of people predicting the death of work.  The logic on this topic is crystal clear.  Because of that, the contrary view is necessarily religious in nature, and, as we all know, there’s no point in arguing about religion.”

These are the words of tech mogul Marc Andreessen, in an e-mail exchange with me on the effect of advancing technologies on employment. Andreessen steadfastly believes that the same exponential curve that is enabling creation of an era of abundance will create new jobs faster and more broadly than before, and calls my assertions that we are heading into a jobless future a luddite fallacy.

I wish he were right, but he isn’t. And it isn’t a religious debate; it’s a matter of public policy and preparedness. With the technology advances that are presently on the horizon, not only low-skilled jobs are at risk; so are the jobs of knowledge workers. Too much is happening too fast. It will shake up entire industries and eliminate professions. Some new jobs will surely be created, but they will be few. And we won’t be able to retrain the people who lose their jobs, because, as I said to Andreessen, you can train an Andreessen to drive a cab, but you can’t retrain a laid-off cab driver to become an Andreessen.  The jobs that will be created will require very specialized skills and higher levels of education — which most people don’t have.

I am optimistic about the future and know that technology will provide society with many benefits. I also realize that millions will face permanent unemployment.•


While it may not be good for our environment, industrialization has been good for our wallets. Transitioning from an agriculture-based economy to a manufacturing one allows a country to rapidly increase its wealth (and to contribute further wealth to other nations supplying the resources). In this century, China is, of course, the example writ large.

A post-industrial economy in which manufacturing is no longer as valuable would seem to be the new reality, and a Disney economy of service and entertainment isn’t very transferable. In a Bloomberg View column, Noah Smith attempts to figure out a way forward for nations that are playing catch-up in the Information Age. An excerpt:

The main engine of global growth since 2000 has been the rapid industrialization of China. By channeling the vast savings of its population into capital investment, and by rapidly absorbing technology from advanced countries, China was able to carry out the most stupendous modernization in history, moving hundreds of millions of farmers from rural areas to cities. That in turn powered the growth of resource-exporting countries such as Brazil, Russia and many developing nations that sold their oil, metals and other resources to the new workshop of the world. 

The problem is that China’s recent slowdown from 10 percent annual growth to about 7 percent is only the beginning. The recent drops in housing and stock prices are harbingers of a further economic moderation. That is inevitable, since no country can grow at a breakneck pace forever. And with the slowing of China, Brazil and Russia have been slowing as well — the heyday of the BRICs (Brazil, Russia, India and China) is over. 

But the really worrying question is: What if other nations can’t pick up the slack when China slows? What if China is the last country to follow the tried-and-true path of industrialization? 

There is really only one time-tested way for a country to get rich. It moves farmers to factories and imports foreign manufacturing technology. When you move surplus farmers to cities, their productivity soars — this is the so-called dual-sector model of economic development pioneered by economist W. Arthur Lewis. So far, no country has reached high levels of income by moving farmers to service jobs en masse. Which leads us to conclude that there is something unique about manufacturing.•

 


These classic 1901 photographs show the Budapest offices of Telefon Hirmondo (or Telephone Herald), a newspaper service read via telephone to subscribers all over that city. Begun in 1893 by Transylvanian inventor Theodore Puskas–who died just a month after the service was launched–the Herald featured updated news all day and live music at night. It cost about two cents a day at the outset. At its height, the company had more than 15,000 subscribers and licensed similar setups in Italy and America. Local department stores, hotels and restaurants purchased several lines so that their customers could be hooked into flowing news and entertainment almost a century before wi-fi. The popularity of radio in the 1920s, however, made the telephone newspaper superfluous. An article in the November 17, 1902 Brooklyn Daily Eagle described the service. The “Bellamy” it refers to is Edward Bellamy, whose utopian novel Looking Backward had been published to much acclaim in 1888.

The schedule for the U.S. version (stationed in Newark), which started nearly 20 years after the initial Budapest service:

  • 8:00: Exact astronomical time.
  • 8:00-9:00: Weather, late telegrams, London exchange quotations; chief items of interest from the morning papers.
  • 9:00-9:45: Special sales at the various stores; social programs for the day.
  • 9:45-10:00: Local personals and small items.
  • 10:00-11:30: New York Stock Exchange quotations and market letter.
  • 11:30-12:00: New York miscellaneous items.
  • Noon: Exact astronomical time.
  • 12:00-12:30: Latest general news; naval, military and congressional notes.
  • 12:30-1:00: Midday New York Stock Exchange quotations.
  • 1:00-2:00: Repetition of the half day’s most interesting news.
  • 2:00-2:15: Foreign cable dispatches.
  • 2:15-2:30: Trenton and Washington items.
  • 2:30-2:45: Fashion notes and household hints.
  • 2:45-3:15: Sporting news; theatrical news.
  • 3:15-3:30: New York Stock Exchange closing quotations.
  • 3:30-5:00: Music, readings, lectures.
  • 5:00-6:00: Stories and talks for the children.
  • 8:00-10:30: Vaudeville, concert, opera.

 

Reid Hoffman of LinkedIn published a post about what road transportation would be like if cars became driverless and communicated with one another autonomously.

There’ll certainly be benefits. If your networked car knows ahead of time that a certain roadway is blocked by weather conditions, your trip will be smoother. More important than mere convenience, of course, is that there would likely be far fewer accidents and less pollution.

The entrepreneur believes human-controlled vehicles will be restricted legally to specific areas where “antique” driving can be experienced. There’s no timeframe given for this legislation, but it seems very unlikely that it would occur anytime soon, nor does it seem particularly necessary since the nudge of high insurance rates will likely do the trick. 

Hoffman acknowledges some of the many problems that would attend such a scenario if it’s realized. In addition to non-stop surveillance by corporations and government and the potential for large-scale hacking, there’s skill fade to worry about. (I don’t think the latter concern will be precisely remedied by tooling down a virtual road via Oculus Rift, as Hoffman suggests for those pining for yesteryear.) I think the most interesting issues he conjures are advertisers paying car companies to direct traffic down a certain path to expose travelers to businesses or the best routes being conferred upon higher-paying customers. That would be the Net Neutrality argument relocated to the streets and highways.

It’s definitely worth reading. An excerpt:

Autonomous vehicles will also be able to share information with each other better than human drivers can, in both real-time situations and over time. Every car on the road will benefit from what every other car has learned. Driving will be a networked activity, with tighter feedback loops and a much greater ability to aggregate, analyze, and redistribute knowledge.

Today, as individual drivers compete for space, they often work against each other’s interests, sometimes obliviously, sometimes deliberately. In a world of networked driverless cars, driving retains the individualized flexibility that has always made automobility so attractive. But it also becomes a highly collaborative endeavor, with greater cooperation leading to greater efficiency. It’s not just steering wheels and rear-view mirrors that driverless cars render obsolete. You won’t need horns either. Or middle fingers.

Already, the car as network node is what drives apps like Waze, which uses smartphone GPS capabilities to crowd-source real-time traffic levels, road conditions, and even gas prices. But Waze still depends on humans to apprehend the information it generates. Autonomous vehicles, in contrast, will be able to generate, analyze, and act on information without human bottlenecks. And when thousands and then even millions of cars are connected in this way, new capabilities are going to emerge. The rate of innovation will accelerate – just as it did when we made the shift from standalone PCs to networked PCs.

So we as a society should be doing everything we can to reach this better future sooner rather than later, in ways that make the transition as smooth as possible. And that includes prohibiting human-driven cars in many contexts. On this particular road trip, the journey is not the reward. The destination is.•


Did you ever notice that futuristic metallic clothes don’t really ever arrive in stores near you? That’s because while they’re possible, perhaps even useful in some cases, they aren’t truly necessary.

In Kevin Kelly’s 2010 book, What Technology Wants, he wrote of the paths forward for the complexity of technology. Kelly believed the physical world around us would not change drastically in most ways, but that some technologies that grow amazingly complex would be retrofitted onto our more “primitive” world. I mostly agree, though I don’t think Kelly was correct to include automobiles in that category. They’ve since sped in the other direction.

The excerpt:

There are several different ways technology’s complexity can go:

Scenario #1. As in nature, the bulk of technology remains simple, basic, and primeval because it works. And the primitive works well as a foundation for the thin layer of complex technology built upon it. Because the technium is an ecosystem of technologies, most of it will remain in its equivalent microbial stage: brick, wood, hammers, copper wires, electric motors and so on. We could develop nanoscale computers that reproduced themselves, but they wouldn’t fit our fingers. For the most part, humans will deal with simple things (as we do now) and only interact with the dizzily more complex occasionally, just as we now do. (For most of our day our hands touch relatively coarse artifacts.) Cities and houses remain similar, populated with a veneer of fast-evolving gadgets and screens on every surface.

Scenario #2. Complexity, like all other factors in growing systems, plateaus at some point, and some other quality we had not noticed earlier (perhaps quantum entanglement) takes its place as the prime observable trend. In other words, complexity may simply be the lens we see the world through at this moment, the metaphor of the era, when in reality it is a reflection of us rather than property of evolution.

Scenario #3. There is no limit to how complex things can get. Everything is complexifying over time, headed toward that omega point of ultimate complexity. The bricks in our building will become smart, the spoon in our hand will adapt to our grip; cars will be as complicated as jets are today. The most complex things we use in a day will be beyond any single person’s comprehension.

If I had to, I would bet, perhaps surprisingly, on scenario #1. The bulk of technology will remain simple or semi-simple, while a smaller portion will continue to complexify greatly. I expect our cities and homes a thousand years hence to be recognizable, rather than unrecognizable.•


Engineering rather than organic growth has driven the reordering of much of modern Chinese life, and it’s at the crux of Beijing’s ongoing radical transformation into a megacity of 130 million people. Complicating matters are the absence of property taxes and the restriction that keeps local municipalities from retaining the other fees they collect, so basic infrastructure and services have often been lacking or altogether absent during the capital city’s massive makeover.

From Ian Johnson’s very interesting NYT report:

The planned megalopolis, a metropolitan area that would be about six times the size of New York’s, is meant to revamp northern China’s economy and become a laboratory for modern urban growth.

“The supercity is the vanguard of economic reform,” said Liu Gang, a professor at Nankai University in Tianjin who advises local governments on regional development. “It reflects the senior leadership’s views on the need for integration, innovation and environmental protection.”

The new region will link the research facilities and creative culture of Beijing with the economic muscle of the port city of Tianjin and the hinterlands of Hebei Province, forcing areas that have never cooperated to work together. …

Jing-Jin-Ji, as the region is called (“Jing” for Beijing, “Jin” for Tianjin and “Ji,” the traditional name for Hebei Province), is meant to help the area catch up to China’s more prosperous economic belts: the Yangtze River Delta around Shanghai and Nanjing in central China, and the Pearl River Delta around Guangzhou and Shenzhen in southern China.

But the new supercity is intended to be different in scope and conception. It would be spread over 82,000 square miles, about the size of Kansas, and hold a population larger than a third of the United States. And unlike metro areas that have grown up organically, Jing-Jin-Ji would be a very deliberate creation.•


"I have to get my guinea pig an operation to have her ovaries removed."

“I have to get my guinea pig an operation to have her ovaries removed.”

Mac desk top – $300 (Brooklyn)

Hi, I’m selling a Mac desktop with mouse and keyboard works great just has two lines going through the screen but besides that it works well. There are pictures on here of what it looks like and the operating system that’s in it now. I’m asking 300 and it comes with the box.

Cash and carry. This is not a joke I’m selling it because I have to get my guinea pig an operation to have her ovaries removed as she has ovarian cysts so the money will go along way to getting it done for her.

Life to me is just about having a little fun and doing some good things for others before time runs out–and that’s what it’s doing, rapidly. So why would our comic-book culture depress me so? Clearly it’s fun for many people. It isn’t just because I’m not personally interested in the form. That’s true of many things that don’t make me sad.

Overall, I’m glad the “barbarians” have stormed the gates, pleased technology has allowed everyone in the audience to essentially be part of the show, as Glenn Gould long ago predicted it would. The economics aren’t good for many professionals, but I still vote for the mob. I have no problem with Kris Jenner being the new Joe Jackson and a big ass being the new moonwalk. It’s not nothing, just something different.

Still, sadness.

I guess what troubles me is that it’s all centered on consumerism. It’s not only about owning a product but becoming one. That’s true of people creating free content from their personal information for Facebook and citizens being considered brands and fans donning costumes of their favorite toys at conventions. We’ve run out of things to eat so now we’re eating ourselves. That’s what our mix of democracy and capitalism has led us to.

A.O. Scott of the New York Times went to Comic-Con in San Diego and saw himself when gawking at X-Men, Yodas and zombies. His resulting article is a brilliant summation of so many things in the culture, even if he’s not quite as somber as I am about this new normal. An excerpt:

For a long weekend in July, this city a few hours down the freeway from Hollywood and Disneyland becomes a pilgrimage site for something like 130,000 worshipers. It’s both ordeal and ecstasy, and the secular observer is in no real position to judge. You arrive as an ethnographer, evolve into a participant observer and start to feel like a convert, an addict to what is surely the modern-day opiate of the masses.

What are the doctrines and canons of this faith? In some ways, they aren’t so mysterious. The Comic-Con pilgrims, with their homemade costumes and branded bags of merchandise, represent the fundamentalist wing of the ecumenical creed of fandom. Almost everyone in the world outside falls somewhere on the spectrum of observance. We go to movies, we watch television, we build things out of Lego. I went to Comic-Con thinking I was going to study the folkways of an exotic tribe. I didn’t suspect I would find myself.

Literally where I found myself, for most of the four days, was in line. It’s the shared experience that unites the diverse subcultures, and the most available topic of conversation is just how long and how many those lines are. You could either figure out which line you wanted to join — would you rather be attacked by zombies or score swag from “The Peanuts Movie”? Cop an “exclusive” Marvel toy or a drawn-to-order sketch from the indie animator Bill Plympton? — or follow the herd. “What’s this line for?” is a question I heard most often from people who were already a dozen or more bodies into it.

In other eras and societies — the Great Depression, the Soviet Union — long lines signify scarcity or oppression. In the Bizarro World that is 21st-century America, it’s the opposite: Long lines are signs of abundance and hedonism. Much can be learned about a civilization from studying its queuing habits, and Comic-Con surpasses even the Disney theme parks in the sophistication of its crowd management and the variety of its arrangements.•


From the July 24, 1911 Brooklyn Daily Eagle:


In a Guardian piece, Paul Mason, author of the forthcoming Postcapitalism, argues that in the wake of the 2008 economic collapse, information technology is toppling capitalism in a way that a million marching Marxists never could, with the new normal unable to function by the dynamics of the old order.

I agree that a fresh system is incrementally forming–especially in regard to work and likely taxation–though it’s probably a heterogeneous one that won’t be absent free markets in the near term and perhaps the longer one as well. “Abundance” is a word used by a lot of people, including the author, in describing the future, but it may not be what they think it is. Food has been abundant for many decades and there have always been hungry, even starving, people.

Mason quotes Stewart Brand’s famous line “information wants to be free,” but let’s remember the whole quote: “Information wants to be free. Information also wants to be expensive. …That tension will not go away.”

At any rate, I’m with Mason in thinking we’re on the precipice of big changes wrought by the Internet and its many offshoots and can’t wait to read his book. An excerpt:

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm.•

