
Long-range space exploration plans know their limitations, as changing politics lead to shifting priorities. In a 2007 New York Times essay, Dennis Overbye fretted about the failure of the Space Age to stretch much further than boots on the moon, but he also presciently realized that new-millennium geopolitics and technologist gazillionaires would likely help us shoot for Mars. An excerpt:

“Our machines have gone ahead of us. But someday people will hike through the canyons of Mars. I just don’t know when or how or who. Maybe it will be the Chinese, who seem to still feel that they have something to prove as a nation. Maybe it will be billionaire adventurers — like the Google founders who just put up a $25 million prize for the first private Moon lander, who are free to risk their own money and don’t have to answer to Congress when things go wrong, as they sometimes will — who make the dream come true, for at least a few.

There will always be someone willing to ride a pillar of fire into the unknown, but it won’t be me. I don’t want to go to Mars anymore. I no longer have the stuff — if in fact I ever did — to camp out in a tin can for two years. I’d be afraid to be so far from the Earth and my family for so long.

I don’t want my daughter to go either, for the same reason. When our children do go off forever across the void then we will have a chance to find out if we are as strong as our ancestors who bundled their children onto ships in the hope they would reach a better world across the ocean. Someday, somebody will go and not come back, and humans will have escaped their nest, for better or for worse.

There is no galactic immortality. Everything we are and have done, the whole Milky Way with its billions of stars, is eventually destined to be swallowed up in a black hole. Neither ourselves nor our works will survive the end of the universe, if dark energy eventually blows it apart, no matter what we do. All we own is the present, so it behooves each of us to live each moment impeccably, guided by whatever lights we choose. Speaking only for myself, while we are around we might as well embrace the light and the unknown, the violence and vastness that terrify us.

My sci-fi dreams are dead, but Sir Richard Branson and his fellow space entrepreneurs say they have business plans. If Mr. Branson manages to get the cosmologist Stephen Hawking into space and back, he will have done more for the cause of space exploration than 25 years of space shuttles going around in circles.

Watching the Apollo astronauts recount their travels to the Moon in the documentary “In the Shadow of the Moon,” I was wiping away tears for a time when we had bold dreams and leaders who, for whatever motives, could make them happen. Neil Armstrong’s footprints on the Moon are as crisp as the day he made them.

I will always be glad I was alive when he took that small step, even if we are still waiting for the next big leap.



Via the wonderful Browser, I came across David Cay Johnston’s Al Jazeera essay, “America Should Be More Like Disneyland,” which suggests the U.S. invest in infrastructure and our future like Walt did with his walled-in world. It’s not the first time a thinker has suggested we look to a corporation for tips on saner public governance. In the 1990s, Los Angeles sci-fi writer Ray Bradbury, worried by the Rodney King riots and crime against his own family members, thought American cities were becoming hyper-violent and needed to turn themselves over to corporations that would remake metropolises as large-scale malls. He had, of course, missed the macro picture: Crime was actually diminishing in the U.S. at a remarkable rate. It still is. Johnston’s ideas are far more reasonable. An excerpt:

“At the park’s grand opening on July 17, 1955, Walt Disney said, ‘This happy place’ is ‘dedicated to the ideals, the dreams and the hard facts that have created America.’

To raise the $17 million ($150 million in today’s money) needed to turn Anaheim’s orange groves into the Magic Kingdom, Disney mortgaged his home and created a Sunday night television show on ABC. Today the Disney Co. owns ABC, because Walt Disney’s vision of a richer future paid off beyond even his imagination.

But as a people, we disinvest in America. Even though the country could borrow at extremely low interest rates, we refuse to take the risk. Instead, we let infrastructure deteriorate, cut school budgets, close libraries, raise college tuition and pay ever more for police and security even though crime has been declining for decades.

In an era when human knowledge is expanding at a rapidly accelerating rate, Congress cuts budgets for basic research, thereby encouraging smart young scientists to go overseas because they can get funding abroad. And of course the countries that receive them will reap the benefits of their discoveries.

Disney’s brother Roy and other critics thought Disneyland was too audacious and costly an idea to succeed. Yet it not only succeeded; it prompted a wave of theme parks around the world, including the complete remake of Las Vegas, starting in 1989 with the Mirage, a themed casino resort on the Disneyland model.

Creating a happy place

Imagine if we applied that same vision of a better world from infrastructure to education and scientific research or even to just having public restrooms — and clean ones at that.”


It doesn’t make me an irredeemable cynic that I had little hope an elevated-pod transportation system would skim the sky in Tel Aviv right on schedule. It’s easier, however, for a private concern to install such ambitious infrastructure, as Google is planning to do with pedal- and solar-powered pods on its sprawling campus, with the help of an outfit named Shweeb. From Adele Peters at Fast Company:

“The pods come in different sizes that can hold two, five, or 12 people, and can also be connected to carry packages or other cargo. The guideways that hold the cars can also double as carriers for power lines and cables. ‘We’ve envisioned the guideway that the pod runs on to become a conduit for all things electrified,’ [Stephen] Bierda says. ‘It’s a multi-purpose piece of infrastructure.’

Cyclists can throw a bike in the back of a pod, and ultimately, the company believes that the system will make it easier to bike on the ground by getting cars off the road. ‘It’s 3-D transport,’ Bierda says. ‘That leaves the surface level—streets and sidewalks—for people to walk and cycle on.’

The system is at least 30% cheaper to build than other mass transit infrastructure, and fully renewably powered. It can also be tied to the grid to feed excess electricity back to the city. ‘We envision this as a public utility,’ Bierda explains. ‘If it’s run like a utility, that also helps secure the air rights to build over streets, and parks, and parking lots.’

First built for an amusement park in New Zealand, the system won $1 million in Google’s 10^100 competition in 2010 for further development as a form of alternative transportation. The system will use tech from Google’s robocars to track the pods, and Google’s Mountain View campus will be one of the first to build a track. Some 22 other customers are also waiting in line.

The big question is whether something like this can really take off; planners started talking about personal rapid transit systems in the 1950s, but the idea hasn’t had much success.”



Irrational exuberance and ponderous dial-up connections doomed many a venture during Web 1.0, that stone-age time before computers could slide out of our pockets, summon cars and photograph our genitals. In retrospect, not all these first-wave business ideas were bad ones. From a Wired piece by Robert McMillan about the revenge of the 1990s dot-bombs:

“Now that the internet has become a much bigger part of our lives, now that we have mobile phones that make using the net so much easier, now that the Googles and the Amazons have built the digital infrastructure needed to support online services on a massive scale, now that a new breed of coding tools has made it easier for people to turn their business plans into reality, now that Amazon and others have streamlined the shipping infrastructure needed to inexpensively get stuff to your door, now that we’ve shed at least some of that irrational exuberance, the world is ready to cash in on the worst ideas of the ’90s.

“WebVan burned through $800 million trying to deliver fresh groceries to your door, and today, we have Amazon Fresh and Instacart, which are doing exactly the same thing—and doing it well. People laughed when Kozmo flamed out in 1998, but today, Amazon and Google are duking it out to provide same-day shopping delivery. A year ago, Kozmo even told WIRED it was making a comeback ‘in the near future.’

We’re still waiting for Kozmo 2.0. But there’s also good reason to applaud the folks behind They wanted to create their own internet-based currency, and though Flooz was a flop, bitcoin has now shown that digital currency can play a huge role in the modern world.

Even the idea is looking mighty good.”


The intersection of athletics and computers at the dawn of the World Wide Web was the crux of Donald Katz’s 1995 Sports Illustrated article about Paul Allen attempting to work his Microsoft mindset into the NBA. An excerpt:

“Besides the thronelike easy chairs built into the wall along one side of the regulation basketball court and the Santa Fe-style high-desert oil paintings on the opposite wall, the distinguishing features of Allen’s arena are video monitors of the sort that can be seen everywhere on his estate. Each of the screens is electronically tethered to dozens of other monitors and computer systems inside the Allen compound. Simply touching a display on one of the screens can achieve high-speed access to satellites circling the globe and therefore to just about any sports event being broadcast anywhere in the world. Inside his plush 20-seat theater, equipped with a 10-by-14-foot screen, Allen can view ultra-high-definition video images that less-privileged consumers won’t be able to see for several years. And if Paul Allen must miss a Blazer game because he’s out at sea on his 150-foot yacht, the team will tape the game at a cost of around $30,000 and beam it to him as a digital stream of private entertainment.

From any keyboard inside his home, Allen can also access computers strewn throughout the vast web of his futuristic business empire. He can send E-mail out to Blazer forward Buck Williams or to coach P.J. Carlesimo’s address in cyberspace. ‘I’m not using these —- computers, and I’m not readin’ no E-mail!’ Carlesimo declared upon being presented with his laptop shortly after he was hired by the Blazers last summer. But since then P.J. has seen the light and joined his boss in what Allen has long called ‘the wired world.’ …

Allen, 42 and the 13th-richest American, has lately spent $1.2 billion of his $4.6 billion Microsoft-spawned fortune on a broad array of digital satellites, wireless communications outfits, multimedia software and communications hardware firms, futuristic research companies and high-profile entertainment ventures. Last March, Allen underscored the convergence of Hollywood and the digital media age through his investment of $500 million in DreamWorks SKG, the studio being assembled by Steven Spielberg, David Geffen and Jeffrey Katzenberg. And as Allen’s executives and research scientists work more subtly to merge economic power, advanced technologies and big-time sports, they are similarly defining a future in which the experience of sports will surely be changed.

Down in Portland, Allen’s Trail Blazer organization is managing the construction of a $262 million sports arena called the Rose Garden, which will be strewn with computers and wired with miles of fiber-optic cable. The 70 luxury suites inside the Rose Garden will be equipped with teleconferencing gear and be fed channels full of computer-generated sports statistics. The concourses of the Rose Garden will be draped with glowing video screens, and Allen eventually wants to feed stats and replays and stock quotes and weather reports and images of games being played in other places to a tiny screen located at every seat.

Not unlike other team owners who have invested in new stadiums and arenas over the past year, Allen is considering a virtual-reality entertainment center next door to the Rose Garden. An official Blazer ‘home page’ already connects on-line fans to the team’s own Internet address. The Blazers’ staff includes a seasoned multimedia software developer assigned to create sports products that the Blazers can sell to other teams. ‘My mission,’ team president Marshall Glickman proclaimed early in this past NBA season, ‘is to integrate Paul Allen’s world of computers and communications with my own world of sports.’

During the ‘information superhighway’ media frenzy that began toward the end of 1993, a Seattle Times reporter imagined a day in the not-too-distant future when a fan who got home late during a Seattle SuperSonic game could digitally fast-forward through the recorded action until he caught up with the real-time telecast. After a Shawn Kemp dunk, the reporter presumed, the viewer could click on the image of Kemp and call up his latest stats, read stories about Kemp from newspapers all over the world or connect with the Shawn Kemp Fan Club in Indiana. Another click would automatically order Shawn Kemp souvenirs or tickets to a coming Sonic game. The viewer could change the camera angle from which he or she was seeing the game, focusing on Kemp or watching the action from overhead.

And all of this, the newspaper article pointed out, could occur within the boundaries of Allen’s multimedia portfolio. ‘Once the high-speed digital channel is wired into people’s houses,’ Allen says before finally nailing a three-point basket, ‘all of that– and more — becomes pretty easy to do.’

Early evidence indicates that many of the innovations now understood only by technologists like Allen will intensify our experience of spectator sports — just as audio CDs have enhanced the secondhand experience of a live symphony. The informational and visual options available to fans sitting at home or in the stands are already multiplying as sports become proving grounds for advanced digital technologies. But these technologies also raise a broad array of questions, from immediate concerns (Will computerized gambling soon be inextricably linked with big-time sports?) to new business issues (Will people pay for new services?). Then there are longer-term issues: Will computer-based technologies someday offer sportslike entertainment so enthralling and convenient and highly customized that games created from bits of the best of real sports and bits of the best sports fantasies render live games obsolete?”


A global marketplace essentially demands Hollywood build franchises and make sequels, with their relatively controllable advertising costs, and that’s not necessarily a bad thing. While the long-ago serials were B-movie productions, cheaply made things, and sequels from the ’70s through the ’90s usually exploited a product rather than nurtured it, by the new century no expense was spared to make several highly entertaining variations on a theme. That doesn’t mean success is always–or even often–achieved, but that’s the intention. A little from Matthew Garrahan of the Financial Times about the unsure early attempts to build can’t-miss franchises:

“While the current vogue for producing and maintaining franchises is relatively new in Hollywood, the industry has a long history of producing sequels. In the 1930s, episodic movie ‘serials’ such as Buck Rogers and Flash Gordon drew cinema audiences week after week. The horror movies produced by Universal Studios in the same decade also spawned numerous sequels, such as Bride of Frankenstein and Dracula’s Daughter. In the decades that followed, the biggest studios produced countless genre movies — comedies, westerns, musicals, gangster films — featuring stars such as James Cagney, Fred Astaire, John Wayne and the Marx Brothers, in which the lead actor would often play variations on the same character.

The paradigm shifted in the 1970s, when The Godfather: Part II became the first sequel to win best picture at the Academy Awards. But, far from ushering in an era of critical excellence, the studios saw instead an opportunity to squeeze their successes for all they were worth, spawning a catalogue of inferior follow-ups to hit films over the next 20 years. The blockbuster success of Steven Spielberg’s 1975 Oscar-winning Jaws, for instance, spawned four sequels, each worse than its predecessor (Spielberg wisely declined to be involved in any of them), culminating in the appalling Jaws: The Revenge (1987) regarded widely as one of the worst films ever made. ‘I’ve never seen it, but by all accounts it’s terrible,’ Michael Caine, the film’s star, once said. ‘However, I have seen the house it built and it’s terrific.’

In the 1980s and early 1990s a clear trend for sequels had been established, particularly with action films. Assuming the first film was a success, a second could be produced using essentially the same plot but in a different setting.”


AI can kill all of you humans, and Sir Clive Sinclair will not give a fig. But until that fine day when we’re replaced by machines even more unfeeling than ourselves, let us meditate for a moment on a product the entrepreneur thrust upon the world in 1985, the Sinclair C5. It was a battery-powered EV tricycle, and it was a gigantic flop, the Edsel of pedal transport, a DeLorean dreamed up without the aid of cocaine courage. Was the vehicle wrong or just the time? From Jack Stewart’s BBC piece “Was the Sinclair C5 30 Years Too Early?“:

“The C5 had an almost instant image problem. The press and public saw the C5 less as a new mode of transport, and more as a toy – and an expensive one at that. Yours for only £399 (£1,120 in today’s money), and if you wanted to go uphill, you would have to pedal. But the C5 went from drawing board to prototype without any market research, according to Andrew Marks, who wrote an investigation into the vehicle’s failure for the European Journal of Marketing four years after the C5 was released. Sir Clive believed he could create a market where none had existed before, using changes in legislation that allowed electric pedal vehicles and improving battery technology. But, as Marks argues, the C5 programme seemed to be dictated by the company’s conviction, rather than by public demand.

The C5 was also immediately criticised for its safety, or lack thereof. ‘I don’t like the ideas of driving it in traffic, frankly,’ says [BBC reporter Dick] Oliver in [his] report. The driving position was extremely low, making it effectively invisible to other vehicles. It could also be operated by anyone over 14 years old in the UK, without a license or helmet. Famed racing driver Stirling Moss expressed his concerns too.

‘If people get into it and in any way think that they’re in a car because they’re sitting down, then they’re in trouble.’

Media reviews were also harsh about the range – the battery did not live up to expectations – and there was too much exposure to the elements. In retrospect a January launch in London may not have been the most enticing demonstration to carry out. The poor reception meant orders were minimal, and production ceased around eight months later.”


“Imagine a vehicle that can drive you five miles for a penny”:


Holy shit, the Earth is dying! Let’s get the fuck out of here!

Easier said than done, of course. What might be helpful as we colonize the solar system are bio-spacesuits. From Liz Stinson’s Wired piece about MIT designer Neri Oxman, who’s been working on organic second skins:

“If you’re planning on extended interplanetary travel, you’re going to need more than a standard spacesuit. Sustaining human life on, say, Mercury or the moons of Jupiter and Saturn, means battling the worst of conditions. ‘Crushing gravity, ammonius air, prolonged darkness and temperatures that would boil glass or freeze carbon dioxide,’ says Neri Oxman, a designer and professor at MIT’s Media Lab. Sounds like paradise, doesn’t it?

For a new speculative design project called Wanderers Oxman and her team of students partnered with 3D printing behemoth Stratasys and the computational design duo Deskriptiv to build four wearable skins that serve as bio-augmented space suits. Each is designed to battle a specific extreme environment by transforming elements found there into ones that can sustain human life. ‘Some are designed to photosynthesize, converting daylight into energy, others bio-mineralize to strengthen and augment human bone,’ Oxman explains. In doing so, they offer a wild glimpse of a future in which the barriers between biology and technology have fallen away.

Mushtari, designed for life on the moons of Jupiter, is an external digestive tract that fits around the stomach. Oxman designed the organ system to digest biomass, absorb nutrients and expel waste. Humans would be able to convert daylight into consumable sucrose via photosynthetic bacteria that flows through the translucent 3-D printed tracts.”


I put up a post last month about taxi commissions needing to compete with Uber at its own game by creating an app that allows medallion owners and their drivers to offer customers the best of ridesharing (smartphone hailing, digital payments) without the negatives (surge pricing, unethical business and labor practices). It seems NYC and Chicago were already thinking along those lines. Now it will come down to properly executing the system. From Mike Isaac at the New York Times:

“If you can’t beat them, join them.

Regulators in Chicago have approved a plan to create one or more applications that would allow users to hail taxis from any operators in the city, using a smartphone. In New York, a City Council member proposed a similar app on Monday that would let residents ‘e-hail’ any of the 20,000 cabs that circulate in the city on a daily basis.

It is a new tack for officials in the two cities, a reaction to the surging use of hail-a-ride apps like Uber and Lyft.

Regulators in New York have not yet voted on the bill on the e-hail app, which was first proposed by Benjamin Kallos, a councilman who represents the Upper East Side and Roosevelt Island.

In Chicago, the plan to create such apps is part of the so-called Taxi Driver Fairness Reforms package, a plan backed by a taxi union and City Council members that would update regulations around taxi cab lease rates and violations like traffic tickets, among others. The city is expected to solicit third-party application developers to build the official app or set of apps. The City Council gave no further details on its selection criteria, nor did it give information on how the initiative would be financed.

‘These reforms represent what is necessary to further modernize this growing industry,’ Rahm Emanuel, Chicago’s mayor, said in a statement.”


Climate change is in the process of killing off an amazing array of species, perhaps humans among them. Another ramification of rising temperatures is the consolidation of species newly introduced to one another by the vanishing of habitats. It’s biodiversity disappearing into itself. The opening of “A Strange New Gene Pool of Animals Is Brewing in the Arctic,” Tim McDonnell’s Nautilus article about the rules of modern mating:

“The journey began in spring 2010, just as the sea ice surrounding the North Pole began its annual melt. Two bowhead whales, 50-foot-long behemoths that scour the Arctic seas for plankton, each started from their homes on opposite sides of North America—one in the Beaufort Sea north of Alaska, the other in Baffin Bay on the west side of Greenland. As the summer progressed, sea ice shrank (to its third-lowest cover in the last 30 years), and the whales swam toward each other through the now ice-free passage above the continent. Two independent teams of scientists from Canada and the United States watched the whales closely via satellite. ‘We were all pretty excited,’ recalls Kristen Laidre, a biologist at the University of Washington and member of the U.S. team.

In September, in an inlet some 1,800 miles north of Fargo, North Dakota, where the North American landmass dissolves into the Arctic Ocean, the whales met in the middle. They spent two weeks together, and although not much happened before they turned around, the meeting was historic. The fossil record indicates the last time Pacific and Atlantic bowhead whales came into contact was at least 10,000 years ago.

In the last 40 years, the Arctic has warmed by about 3.5 degrees Fahrenheit, more than twice the overall global rise in that same period. Already grizzly bears are tromping into polar bear territory while fish like cod and salmon are leaving their historic haunts to follow warming waters north. One tangible result of the migration, scientists report, is that animals will learn to live with new neighbors. But polar biologists worry that animals could get a little too friendly with each other. With less ice clogging Arctic seas, whales are ranging farther; meanwhile, animals like seals that breed on the ice have fewer places to go. In both cases, the chances of encountering a different species jump. ‘All of a sudden, hybridization will skyrocket,’ says Brendan Kelly, a polar ecologist at the National Science Foundation.”


Ross Perot, who believed back in 1969 that we would one day be guided by a computerized town hall, has put a small portion of his billions toward saving ENIAC, arguably the world’s first general-purpose electronic computer, from the scrap heap. The opening of a Wired piece about the salvaging by Brendan I. Koerner, expert on fraught rescue missions:

“Eccentric billionaires are tough to impress, so their minions must always think big when handed vague assignments. Ross Perot’s staffers did just that in 2006, when their boss declared that he wanted to decorate his Plano, Texas, headquarters with relics from computing history. Aware that a few measly Apple I’s and Altair 8800s wouldn’t be enough to satisfy a former presidential candidate, Perot’s people decided to acquire a more singular prize: a big chunk of ENIAC, the ‘Electronic Numerical Integrator And Computer.’ The ENIAC was a 27-ton, 1,800-square-foot bundle of vacuum tubes and diodes that was arguably the world’s first true computer. The hardware that Perot’s team diligently unearthed and lovingly refurbished is now accessible to the general public for the first time, back at the same Army base where it almost rotted into oblivion.

ENIAC was conceived in the thick of World War II, as a tool to help artillerymen calculate the trajectories of shells. Though construction began a year before D-Day, the computer wasn’t activated until November 1945, by which time the U.S. Army’s guns had fallen silent. But the military still found plenty of use for ENIAC as the Cold War began—the machine’s 17,468 vacuum tubes were put to work by the developers of the first hydrogen bomb, who needed a way to test the feasibility of their early designs. The scientists at Los Alamos later declared that they could never have achieved success without ENIAC’s awesome computing might: the machine could execute 5,000 instructions per second, a capability that made it a thousand times faster than the electromechanical calculators of the day. (An iPhone 6, by contrast, can zip through 25 billion instructions per second.)

When the Army declared ENIAC obsolete in 1955, however, the historic invention was treated with scant respect: its 40 panels, each of which weighed an average of 858 pounds, were divvied up and strewn about with little care. Some of the hardware landed in the hands of folks who appreciated its significance—the engineer Arthur Burks, for example, donated his panel to the University of Michigan, and the Smithsonian managed to snag a couple of panels for its collection, too. But as Libby Craft, Perot’s director of special projects, found out to her chagrin, much of ENIAC vanished into disorganized warehouses, a bit like the Ark of the Covenant at the end of Raiders of the Lost Ark.”
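The speed comparisons in the excerpt are easy to sanity-check. A quick back-of-the-envelope sketch, using only the figures quoted above (illustrative arithmetic, not from the article):

```python
# Throughput figures quoted in the Wired excerpt above.
eniac_ips = 5_000              # ENIAC: instructions per second
iphone6_ips = 25_000_000_000   # iPhone 6, per the article's comparison

# The article calls ENIAC "a thousand times faster" than the
# electromechanical calculators of its day, which implies:
electromechanical_ips = eniac_ips / 1_000  # ~5 instructions per second

print(f"ENIAC vs. electromechanical calculators: {eniac_ips / electromechanical_ips:,.0f}x")
print(f"iPhone 6 vs. ENIAC: {iphone6_ips / eniac_ips:,.0f}x")
```

So by the article’s own numbers, the phone in your pocket outruns ENIAC by a factor of five million.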


Outsourcing has meant sending work beyond borders, but in the next wave the word will mean sending work beyond humanity. It’s happening already, of course, and the fashion retailer Zara is just one example. From Amit Bagaria at India Times:

“At the other end of the spectrum is Zara, which has built its strategy around consumer trends, embracing the fast-changing tastes of its customers. Zara has developed a highly responsive supply chain that enables delivery of new fashions as soon as a trend emerges.

Zara comes up with 36,000 new designs every year, and it delivers new products as many as 2-6 times each week to its 1900+ stores around the world.  Store orders are delivered in 24-48 hours. It takes the company only 10-15 days to go from the design stage to the sales floor. How is Zara able to do this? By being fast and flexible.

Rather than subcontracting manufacturing to China, India or Bangladesh, Zara built 14 automated factories in its home country of Spain, where robots work 24/7 cutting and dyeing fabrics and creating semi-finished products, which are then finished into suits, shirts, dresses and the like by about 350 finishing shops in Northwestern Spain and Portugal.

Imagine the foresight: robots don’t (yet) form a labour union, and they don’t take weekends off. Some American apparel companies are now partly following the Zara model, getting their longer-lead-time goods manufactured (semi-finished) in Asia and doing the finishing work in the US.”


Neil Irwin of the “Upshot” blog at the New York Times suggests that American wage stagnation and the lag in hiring are being driven not by market conditions but by a mentality. An excerpt: 

“So any employer with a job opening should have no problem hiring. If anything, the ratio of openings to hiring should be lower than it was in the mid-2000s, not higher.

Here’s a theory to try to make sense of the disconnect: During the recession, employers got spoiled. When unemployment was near 10 percent, talented workers were lined up outside their door. The workers they did have were terrified of losing their jobs. If you put out word that you had an opening, you could fill the job almost instantly. That’s why the ratio of job openings to hires fell so low in 2009.

As the economy has gotten better the last five years, employers have had more and more job openings, but have been sorely reluctant to accept that it’s not 2009 anymore in terms of what workers they can hire and at what wage.

Yes, unemployment is still elevated, but workers aren’t in nearly as desperate a position as they were then. So to get the kind of talented people they want, employers are going to have to pay more (or offer better benefits or working conditions) than they would have not that long ago.”


Long before Silicon Valley, Victorians gave the future a name, recognizing electricity and, more broadly, technology as transformative, disruptive and decentralizing. We’re still borrowing from their lexicon and ideas, though we ought to be writing tomorrow’s narratives ourselves. From “Future Perfect,” Iwan Rhys Morus’ excellent Aeon essay:

“For the Victorians, the future, as terra incognita, was ripe for exploration (and colonisation). For someone like me – who grew up reading the science fiction of Robert Heinlein and watching Star Trek – this makes looking at how the Victorians imagined us today just as interesting as looking at the way our imagined futures work now. Just as they invented the future, the Victorians also invented the way we continue to talk about the future. Their prophets created stories about the world to come that blended technoscientific fact with fiction. When we listen to Elon Musk describing his hyperloop high-speed transportation system, or his plans to colonise Mars, we’re listening to a view of the future put together according to a Victorian rulebook. Built into this ‘futurism’ is the Victorian discovery that societies and their technologies evolve together: from this perspective, technology just is social progress.

The assumption was plainly shared by everyone around the table when, in November 1889, the Marquess of Salisbury, the Conservative prime minister of Great Britain, stood up at the Institution of Electrical Engineers’ annual dinner to deliver a speech. He set out a blueprint for an electrical future that pictured technological and social transformation hand in hand. He reminded his fellow banqueteers how the telegraph had already changed the world by working on ‘the moral and intellectual nature and action of mankind’. By making global communication immediate, the telegraph had made everyone part of the global power game. It had ‘assembled all mankind upon one great plane, where they can see everything that is done, and hear everything that is said, and judge of every policy that is pursued at the very moment those events take place’. Styling the telegraph as the great leveller was quite common among the Victorians, though it’s particularly interesting to see it echoed by a Tory prime minister.

Salisbury’s electrical future went further than that, though. He argued that the spread of electrical power systems would profoundly transform the way people lived and worked, just as massive urbanisation was the result of steam technology.”


The future usually arrives wearing the clothes of the past, but occasionally we truly and seriously experience the shock of the new. On that topic: The 1965 Life magazine piece “Will Man Direct His Own Evolution?” is a fun but extremely overwrought essay by Albert Rosenfeld about the nature of identity in a time when humans would be made by design, composed of temporary parts. Like a lot of things written in the ’60s about science and society, it’s informed by an undercurrent of anxiety about the changes beginning to affect the nuclear family. An excerpt:

“Even you and I–in 1965, already here and beyond the reach of potential modification–could live to face curious and unfamiliar problems in identity as a result of man’s increasing ability to control his own mortality after birth. As organ transplants and artificial body parts become even more available it is not totally absurd to envision any one of us walking around one day with, say, a plastic cornea, a few metal bones and Dacron arteries, with donated glands, kidney and liver from some other person, from an animal, from an organ bank, or even an assembly line, with an artificial heart, and computerized electronic devices to substitute for muscular, neural or metabolic functions that may have gone wrong. It has been suggested–though it will almost certainly not happen in our lifetime–that brains, too, might be replaceable, either by a brain transplanted from someone else, by a new one grown in tissue culture, or an electronic or mechanical one of some sort. ‘What,’ asks Dr. Lederberg, ‘is the moral, legal or psychiatric identity of an artificial chimera?’

Dr. Seymour Kety, an outstanding psychiatric authority now with the National Institute of Health, points out that fairly radical personality changes already have been wrought by existing techniques like brainwashing, electroshock therapy and prefrontal lobotomy, without raising serious questions of identity. But would it be the same if alien parts and substances were substituted for the person’s own, resulting in a new biochemistry and a new personality with new tastes, new talents, new political views–perhaps even a different memory of different experiences? Might such a man’s wife decide she no longer recognized him as her husband and that he was, in fact, not? Or might he decide that his old home, job and family situation were not to his liking and feel free to chuck the whole setup that had been quite congenial to the old person?

Not that acute problems of identity need await the day when wholesale replacement of vital organs is a reality. Very small changes in the brain could result in astounding metamorphoses. Scientists who specialize in the electrical probing of the human brain have, in the past few years, been exploring a small segment of the brain’s limbic system called the amygdala–and discovering that it is the seat of many of our basic passions and drives, including the drives that lead to uncontrollable sexual extremes such as satyriasis and nymphomania. 

Suppose, at a time that may be surprisingly near at hand, the police were to trap Mr. X, a vicious rapist whose crimes had terrorized the women of a neighborhood for months. Instead of packing him off to jail, they send him in for brain surgery. The surgeon delicately readjusts the distorted amygdala, and the patient turns into a gentle soul with a sweet, loving disposition. He is clearly a stranger to the man who was wheeled into the operating room. Is he the same man, really? Is he responsible for the crimes that he–or that other person–committed? Can he be punished? Should he go free?

As time goes on, it may be necessary to declare, without the occurrence of death, that Mr. X has ceased to exist and that Mr. Y has begun to be. This would be a metaphorical kind of death and rebirth, but quite real psychologically–and thus, perhaps, legally.”

Like much of the pre-Internet recording-industry infrastructure, the Columbia House Music Club, an erstwhile popular method of bulk-purchasing songs through snail mail, no longer exists, having departed this world before iTunes’ unfeeling gaze, as blank and pitiless as the sun. Your penny or paper dollar will no longer secure you a dozen records or tapes, nor do you have to experience the buyer’s remorse of one who reflexively purchases media without heeding the fine print which reveals that the relationship, as the Carpenters would say, had only just begun.

Of course, music pilfering didn’t start in our digital times with Napster, and Columbia was a prime target for those who loved to game a system. Via the excellent Longreads, here’s the opening of “The Rise and Fall of the Columbia House Record Club — and How We Learned to Steal Music,” a 2011 Phoenix article by Daniel Brockman and Jason W. Smith:

“On June 29, 2011, the last remnant of what was once Columbia House — the mightiest mail-order record club company that ever existed — quietly shuttered for good. Other defunct facets of the 20th-century music business have been properly eulogized, but it seems that nary a tear was shed for the record club. Perhaps no one noticed its demise. After all, by the end, Columbia House was no longer Columbia House; it had folded into its main competitor and become an online-only entity years before.

A more likely explanation, though, is that a new generation of music fans who had never known a world without the Internet couldn’t grasp the marvel that was the record club in its heyday. From roughly 1955 until 2000, getting music for free meant taping a penny to a paper card and mailing it off for 12 free records — along with membership and the promise of future purchasing.

The allure of the record club was simple: you put almost nothing down, signed a simple piece of paper, picked out some records, and voila! — a stack of vinyl arrived at your doorstep. By 1963, Columbia House was the flagship of the record-club armada, with 24 million records shipped. By 1994, they had shipped more than a billion records, accounted for 15 percent of all CD sales, and had become a $500-million-a-year behemoth that employed thousands at its Terre Haute, Indiana, manufacturing and shipping facility.

Of course, most of the record clubs’ two million customers failed to read the fine print, obligating them to purchase a certain number of monthly selections at exorbitant prices and even more exorbitant shipping costs. At the same time, consumers plotted to sign up multiple accounts under assumed names, in order to keep getting those 12-for-a-penny deals as often as possible. Record clubs may have introduced several generations of America’s youth to the concept of collection agencies — and the concept of stealing music, decades before the advent of the Internet.”


In 1958, Disney played large-scale urban planner, imagining the world as an interconnected mototopia. Cantilevered skyways, transcontinental motorways and highway escalators, anyone? Nothing so fantastical was necessary, but we should have retrofitted highways and roads to be smarter, cleaner and safer long before driverless cars were even in the conversation; we never had the ingenuity or political will to do so.

That last 5% of perfecting autonomous vehicles may be more difficult than the first 95%, and driverless options will likely continue to be introduced incrementally rather than all at once, but if such a system is 100% realized, there will be all manner of ramifications. In a post on his blog, Brad Templeton, a consultant to Google’s driverless-car project, looks at the possible outcomes in such a brave new world. An excerpt:

“When I talk about robocars, I often get quite opposite reactions:

  • Americans, in particular, will never give up car ownership! You can pry the bent steering wheel from my cold, dead hands.
  • I can’t see why anybody would own a car if there were fast robotaxi service!
  • Surely human drivers will be banned from the roads before too long.

I predict neither extreme will be true. I predict the market will offer all options to the public, and several options will be very popular. I am not even sure which will be the most popular.

  1. Many people will stick to buying and driving classic, manually driven cars. The newer versions of these cars will have fancy ADAS systems that make them much harder to crash, and their accident levels will be lower.
  2. Many will buy a robocar for their near-exclusive use. It will park near where it drops them off and always be ready. It will keep their stuff in the trunk.
  3. People who live and work in an area with robotaxi service will give up car ownership, and hire for all their needs, using a wide variety of vehicles.
  4. Some people will purchase a robocar mostly for their use, but will hire it out when they know they are not likely to use it, allowing them to own a better car. They will make rarer use of robotaxi services to cover specialty trips or those times when they hired it out and ended up needing it. Their stuff will stay in a special locker in the car.”


An impediment to automation may be “robotic” humans willing to work for wages so low that it’s not cost-effective to replace them. From Matt King’s peek inside a sprawling distribution center, in the Atlantic:

“Susan and her co-workers appeared in good spirits as the manager introduced them by name and told us how long they had been working at the company. About half of the workers had a mental or physical disability, a result of the company’s ‘inclusion’ program which mirrored similar efforts at other major retailers. In a news segment about a DC in South Carolina, one disabled worker said hers was ‘the coolest job in the world.’

These programs are viewed as leading examples of combined corporate and social success, but that success may be short-sighted. Pickers and low-skill jobs of the sort represent a pain point for DCs and the e-commerce executives who are managing their evolution. The jobs appear simple (one Amazon executive referred to the workers as like ‘robots in human form’), but the tasks are difficult to automate at scale: ‘Because products vary so much in size and shape and because of the way they sit on shelves, robotic manipulators still can’t beat real arms and hands,’ explains Erico Guizzo on Spectrum, the blog for the Institute of Electrical and Electronics Engineers (IEEE).

Unlike Susan and her co-workers, who were salaried and long-time employees of the company, a growing number of ‘pickers’ at DCs across the country are hired through staffing agencies and classified as ‘non-permanent’ or ‘temporary.’ This means no health care coverage or benefits, pay that’s usually barely above the minimum wage, and employment that can be voided at a whim when the workers are no longer needed.

This tenuous labor arrangement is partly the result of an honest fluctuation in the demand for these jobs: The biggest influx of DC workers occurs just before the holiday season, when online retailers conduct a majority of their annual business. But like retail jobs, the arrangement is also an acknowledgement of the underlying economic reality: The jobs are utterly low-skill, and there exists a large supply of unemployed Americans willing to do the work.

‘In a way, because low-wage jobs are so cheap, we haven’t seen as much automation as you could,’ Joseph Foudy, a professor of economics at NYU’s Stern School of Business, told me.”


We’ve been able to feed millions of images into social networks for “free,” armies of servers our seeming supplicants, but with facial-recognition software coming of age, the bill is nearly due. Will the surprising acceptance of surveillance online translate to the physical world? From Paul Rubens at the BBC:

“Imagine walking into a shop you’ve never been in before, to be greeted by name by a sales assistant you don’t know.

If you’re David Beckham or Lily Allen you may be used to this type of VIP treatment, but if your fame is more limited, or even non-existent then you might find this attention rather disconcerting.

Despite this, thanks to facial recognition software you don’t need to be a celebrity for sales assistants to know your name the moment you enter a shop.

That’s because companies such as Japanese technology giant NEC and FaceFirst, a California-based company, offer systems that use cameras placed at the entrances to shops to identify people as they come in.

If your face fits

When important existing or potential customers are spotted, a text message can be sent to appropriate sales staff to ensure they provide personal attention.

‘Someone could approach you and give you a cappuccino when you arrive, and then show you the things they think you will be interested in buying,’ says Joel Rosenkrantz, FaceFirst’s chief executive.

Before a system such as FaceFirst’s can be put into operation, it has to be loaded up with photos. So an obvious question to ask is where would they come from?”


It wasn’t a commercial success like the organ named for him, but Laurens Hammond’s “Teleview” projection system was a critical triumph in early 3D films. The set-up was installed in Manhattan’s Selwyn Theater in the early 1920s, and moviegoers were treated to screenings of The Man From Mars, a stereoscopic film made especially for Teleview, which was shown on a large screen and on individual viewing devices attached to each seat. It apparently looked pretty great. Alas, the equipment and installation were costly, and no other cinemas adopted the technology. An article about the apparatus follows, from the December 17, 1922 Brooklyn Daily Eagle.


I’ll use the graph below, from a post by Andrew Sullivan at the Dish, as possible proof of my contention that although police body cameras may not instantly bring about a higher degree of justice, the images will affect public consciousness, which may in turn be brought to bear on race and policing.


Andrew McAfee, co-author with Erik Brynjolfsson of The Second Machine Age, believes that Weak AI will destabilize employment for decades, but he doesn’t think species-threatening Artificial Intelligence is just around the bend. From his most recent Financial Times blog post:

“AI does appear to be taking off: after decades of achingly slow progress, computers have in the past few years demonstrated superhuman ability, from recognising street signs in pictures and diagnosing cancer to discerning human emotions and playing video games. So how far off is the demon?

In all probability, a long, long way away; so long, in fact, that the current alarmism is at best needless and at worst counterproductive. To see why this is, an analogy to biology is helpful.

It was clear for a long time that important characteristics of living things (everything from the colour of pea plant flowers to the speed of racehorses) were passed down from parents to their offspring, and that selective breeding could shape these characteristics. Biologists hypothesised that units labelled ‘genes’ were the agents of this inheritance, but no one knew what genes looked like or how they operated. This mystery was solved in 1953 when James Watson and Francis Crick published their paper describing the double helix structure of the DNA molecule. This discovery shifted biology, giving scientists almost infinitely greater clarity about which questions to ask and which lines of inquiry to pursue.

The field of AI is at least one ‘Watson and Crick moment’ away from being able to create a full artificial mind (in other words, an entity that does everything our brain does). As the neuroscientist Gary Marcus explains: ‘We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.’ We also do not have any clear idea how a human child is able to know so much about the world — that is a cat, that is a chair — after being exposed to so few examples. We do not know exactly what common sense is, and it is fiendishly hard to reduce to a set of rules or logical statements. The list goes on and on, to the point that it feels like we are many Watson and Crick moments away from anything we need to worry about.”


China has quietly surpassed the U.S. this year as the world’s largest economic power, and that’s not a situation likely to reverse itself anytime soon, even if that nation should suffer a large-scale financial downturn. But what is the significance of America being number two? From Joseph Stiglitz at Vanity Fair:

“Now China is the world’s No. 1 economic power. Why should we care? On one level, we actually shouldn’t. The world economy is not a zero-sum game, where China’s growth must necessarily come at the expense of ours. In fact, its growth is complementary to ours. If it grows faster, it will buy more of our goods, and we will prosper. There has always, to be sure, been a little hype in such claims—just ask workers who have lost their manufacturing jobs to China. But that reality has as much to do with our own economic policies at home as it does with the rise of some other country.

On another level, the emergence of China into the top spot matters a great deal, and we need to be aware of the implications.

First, as noted, America’s real strength lies in its soft power—the example it provides to others and the influence of its ideas, including ideas about economic and political life. The rise of China to No. 1 brings new prominence to that country’s political and economic model—and to its own forms of soft power. The rise of China also shines a harsh spotlight on the American model. That model has not been delivering for large portions of its own population. The typical American family is worse off than it was a quarter-century ago, adjusted for inflation; the proportion of people in poverty has increased. China, too, is marked by high levels of inequality, but its economy has been doing some good for most of its citizens. China moved some 500 million people out of poverty during the same period that saw America’s middle class enter a period of stagnation. An economic model that doesn’t serve a majority of its citizens is not going to provide a role model for others to emulate. America should see the rise of China as a wake-up call to put our own house in order.

Second, if we ponder the rise of China and then take actions based on the idea that the world economy is indeed a zero-sum game—and that we therefore need to boost our share and reduce China’s—we will erode our soft power even further. This would be exactly the wrong kind of wake-up call. If we see China’s gains as coming at our expense, we will strive for ‘containment,’ taking steps designed to limit China’s influence. These actions will ultimately prove futile, but will nonetheless undermine confidence in the U.S. and its position of leadership. U.S. foreign policy has repeatedly fallen into this trap.”


The whole world is a city, or becoming one, we’ve been told repeatedly, but a new Economist report pushes back at the idea, arguing that China, India and Brazil, three ascendant powers, are embracing the sprawl. Measures must be taken to ensure that the environmental costs of non-density are minimized. The opening:

“IN THE West, suburbs could hardly be less fashionable. Singers and film-makers lampoon them as the haunts of bored teenagers and desperate housewives. Ferguson, Missouri, torched by its residents following the police shooting of an unarmed black teenager, epitomises the failure of many American suburbs. Mayors like boasting about their downtown trams or metrosexual loft dwellers, not their suburbs.

But the planet as a whole is fast becoming suburban. In the emerging world almost every metropolis is growing in size faster than in population. Having bought their Gucci handbags and Volkswagens, the new Asian middle class is buying living space, resulting in colossal sprawl. Many of the new suburbs are high-rise, though still car-oriented; others are straight clones of American suburbs (take a look at Orange County, outside Beijing). What should governments do about it?

The space race

Until a decade or two ago, the centres of many Western cities were emptying while their edges were spreading. This was not for the reasons normally cited. Neither the car nor the motorway caused suburban sprawl, although they sped it up: cities were spreading before either came along. Nor was the flight to the suburbs caused by racism. Whites fled inner-city neighbourhoods that were becoming black, but they also fled ones that were not. Planning and zoning rules encouraged sprawl, as did tax breaks for home ownership—but cities spread regardless of these. The real cause was mass affluence. As people grew richer, they demanded more privacy and space. Only a few could afford that in city centres; the rest moved out.

The same process is now occurring in the developing world, but much more quickly.”
