Excerpts


There’s nothing the hideous hotelier Donald Trump has said or done that’s outside the modern Republican Party playbook. He just refuses to use soft, coded language to sell his extremism, which has been the hallmark of the GOP since the rise of Newt Gingrich. So when he disgustingly labels President Obama (“Kenyan”) or Mexicans (“rapists”) or John McCain (“captured”), he’s just boiled down the mindset of conservatives to its essence. And if you think the McCain remarks were somehow out of character for Republicans, see how John Kerry or Tammy Duckworth feel about that.

The positive response to Trump among many registered Republicans is really no surprise. The more radical band, from Christian conservatives to the Tea Party, is weary of being used by the Karl Roves of the world to consolidate power. They’re angry and they want someone who’ll speak to that anger. The racist Birther buffoon may fall from the penthouse, but his followers–both a torch-carrying mob and the Frankenstein monster the party has created–will still be there. They can’t be controlled, and that’s the logical outcome for the GOP, which has spent decades cultivating anger over threatened privilege.

From Edward Luce in the Financial Times:

Any moment now, the most buffoonish, prejudiced, egomaniacal candidate in recent Republican history will implode. All that will be left to remind us of our brief spell of folly will be those neon signs flashing “Trump” on skyscrapers and casinos around the country. At that point, thank goodness, we can resume politics as usual.

Alas, the cognoscenti are kidding themselves. US politics will not pick up where it left off. Even if Mr Trump becomes the first recorded human to be lifted skywards in rapture — the “end of days” scenario to which some of his fans subscribe — he will leave a visible mark on the Republican party.

In a field of 16 candidates, when one polls a quarter of the vote it is the equivalent of a landslide. Mr Trump’s detractors, who form arguably one of the largest bipartisan coalitions in memory, comfort themselves that he is simply on an ego trip that will turn sour. That may be true. But they are missing the point. The legions of Republicans flocking to Mr Trump’s banner are not going anywhere. If he crashes, which he eventually must, they will find another champion.


If Brad Darrach hadn’t also profiled Bobby Fischer, the autonomous robot named Shakey would have been the most famous malfunctioning machine he ever wrote about. 

John Markoff’s Harper’s piece I posted about earlier made mention of Shakey, the so-called “first electronic person,” which struggled to take baby steps on its own during the 1960s at the Stanford Research Institute. The machine’s intelligence was glacial, and much to the chagrin of its creators, it did not show rapid progress in the years that followed, as Moore’s Law forgot to do its magic.

Although Darrach’s 1970 Life piece mostly focuses on the Palo Alto area, it ventured to the other coast to allegedly record this extravagantly wrong prediction from MIT genius Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.” The thing is, Minsky immediately and vehemently denied the quote, and since other parts of the piece’s veracity were also questioned, I believe his disavowal.

The Life article (which misspelled the robot’s name as “Shaky”), for all its many flaws and ethical lapses, did sagely acknowledge the potential for a post-work world and the advent of superintelligence, and the challenges those developments might bring. The opening:

It looked at first glance like a Good Humor wagon sadly in need of a spring paint job. But instead of a tinkly little bell on top of its box-shaped body there was this big mechanical whangdoodle that came rearing up, full of lenses and cables, like a junk sculpture gargoyle.

“Meet Shaky,” said the young scientist who was showing me through the Stanford Research Institute. “The first electronic person.”

I looked for a twinkle in the scientist’s eye. There wasn’t any. Sober as an equation, he sat down at an input terminal and typed out a terse instruction which was fed into Shaky’s “brain,” a computer set up in a nearby room: PUSH THE BLOCK OFF THE PLATFORM.

Something inside Shaky began to hum. A large glass prism shaped like a thick slice of pie and set in the middle of what passed for his face spun faster and faster till it dissolved into a glare, then his superstructure made a slow 360-degree turn and his face leaned forward and seemed to be staring at the floor. As the hum rose to a whir, Shaky rolled slowly out of the room, rotated his superstructure again and turned left down the corridor at about four miles an hour, still staring at the floor.

“Guides himself by watching the baseboards,” the scientist explained as he hurried to keep up. At every open door Shaky stopped, turned his head, inspected the room, turned away and idled on to the next open door. In the fourth room he saw what he was looking for: a platform one foot high and eight feet long with a large wooden block sitting on it. He went in, then stopped short in the middle of the room and stared for about five seconds at the platform. I stared at it too.

“He’ll never make it,” I found myself thinking. “His wheels are too small.” All at once I got gooseflesh. “Shaky,” I realized, “is thinking the same thing I am thinking!”

Shaky was also thinking faster. He rotated his head slowly till his eye came to rest on a wide shallow ramp that was lying on the floor on the other side of the room. Whirring briskly, he crossed to the ramp, semicircled it and then pushed it straight across the floor till the high end of the ramp hit the platform. Rolling back a few feet, he cased the situation again and discovered that only one corner of the ramp was touching the platform. Rolling quickly to the far side of the ramp, he nudged it till the gap closed. Then he swung around, charged up the slope, located the block and gently pushed it off the platform.

Compared to the glamorous electronic elves who trundle across television screens, Shaky may not seem like much. No death-ray eyes, no secret transistorized lust for nubile lab technicians. But in fact he is a historic achievement. The task I saw him perform would tax the talents of a lively 4-year-old child, and the men who over the last two years have headed up the Shaky project—Charles Rosen, Nils Nilsson and Bert Raphael—say he is capable of far more sophisticated routines. Armed with the right devices and programmed in advance with basic instructions, Shaky could travel about the moon for months at a time and, without a single beep of direction from the earth, could gather rocks, drill cores, make surveys and photographs and even decide to lay plank bridges over crevices he had made up his mind to cross.

The center of all this intricate activity is Shaky’s “brain,” a remarkably programmed computer with a capacity of more than 1 million “bits” of information. In defiance of the soothing conventional view that the computer is just a glorified abacus that cannot possibly challenge the human monopoly of reason, Shaky’s brain demonstrates that machines can think. Variously defined, thinking includes such processes as “exercising the powers of judgment” and “reflecting for the purpose of reaching a conclusion.” In some of these respects—among them powers of recall and mathematical agility—Shaky’s brain can think better than the human mind.

Marvin Minsky of MIT’s Project MAC, a 42-year-old polymath who has made major contributions to Artificial Intelligence, recently told me with quiet certitude, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”

I had to smile at my instant credulity—the nervous sort of smile that comes when you realize you’ve been taken in by a clever piece of science fiction. When I checked Minsky’s prophecy with other people working on Artificial Intelligence, however, many of them said that Minsky’s timetable might be somewhat wishful—“give us 15 years,” was a common remark—but all agreed that there would be such a machine and that it could precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in science, education and the arts. At the same time a number of computer scientists fear that the godsend may become a Golem. “Man’s limited mind,” says Minsky, “may not be able to control such immense mentalities.”•


Cloned sheep once gave humans nightmares, but the science has quietly insinuated itself into the world of polo, thanks to star Adolfo Cambiaso, who impetuously saved cells from his best stallion, Aiken Cura, as the horse was being prepared for euthanasia, its leg ruined. There are now dozens of cloned versions of champion polo ponies, some of whom are competing on the field of play. 

In a really smart Vanity Fair article, Haley Cohen explores how cloning science, which dates back to sea-urchin experiments in 1885, came to the sport of mounts and mallets. Oddly, it involves Imelda Marcos. And, yes, there is discussion about using the same methods to clone humans. An excerpt:

As the pair made their way toward Cambiaso’s stabling area, the exhausted Aiken Cura’s front left leg suddenly gave out. When Cambiaso felt the horse begin to limp beneath him, he leapt out of his saddle and threw his blue-and-white helmet to the ground in anguish.

“Save this one whatever it takes!” he pleaded, covering his face with his gloves. But the leg had to be amputated below the knee, and eventually Cambiaso—whose team won the Palermo Open that year and would go on to win the tournament another five times—was forced to euthanize his beloved Cura.

Before he said his final good-bye, however, he had a curious request: he asked a veterinarian to make a small puncture in the stallion’s neck, put the resulting skin sample into a deep freeze, and store it in a Buenos Aires laboratory. He remembers, “I just thought maybe, someday, I could do something with the cells.”

His hope was not in vain. With the saved skin sample, Cambiaso was able to use cloning technology to bring Aiken Cura back to life. These days, a four-year-old, identical replica of Cambiaso’s star stallion—called Aiken Cura E01—cavorts around a flower-rimmed field in the Argentinean province of Córdoba, where he has begun to breed and train for competition.

Now 40 years old, Cambiaso is ruggedly handsome, with long brown hair, covetable bone structure, and permanent stubble. But in spite of his athleticism, good looks, and wealth, he is surprisingly shy. Walking across the Palermo polo field, where he’s come to watch his oldest daughter play, he speaks in short spurts, as if he would rather not be talking to a stranger. Staring into the distance, he says, “Today, seeing these clones is more normal for me. But seeing Cura alive again after so many years was really strange. It’s still strange. Thank goodness I saved his cells.”•

 


Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to utilize computing power to augment human intelligence and those who hope to create actual intelligent machines that have no particular stake in the condition of carbon-based life. 

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•


The Internet of Things has the potential for more good and bad than the regular Internet because it helps bring quantification and chaos into the physical world. The largest experiment in anarchy in history will be unloosed in the 3D world, inside our homes and cars and bodies, and sensors will, for better or worse, measure everything. That would be enough of a challenge, but there’s also the specter of hackers and viruses.

A small piece from the new Economist report about IoT security concerns:

Modern cars are becoming like computers with wheels. Diabetics wear computerised insulin pumps that can instantly relay their vital signs to their doctors. Smart thermostats learn their owners’ habits, and warm and chill houses accordingly. And all are connected to the internet, to the benefit of humanity.

But the original internet brought disbenefits, too, as people used it to spread viruses, worms and malware of all sorts. Suppose, sceptics now worry, cars were taken over and crashed deliberately, diabetic patients were murdered by having their pumps disabled remotely, or people were burgled by thieves who knew, from the pattern of their energy use, when they had left their houses empty. An insecure internet of things might bring dystopia.  

Networking opportunities

All this may sound improbably apocalyptic. But hackers and security researchers have already shown it is possible.•

An uncommonly thoughtful technology entrepreneur, Vivek Wadhwa doesn’t focus solely on the benefits of disruption but on its costs as well. He believes we’re headed for a jobless future and has debated the point with Marc Andreessen, who thinks such worries are so much needless hand-wringing.

Here’s the most important distinction: If time proves Wadhwa wrong, his due diligence in the matter will not have hurt anyone. But if Andreessen is incorrect, his carefree manner will seem particularly ugly.

No one need suggest we inhibit progress, but we better have political solutions ready should entrenched technological unemployment become the new normal. Somehow we’ll have to work our way through the dissonance of a largely free-market economy meeting a highly automated one.

In a new Washington Post piece on the topic, Wadhwa considers some solutions, including the Carlos Slim idea of a three-day workweek and the oft-suggested universal basic income. The opening:

“There are more net jobs in the world today than ever before, after hundreds of years of technological innovation and hundreds of years of people predicting the death of work.  The logic on this topic is crystal clear.  Because of that, the contrary view is necessarily religious in nature, and, as we all know, there’s no point in arguing about religion.”

These are the words of tech mogul Marc Andreessen, in an e-mail exchange with me on the effect of advancing technologies on employment. Andreessen steadfastly believes that the same exponential curve that is enabling creation of an era of abundance will create new jobs faster and more broadly than before, and calls my assertions that we are heading into a jobless future a luddite fallacy.

I wish he were right, but he isn’t. And it isn’t a religious debate; it’s a matter of public policy and preparedness. With the technology advances that are presently on the horizon, not only low-skilled jobs are at risk; so are the jobs of knowledge workers. Too much is happening too fast. It will shake up entire industries and eliminate professions. Some new jobs will surely be created, but they will be few. And we won’t be able to retrain the people who lose their jobs, because, as I said to Andreessen, you can train an Andreessen to drive a cab, but you can’t retrain a laid-off cab driver to become an Andreessen.  The jobs that will be created will require very specialized skills and higher levels of education — which most people don’t have.

I am optimistic about the future and know that technology will provide society with many benefits. I also realize that millions will face permanent unemployment.•


While it may not be good for our environment, industrialization has been good for our wallets. Transitioning from an agriculture-based economy to a manufacturing one allows a country to rapidly increase its wealth (and to contribute further wealth to other nations supplying the resources). In this century, China is, of course, the example writ large.

A post-industrial economy in which manufacturing is no longer as valuable would seem to be the new reality, and a Disney economy of service and entertainment isn’t very transferable. In a Bloomberg View column, Noah Smith attempts to figure out a way forward for nations that are playing catch-up in the Information Age. An excerpt:

The main engine of global growth since 2000 has been the rapid industrialization of China. By channeling the vast savings of its population into capital investment, and by rapidly absorbing technology from advanced countries, China was able to carry out the most stupendous modernization in history, moving hundreds of millions of farmers from rural areas to cities. That in turn powered the growth of resource-exporting countries such as Brazil, Russia and many developing nations that sold their oil, metals and other resources to the new workshop of the world. 

The problem is that China’s recent slowdown from 10 percent annual growth to about 7 percent is only the beginning. The recent drops in housing and stock prices are harbingers of a further economic moderation. That is inevitable, since no country can grow at a breakneck pace forever. And with the slowing of China, Brazil and Russia have been slowing as well — the heyday of the BRICs (Brazil, Russia, India and China) is over. 

But the really worrying question is: What if other nations can’t pick up the slack when China slows? What if China is the last country to follow the tried-and-true path of industrialization? 

There is really only one time-tested way for a country to get rich. It moves farmers to factories and imports foreign manufacturing technology. When you move surplus farmers to cities, their productivity soars — this is the so-called dual-sector model of economic development pioneered by economist W. Arthur Lewis. So far, no country has reached high levels of income by moving farmers to service jobs en masse. Which leads us to conclude that there is something unique about manufacturing.•

 


Reid Hoffman of LinkedIn published a post about what road transportation would be like if cars became driverless and communicated with one another autonomously.

There’ll certainly be benefits. If your networked car knows ahead of time that a certain roadway is blocked by weather conditions, your trip will be smoother. More important than mere convenience, of course, is that there would likely be far fewer accidents and less pollution.

The entrepreneur believes human-controlled vehicles will be restricted legally to specific areas where “antique” driving can be experienced. There’s no timeframe given for this legislation, but it seems very unlikely that it would occur anytime soon, nor does it seem particularly necessary since the nudge of high insurance rates will likely do the trick. 

Hoffman acknowledges some of the many problems that would attend such a scenario if it’s realized. In addition to non-stop surveillance by corporations and government and the potential for large-scale hacking, there’s skill fade to worry about. (I don’t think the latter concern will be precisely remedied by tooling down a virtual road via Oculus Rift, as Hoffman suggests for those pining for yesteryear.) I think the most interesting issues he conjures are advertisers paying car companies to direct traffic down a certain path to expose travelers to businesses or the best routes being conferred upon higher-paying customers. That would be the Net Neutrality argument relocated to the streets and highways.

It’s definitely worth reading. An excerpt:

Autonomous vehicles will also be able to share information with each other better than human drivers can, in both real-time situations and over time. Every car on the road will benefit from what every other car has learned. Driving will be a networked activity, with tighter feedback loops and a much greater ability to aggregate, analyze, and redistribute knowledge.

Today, as individual drivers compete for space, they often work against each other’s interests, sometimes obliviously, sometimes deliberately. In a world of networked driverless cars, driving retains the individualized flexibility that has always made automobility so attractive. But it also becomes a highly collaborative endeavor, with greater cooperation leading to greater efficiency. It’s not just steering wheels and rear-view mirrors that driverless cars render obsolete. You won’t need horns either. Or middle fingers.

Already, the car as network node is what drives apps like Waze, which uses smartphone GPS capabilities to crowd-source real-time traffic levels, road conditions, and even gas prices. But Waze still depends on humans to apprehend the information it generates. Autonomous vehicles, in contrast, will be able to generate, analyze, and act on information without human bottlenecks. And when thousands and then even millions of cars are connected in this way, new capabilities are going to emerge. The rate of innovation will accelerate – just as it did when we made the shift from standalone PCs to networked PCs.

So we as a society should be doing everything we can to reach this better future sooner rather than later, in ways that make the transition as smooth as possible. And that includes prohibiting human-driven cars in many contexts. On this particular road trip, the journey is not the reward. The destination is.•


Did you ever notice that futuristic metallic clothes don’t really ever arrive in stores near you? That’s because while they’re possible, perhaps even useful in some cases, they aren’t truly necessary.

In his 2010 book, What Technology Wants, Kevin Kelly wrote of the paths forward for the complexity of technology. Kelly believed the physical world around us would not change drastically in most ways, but that some technologies that grow amazingly complex would be retrofitted onto our more “primitive” world. I mostly agree, though I don’t think Kelly was correct to include automobiles in that category. They’ve since sped in the other direction.

The excerpt:

There are several different ways technology’s complexity can go:

Scenario #1. As in nature, the bulk of technology remains simple, basic, and primeval because it works. And the primitive works well as a foundation for the thin layer of complex technology built upon it. Because the technium is an ecosystem of technologies, most of it will remain in its equivalent microbial stage: brick, wood, hammers, copper wires, electric motors and so on. We could develop nanoscale computers that reproduced themselves, but they wouldn’t fit our fingers. For the most part, humans will deal with simple things (as we do now) and only interact with the dizzily more complex occasionally, just as we now do. (For most of our day our hands touch relatively coarse artifacts.) Cities and houses remain similar, populated with a veneer of fast-evolving gadgets and screens on every surface.

Scenario #2. Complexity, like all other factors in growing systems, plateaus at some point, and some other quality we had not noticed earlier (perhaps quantum entanglement) takes its place as the prime observable trend. In other words, complexity may simply be the lens we see the world through at this moment, the metaphor of the era, when in reality it is a reflection of us rather than property of evolution.

Scenario #3. There is no limit to how complex things can get. Everything is complexifying over time, headed toward that omega point of ultimate complexity. The bricks in our building will become smart, the spoon in our hand will adapt to our grip; cars will be as complicated as jets are today. The most complex things we use in a day will be beyond any single person’s comprehension.

If I had to, I would bet, perhaps surprisingly, on scenario #1. The bulk of technology will remain simple or semi-simple, while a smaller portion will continue to complexify greatly. I expect our cities and homes a thousand years hence to be recognizable, rather than unrecognizable.•


Engineering rather than organic growth has driven the reordering of much of modern Chinese life, and it’s at the crux of Beijing’s ongoing radical transformation into a megacity of 130 million people. Complicating matters are the absence of property taxes and the restriction that keeps local municipalities from retaining the other types of fees they collect, so basic infrastructure and services have often been lacking or altogether absent during the capital city’s massive makeover.

From Ian Johnson’s very interesting NYT report:

The planned megalopolis, a metropolitan area that would be about six times the size of New York’s, is meant to revamp northern China’s economy and become a laboratory for modern urban growth.

“The supercity is the vanguard of economic reform,” said Liu Gang, a professor at Nankai University in Tianjin who advises local governments on regional development. “It reflects the senior leadership’s views on the need for integration, innovation and environmental protection.”

The new region will link the research facilities and creative culture of Beijing with the economic muscle of the port city of Tianjin and the hinterlands of Hebei Province, forcing areas that have never cooperated to work together. …

Jing-Jin-Ji, as the region is called (“Jing” for Beijing, “Jin” for Tianjin and “Ji,” the traditional name for Hebei Province), is meant to help the area catch up to China’s more prosperous economic belts: the Yangtze River Delta around Shanghai and Nanjing in central China, and the Pearl River Delta around Guangzhou and Shenzhen in southern China.

But the new supercity is intended to be different in scope and conception. It would be spread over 82,000 square miles, about the size of Kansas, and hold a population larger than a third of the United States. And unlike metro areas that have grown up organically, Jing-Jin-Ji would be a very deliberate creation.•


Life to me is just about having a little fun and doing some good things for others before time runs out–and that’s what it’s doing, rapidly. So why would our comic-book culture depress me so? Clearly it’s fun for many people. It isn’t just because I’m not personally interested in the form. That’s true of many things that don’t make me sad.

Overall, I’m glad the “barbarians” have stormed the gates, pleased technology has allowed everyone in the audience to essentially be part of the show, as Glenn Gould long ago predicted it would. The economics aren’t good for many professionals, but I still vote for the mob. I have no problem with Kris Jenner being the new Joe Jackson and a big ass being the new moonwalk. It’s not nothing, just something different.

Still, sadness.

I guess what troubles me is that it’s all centered on consumerism. It’s not only about owning a product but becoming one. That’s true of people creating free content from their personal information for Facebook and citizens being considered brands and fans donning costumes of their favorite toys at conventions. We’ve run out of things to eat so now we’re eating ourselves. That’s what our mix of democracy and capitalism has led us to.

A.O. Scott of the New York Times went to Comic-Con in San Diego and saw himself when gawking at X-Men, Yodas and zombies. His resulting article is a brilliant summation of so many things in the culture, even if he’s not quite as somber as I am about this new normal. An excerpt:

For a long weekend in July, this city a few hours down the freeway from Hollywood and Disneyland becomes a pilgrimage site for something like 130,000 worshipers. It’s both ordeal and ecstasy, and the secular observer is in no real position to judge. You arrive as an ethnographer, evolve into a participant observer and start to feel like a convert, an addict to what is surely the modern-day opiate of the masses.

What are the doctrines and canons of this faith? In some ways, they aren’t so mysterious. The Comic-Con pilgrims, with their homemade costumes and branded bags of merchandise, represent the fundamentalist wing of the ecumenical creed of fandom. Almost everyone in the world outside falls somewhere on the spectrum of observance. We go to movies, we watch television, we build things out of Lego. I went to Comic-Con thinking I was going to study the folkways of an exotic tribe. I didn’t suspect I would find myself.

Literally where I found myself, for most of the four days, was in line. It’s the shared experience that unites the diverse subcultures, and the most available topic of conversation is just how long and how many those lines are. You could either figure out which line you wanted to join — would you rather be attacked by zombies or score swag from “The Peanuts Movie”? Cop an “exclusive” Marvel toy or a drawn-to-order sketch from the indie animator Bill Plympton? — or follow the herd. “What’s this line for?” is a question I heard most often from people who were already a dozen or more bodies into it.

In other eras and societies — the Great Depression, the Soviet Union — long lines signify scarcity or oppression. In the Bizarro World that is 21st-century America, it’s the opposite: Long lines are signs of abundance and hedonism. Much can be learned about a civilization from studying its queuing habits, and Comic-Con surpasses even the Disney theme parks in the sophistication of its crowd management and the variety of its arrangements.


One thing about America (and likely many other places) during the age of connectedness is that change happens at a greatly accelerated pace. In 2008, no Presidential candidate, including Barack Obama and Hillary Clinton, would dare support gay marriage, yet here we are less than a decade later with its shift to legal status complete. While there was certainly a long effort by LGBT groups to secure that victory, it’s hard to believe that final push wouldn’t have taken far longer in an earlier era. 

Ideas spread quickly now. Not even science fiction can really keep up with science. Often that will be wonderful and occasionally not. 

It’s the same for corporations as it is for cultural issues. Google realizes it won’t dominate search for decades, that its fortunes will be destabilized much more quickly than Microsoft’s or Hewlett-Packard’s were. That’s why Google X is so important. If Google doesn’t truly become the AI company that Larry Page initially envisioned in the next decade or so, it will be in its dotage, a young adult in a retirement community.

From Tim Harford’s latest FT column, concerning the Alchemist Fallacy:

While alchemists never figured out how to turn lead into gold, other craftsmen did develop a process with much the same economic implications. They worked out how to transform silica sand, one of the most common materials on earth, into the beautiful, versatile material we know as glass. It has an astonishing variety of uses from fibre optics to microscopes to aeroplane fuselages. But while gold remains highly prized, glass is now so cheap that we use it as disposable packaging for water.

When it was possible to restrict access to the secret of glassmaking, the guardians of that knowledge profited. Venetian glassmakers were clustered together on the island of Murano, where sparks from the furnaces would not endanger Venice itself. Venice had less success in preventing the secrets of glassmaking from spreading. Despite being forbidden on pain of death to leave the state of Venice, some of Murano’s glassmakers sought fortunes elsewhere. The wealth that could be earned as a glassmaking monopolist in some distant city must have been worth the risk.

That is the way of new ideas: they have a tendency to spread. Business partners will fall out and set up as rivals. Employees will leave to establish their own businesses. Time-honoured techniques such as industrial espionage or reverse engineering will be deployed. Sometimes innovators are happy to give their ideas away for nothing, whether for noble reasons or commercial ones. But it is very hard to stop ideas spreading entirely.•


In a Guardian piece, Paul Mason, author of the forthcoming Postcapitalism, argues that in the wake of the 2008 economic collapse, information technology is toppling capitalism in a way that a million marching Marxists never could, with the new normal unable to function by the dynamics of the old order.

I agree that a fresh system is incrementally forming–especially in regards to work and likely taxation–though it’s probably a heterogeneous one that won’t be absent free markets in the near term and perhaps the longer one as well. “Abundance” is a word used by a lot of people, including the author, in describing the future, but it may not be what they think it is. Food has been abundant for many decades and there have always been hungry, even starving, people.

Mason quotes Stewart Brand’s famous line “information wants to be free,” but let’s remember the whole quote: “Information wants to be free. Information also wants to be expensive. …That tension will not go away.”

At any rate, I’m with Mason in thinking we’re on the precipice of big changes wrought by the Internet and its many offshoots and can’t wait to read his book. An excerpt:

As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy. The biggest information product in the world – Wikipedia – is made by volunteers for free, abolishing the encyclopedia business and depriving the advertising industry of an estimated $3bn a year in revenue.

Almost unnoticed, in the niches and hollows of the market system, whole swaths of economic life are beginning to move to a different rhythm.•


Very much looking forward to the forthcoming book Machines of Loving Grace, an attempt by the New York Times journalist John Markoff to make sense of our automated future. 

In an Edge.org interview, Markoff argues that Moore’s Law has flattened out, perhaps for now or maybe for the long run, a slowdown that isn’t being acknowledged by technologists. Markoff still believes we’re headed for a highly automated future, one he senses will be slower to develop than expected. Those greatly worried about technological unemployment, the writer argues, are alarmists, since he thinks technology taking jobs is a necessity, the human population likely being unable in the future to keep pace with required production. Of course, he doesn’t have to be wrong by very much for great societal upheaval to occur and political solutions to be required.

From Markoff:

We’re at that stage, where our expectations have outrun the reality of the technology.

I’ve been thinking a lot about the current physical location of Silicon Valley. The Valley has moved. About a year ago, Richard Florida did a fascinating piece of analysis where he geo-located all the current venture capital investments. Once upon a time, the center of Silicon Valley was in Santa Clara. Now it’s moved fifty miles north, and the current center of Silicon Valley by current investment is at the foot of Potrero Hill in San Francisco. Living in San Francisco, you see that. Manufacturing, which is what Silicon Valley once was, has largely moved to Asia. Now it’s this marketing and design center. It’s a very different beast than it was.                                 

I’ve been thinking about Silicon Valley at a plateau, and maybe the end of the line. I just spent about three or four years reporting about robotics. I’ve been writing about it since 2004, even longer, when the first autonomous vehicle grand challenge happened. I watched the rapid acceleration in robotics. We’re at this point where over the last three or four years there’s been a growing debate in our society about the role of automation, largely forced by the falling cost of computing and sensors and the fact that there’s a new round of automation in society, particularly in American society. We’re now not only displacing blue-collar tasks, which has happened forever, but we’re replacing lawyers and doctors. We’re starting to nibble at the top of the pyramid.

I played a role in creating this new debate. The automation debate comes around in America at regular intervals. The last time it happened in America was during the 1960s and it ended prematurely because of the Vietnam War. There was this discussion and then the war swept away any discussion. Now it’s come back with a vengeance. I began writing articles about white-collar automation in 2010, 2011. 

There’s been a deluge of books such as The Rise of the Robots, The Second Machine Age, The Lights in the Tunnel, all saying that there will be no more jobs, that the automation is going to accelerate and by 2045 machines will be able to do everything that humans can do. I was at dinner with you a couple years ago and I was ranting about this to Danny Kahneman, the psychologist, particularly with respect to China, and making the argument that this new wave of manufacturing automation is coming to China. Kahneman said to me, “You just don’t get it.” And I said, “What?” And he said, “In China, the robots are going to come just in time.”

_____________________________

 

“All Watched Over
by Machines of Loving Grace”

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal brothers and sisters,
and all watched over
by machines of loving grace.


A deluge of data that assaults the senses doesn’t worry me so much. What’s more concerning is when those tubes carrying information to and from us are so quiet that you can barely hear a hum, when there are no tubes, when the system becomes seamless. It will happen, and it will seem normal.

From “We Are Data: The Future of Machine Intelligence,” Douglas Coupland’s latest Financial Times column (and one of his best):

What we’re discussing here is the creation of data pools that, until recently, have been extraordinarily difficult and expensive to gather. However, sooner rather than later, we’ll all be drowning in this sort of data. It will be collected voluntarily in large doses (using the Wonkr, Tinder or Grindr model) — or involuntarily or in passing through other kinds of data: your visit to a Seattle pot store; your donation to the SPCA; the turnstile you went through at a football match. Almost anything can be converted into data — or metadata — which can then be processed by machine intelligence. Quite accurately, you could say, data + machine intelligence = Artificial Intuition.

Artificial Intuition happens when a computer and its software look at data and analyse it using computation that mimics human intuition at the deepest levels: language, hierarchical thinking — even spiritual and religious thinking. The machines doing the thinking are deliberately designed to replicate human neural networks, and connected together form even larger artificial neural networks. It sounds scary . . . and maybe it is (or maybe it isn’t). But it’s happening now. In fact, it is accelerating at an astonishing clip, and it’s the true and definite and undeniable human future.•


In his Vice Motherboard article “Marriage Won’t Make Sense When Humans Live for 1,000 Years,” Transhumanist Party Presidential candidate Zoltan Istvan predicts traditional marriage will become obsolete if radical life extension is realized. Well, sure. In fact, reconsiderations of wedlock will occur without far longer lifespans, driven by much simpler technological and sociological changes.

Like many Transhumanists, Istvan is so ebullient about the topic that his timelines for progress are incredibly ambitious, unrealistically so. For instance: I’m willing to wager you won’t be leaving your small child at home with a robot nanny within 15 years.

From Istvan:

Social, financial, and religious pressures aside, the deeper philosophical question of the transhumanist age is: Are people really willing to marry for the rest of their lives when those lives may be hundreds or even thousands of years long? This is an especially pertinent question when it’s almost certain coming technology will allow us to radically change who we are in the near future, both physically and mentally.

In a world of indefinite lifespans, the marriage commitment takes on a whole new meaning and level of commitment.

America and many parts of the developed world are losing their religion, however, which certainly will contribute to less social pushing for matrimony. A recent Pew Research Center study found that many young people increasingly possess no religious leanings at all. In just a few decades’ time, if this statistical trajectory holds, younger generations may broadly prefer not to ever marry.

And who can argue with them? Within 15 years, some of the so-called classic advantages of marriage will be gone. Many people will have robot house nannies, driverless cars, and automated stoves that cook for us. In 20 years’ time, we may also use artificial wombs (ectogenesis) to grow babies, and use our own stem cells to provide genetic treatments to build the perfect child. A spouse will simply not be as necessary in the transhumanist age as it once was.•


Remember when Al-Qaeda was the face of modern terror? Ah, the good old days.

One result of the confluence of America’s disastrous war in Iraq and Syria’s destabilization has been the emergence of ISIS, with its beheadings and drownings, all captured with torture-porn film techniques and promoted via social-media campaigns. 

In a Spiegel piece, Christoph Reuter interviews the incarcerated ISIS member called Abu Abdullah, the rare member of the terrorist organization captured alive. Abdullah’s chore was to outfit suicide bombers in Baghdad with explosive accessories and coach them in their mission. Quiet time in solitary has not mitigated his madness. An excerpt:

Spiegel:

Did any of the men you accompanied have doubts about their mission?

Abu Abdullah:

No, then they would have failed to carry them out. They were prepared for their assignments for a long time. When they came to me, they were calm, sometimes even joyful. When they put on the belt they would say, for example, “Fits well!” Abu Mohsen Qasimi, a young Syrian, was still making jokes two minutes before his deployment, and then, when he drove off by himself, he bid a friendly farewell. With one young Saudi Arabian, I was wondering how we could inconspicuously change spots, because I was sitting behind the wheel at first. We pretended to have car trouble, both got out and then pushed the vehicle for a bit. Nobody noticed anything. We both laughed.

Spiegel:

You are blushing as you relate that story. Apparently these are pleasant memories. Would you do everything over again?

This is the only moment in the one-and-a-half hour conversation when Abu Abdullah flinches. He turns pale, as though he had been caught red-handed. Then he says that he cannot answer the question.•


I was reading a BBC article about lethal robots manufactured in South Korea, which are purchased to protect everything from pipelines to airports, and it made reference to a NYT piece I had all but forgotten about, Tim Weiner’s 2005 look at the Pentagon’s desire to robotize its forces. Autonomous soldiers haven’t yet insinuated themselves into our military in a pronounced way, but the research continues apace. Eventually technological capacity will meet desire. From Weiner:

The American military is working on a new generation of soldiers, far different from the army it has.

“They don’t get hungry,” said Gordon Johnson of the Joint Forces Command at the Pentagon. “They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”

The robot soldier is coming.

The Pentagon predicts that robots will be a major fighting force in the American military in less than a decade, hunting and killing enemies in combat. Robots are a crucial part of the Army’s effort to rebuild itself as a 21st-century fighting force, and a $127 billion project called Future Combat Systems is the biggest military contract in American history.

The military plans to invest tens of billions of dollars in automated armed forces. The costs of that transformation will help drive the Defense Department’s budget up almost 20 percent, from a requested $419.3 billion for next year to $502.3 billion in 2010, excluding the costs of war. The annual costs of buying new weapons is scheduled to rise 52 percent, from $78 billion to $118.6 billion.

Military planners say robot soldiers will think, see and react increasingly like humans. In the beginning, they will be remote-controlled, looking and acting like lethal toy trucks. As the technology develops, they may take many shapes. And as their intelligence grows, so will their autonomy.

The robot soldier has been a dream at the Pentagon for 30 years.•


It’s difficult to believe that airports, in one way or another, won’t always be a boondoggle, but Scott McCartney of the WSJ envisions a high-tech tomorrow in which commercial fliers will be doted on and waved through by sensors and robots, welcomed and directed via their smartphones and watches. It is likely that airports, like hotels, will have less use for human workers, with holograms perhaps in the intervening period, before the process is barely noticeable.

McCartney’s opening:

Like a good maître d’, the airport of the future will recognize you, greet you by name and know exactly where to put you.

Airports around the world are beginning to move in this direction. At London’s Gatwick Airport, beacons identify you by your smartphone and give GPS-like directions to your gate, pointing out food or shopping along the way. In Germany, robots at Düsseldorf’s airport park your car and return it curbside after you land, linking your itinerary to your license plate. Researchers are developing robots that will be able to check your bags and deliver them within minutes of landing.

Facial-recognition systems speed you through passport control in places including Dulles International Airport near Washington, D.C. Some airports use facial-recognition systems to track your movements around terminals. Gates in some airports are automated with doors that flash open like a subway turnstile when you scan your boarding pass or flash your smartwatch.

At the airport of the future, directional signs will be only for backup. Check-in kiosks will be tucked in a corner. Human agents may be even more unnecessary.•

__________________________

Braniff’s airport of the future, 1975.


I’ve already posted about Johann-Dietrich Woerner, the new Director General of the European Space Agency, who wants to colonize the moon and use that perch to further explore space. Of all the settlement schemes floating around the stratosphere right now, this one seems to me to be the best and most pragmatic. If we’re going to build colonies in space, the moon should probably be first. More from Richard Hollingham at the BBC:

There are good reasons, he says, for going back to the Moon for science as well as using it as a stepping-stone to further human exploration of the Solar System.

“The far side of the Moon is very interesting because we could have telescopes looking deep into the Universe, we could do lunar science on the Moon and the international aspect is very special,” he explains. “The Americans are looking to go to Mars very soon – and I don’t see how we can do that – before going to Mars we should test what we could do on Mars on the Moon.”

For example, Woerner suggests, the technology being investigated by Nasa to construct a Mars base using a giant 3D printer would be better tried out on the Moon first. Learning to live on an alien world is going to be tough – but the challenge would be a lot easier, particularly in an emergency, if the extraterrestrial community is only four days away from Earth rather than six months.

Woerner envisages his Moon village as a multinational settlement involving astronauts, Russian cosmonauts and maybe even Chinese taikonauts.•


Make a case for ridesharing as a means to greater convenience or to reduce pollution or to (potentially) disrupt racial profiling, but do not make one based on jobs. Uber has squeezed its drivers and made it clear it would love to be rid of them entirely. Uber is about Uber, not about Labor. 

Worse yet is making a case for Uber as a friend of workers by invoking the name of Eric Garner, the African-American man selling loose cigarettes who was choked to death in NYC by police in 2014, as Gerald Seabrooks, a Brooklyn bishop, did this week at a Harlem press event organized by Travis Kalanick’s outfit. Saying that Uber having its way in NYC could have prevented that tragedy is every bit as offensive and untrue as Kalanick’s use of military veterans as a prop for PR purposes.

From Kelly Weill at Capital New York:

Gerald Seabrooks, a Brooklyn-based bishop, said increased employment opportunities would be a boon to minority communities.

“If [Eric] Garner had a job, today he would be alive,” Seabrooks said. “We’re talking economics here. We’re talking jobs.”

But Uber’s reputation isn’t necessarily progressive. The company has come under fire for taking large commissions from drivers’ paychecks, and for fighting to classify drivers as contract workers, rather than employees entitled to benefits.

De Blasio’s own administration has also accused for-hire companies like Uber of prioritizing the wealthy over the working class.

“What it boils down to is this,” taxi commissioner Meera Joshi said in June. “At some point, I strongly believe the city needs to step in and make sure that there is a balance between those of us who choose instant gratification and convenience of travel with private vehicles and the much larger group who cannot afford private car service.”•


In addition to being among the best novels ever written in English, Lolita, Vladimir Nabokov’s story of monstrous love, is, shockingly, the Great American Novel, which at first blush seems absurd. How did a newcomer, who had just begun experiencing the country, process so much so soon, so that he could write a work that was of us yet was also able to brutally satirize us? Perhaps it took an immigrant with wide eyes to truly see our immigrant nation.

From John Colapinto in the New Yorker:

Lolita was not, however, Nabokov’s first attempt to write a story about a pedophile who, enamored of a particular twelve-year-old girl, marries her mother to be closer to his love object—and who finds the girl in his clutches after the mother’s untimely death. His first attempt, a short novella called The Enchanter, was written in Russian shortly before his move to America. That novella, published posthumously, in 1986, by Vera and Dmitri Nabokov, shows just how important the atmosphere of America was to making Lolita the great work it is. Where The Enchanter is curiously dour, featureless, and vague, Lolita is a great, rollicking encyclopedia teeming with specific details of Nabokov’s adoptive country, sweeping into its embrace the entire American geography, from East to West, North to South, in Humbert’s zig-zagging car journeys with his under-aged sex slave (journeys that follow the same route as the decidedly more sedate butterfly-hunting trips that Nabokov made each summer with his wife).

Much of the novel’s energy derives from the love-hate relationship Nabokov had with America’s postwar culture of crap TV shows, bad westerns, squawking jukeboxes—the invigorating trash that informs the story of a cultured European’s sexual obsession with an American bobby-soxer who is, as Humbert calls her, the “ideal consumer, the subject and object of every foul poster.” Nabokov always refused the label of satirist, and it would be an oversimplification to say that Lolita merely skewers the materialism of fifties America; throughout the book, there is a sense of hypnotized wonder and delight at the happy consumerism of the country and its inhabitants, and Nabokov took overt joy at clipping and cataloguing examples of that consumerism, which he carefully worked into the very texture of Lolita.


Tags: ,

Gene Shalit, who once hiccupped and broke his mustache, was apparently busy “producing” articles for Look magazine before he became famous for saying words about movies. In an interesting 1966 piece, “boy…girl…computer,” he writes about punchcard dating invading Harvard and other campuses in those happier times before Mark Zuckerberg was born. (Canadians had experimented with computer dating a decade earlier.) The opening of Shalit’s New Journalism stylings for the long-defunct title:

Out of computers, faster than the eye can blink, fly letters stacked with names of college guys and girls–taped, scanned, checked and matched. Into the mails speed the compatible pairs, into P.O. boxes at schools across the land. Eager boys grab their phones… anxious coeds wait in dorms … a thousand burrrrrrrings jar the air . . . snow-job conversations start, and yeses are exchanged: A nationwild dating spree is on. Thousands of boys and girls who’ve never met plan weekends together, for now that punch-card dating’s here, can flings be far behind? And oh, it’s so right, baby. The Great God Computer has sent the word. Fate. Destiny. Go-go-go. Call it dating, call it mating, it flashed out of the minds of Jeff Tarr (left) and Vaughn Morrill, Harvard undergraduates who plotted Operation Match, the dig-it dating system that ties up college couples with magnetic tape. The match mystique is here: In just nine months, some 100,000 collegians paid more than $300,000 to Match (and to its MIT foe, Contact) for the names of at least five compatible dates. Does it work? Nikos Tsinikas, a Yale senior, spent a New Haven weekend with his computer-Matched date, Nancy Schreiber, an English major at Smith. Result, as long date’s journey brightened into night: a bull’s-eye for cupid’s computer.

“How come you’re still single? Don’t you know any nice computers?”

Perhaps no mother has yet said that to her daughter, but don’t bet it won’t happen, because Big Matchmaker is watching you. From Boston to Berkeley, computer dates are sweeping the campus, replacing old-fashioned boy-meets-girl devices; punch bowls are out, punch cards are in.

The boys who put data in dating are Jeff Tarr and Vaughn Morrill, Harvard undergraduates. At school last winter, they and several other juniors–“long on ingenuity but short on ingenues”–devised a computer process to match boys with girls of similar characteristics. They formed a corporation (Morrill soon sold out to Tarr), called the scheme Operation Match, flooded nearby schools with personality questionnaires to be filled out, and waited for the response.

They didn’t wait long: 8,000 answer sheets piled in, each accompanied by the three-dollar fee. Of every 100 applicants, 52 were girls. Clearly, the lads weren’t the only lonely collegians in New England. As dates were made, much of the loneliness vanished, for many found that their dates were indeed compatible. Through a complex system of two-way matching, the computer does not pair a boy with his ‘ideal’ girl unless he is also the girl’s ‘ideal’ boy. Students were so enthusiastic about this cross-check that they not only answered the 135 questions (Examples: Is extensive sexual activity [in] preparation for marriage, part of “growing up?” Do you believe in a God who answers prayer?), they even added comments and special instructions. Yale: “Please do not fold, bend or spindle my date.” Vassar: “Where, O where is Superman?” Dartmouth: “No dogs please! Have mercy!” Harvard: “Have you any buxom blondes who like poetry?” Mount Holyoke: “None of those dancing bears from Amherst.” Williams: “This is the greatest excuse for calling up a strange girl that I’ve ever heard.” Sarah Lawrence: “Help!”

Elated, Tarr rented a middling-capacity computer for $100 an hour (“I couldn’t swing the million to buy it.”), fed in the coded punch cards (“When guys said we sent them some hot numbers, they meant it literally.”) and sped the names of computer-picked dates to students all over New England. By summer, Operation Match was attracting applications from coast to coast, the staff had grown to a dozen, and Tarr had tied up with Data Network, a Wall St. firm that provided working capital and technical assistance.

In just nine months, some 90,000 applications had been received, $270,000 grossed and the road to romance strewn with guys, girls and gaffes.

A Vassarite who was sent the names of other girls demanded $20 for defamation of character. A Radcliffe senior, getting into the spirit of things, telephoned a girl on her list and said cheerfully, “I hear you’re my ideal date.” At Stanford, a coed was matched with her roommate’s fiance. Girls get brothers. Couples going steady apply, just for reassurance. When a Pembroke College freshman was paired with her former boyfriend, she began seeing him again. “Maybe the computer knows something that I don’t know,” she said.

Not everyone gets what he expects. For some, there is an embarrassment of witches, but others find agreeable surprises. A Northwestern University junior reported: “The girl you sent me didn’t have much upstairs, but what a staircase!”

Match, now graduated to an IBM 7094, guarantees five names to each applicant, but occasionally, a response sets cupid aquiver. Amy Fiedler, 18, blue-eyed, blonde Vassar sophomore, got 112 names. There wasn’t time to date them all before the semester ended, so many called her at her home in New York. “We had the horrors here for a couple of weeks,” her mother says laughingly. “One boy applied under two different names, and he showed up at our house twice!”

Tarr acknowledges that there are goofs, but he remains carefree. “You can’t get hung up about every complaint,” says Tarr. “You’ve got to look at it existentially.”

Jeff, 5′ 7″, likes girls, dates often. “If there’s some chick I’m dying to go out with,” he says, “I can drop her a note in my capacity as president of Match and say, Dear Joan, You have been selected by a highly personal process called Random Sampling to be interviewed extensively by myself. . . .” and Tarr breaks into ingratiating laughter.

“Some romanticists complain that we’re too commercial,” he says. “But we’re not trying to take the love out of love; we’re just trying to make it more efficient. We supply everything but the spark.”•
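The “two-way matching” Shalit describes, in which the computer suggests a pairing only when each student also appears on the other’s “ideal” list, amounts to a mutual filter over ranked candidates. Below is a minimal Python sketch of that rule; the names, the toy scoring function, and the data layout are invented for illustration and make no claim about how Operation Match’s actual program worked.

# Hypothetical sketch of a mutual ("two-way") matching rule.
# Scoring and data layout are invented; not Operation Match's real code.
from itertools import combinations

def compatibility(a, b):
    """Count questionnaire answers the two applicants share."""
    return sum(1 for q in a["answers"] if a["answers"][q] == b["answers"].get(q))

def mutual_matches(applicants, top_n=5):
    """Pair two applicants only if each ranks the other in their own top_n."""
    prefs = {}
    for person in applicants:
        others = [o for o in applicants if o["sex"] != person["sex"]]
        others.sort(key=lambda o: compatibility(person, o), reverse=True)
        prefs[person["name"]] = {o["name"] for o in others[:top_n]}
    pairs = []
    for a, b in combinations(applicants, 2):
        if a["sex"] != b["sex"] and b["name"] in prefs[a["name"]] and a["name"] in prefs[b["name"]]:
            pairs.append((a["name"], b["name"]))
    return pairs

applicants = [
    {"name": "Nikos", "sex": "M", "answers": {"q1": "yes", "q2": "no"}},
    {"name": "Nancy", "sex": "F", "answers": {"q1": "yes", "q2": "no"}},
]
print(mutual_matches(applicants))  # [('Nikos', 'Nancy')]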

 

We might already be smart enough to ensure the continued survival of our species, but I wouldn’t bet on it. Homo sapiens will ultimately need to engineer evolution if we are to continue to thrive (though that new IQ had better be matched by improved ethics). Of course, our species with dramatically improved intelligence will no longer exactly be our species, but that’s not the worst thing.

In a Washington Post piece, UCLA Law Professor Eugene Volokh skillfully lays out the future, arguing that the path to tomorrow won’t be blocked by the reported 83% of Americans who currently think manipulating a baby’s genes for greater intelligence is wrong. It’s scary to think of such procedures at the moment, but eventually the moment will be different. An excerpt about designer babies and geopolitics:

Intelligence is, generally speaking, good, and more is, generally speaking, better. It’s better for the person in question. It’s better for society to have more intelligent people. It’s not the most important thing. But ask yourself: All else being equal, would you rather have your child have an IQ (for all the limitations of that measure) of 85, 100, 115 or 130?

So here’s how it will happen. Say the 83 percent poll results hold, even once safe genetic modification is available (it’s not clear they will, given that at this point they reflect a purely hypothetical question, but say they do), and Congress bans such modification. Or say there is worry — understandable when it comes to a new technology — that the modification won’t be safe and will cause the birth of children with various birth defects or other problems, so Congress bans it because of that.

Now it’s gone! No more of this awful technology. Except, wait: Say the Chinese don’t see things the way we do. Out come some number of babies with horrible birth defects (truly a tragedy, and as a purely ethical matter, possibly a reason against such experimentation; I’m just saying the ethics won’t matter much). And then things get worked out, and now the new generation of Chinese, or Japanese, or Russians becomes on average much smarter than the new generation of Americans. How long will American public opinion remain opposed to a technology that seems vital to national success, and perhaps even national independence?•

Tags:

I posted in January about a fully roboticized Japanese hotel that was in development. The “Henn na” (known as the “Weird Hotel” to the rest of the world) is now opening for business. While the Jurassic Era front-desk clerk will no doubt be an amusing distraction, the lodging is a serious step toward disappearing as many people as possible from hotel-industry employment, toward turning human workers into dinosaurs. In the U.S., we’ve thus far seen only baby steps in that direction. From Yuri Kageyama at the Associated Press:

The receptionist robot that speaks in English is a vicious-looking dinosaur, and the one that speaks Japanese is a female humanoid with blinking lashes. “If you want to check in, push one,” the dinosaur says. The visitor still has to punch a button on the desk, and type in information on a touch panel screen.

Henn na Hotel, as it is called in Japanese, was shown to reporters Wednesday, complete with robot demonstrations, ahead of its opening to the public Friday.

Another feature of the hotel is the use of facial recognition technology, instead of the standard electronic keys, by registering the digital image of the guest’s face during check-in.

The reason? Robots aren’t good at finding keys, if people happen to lose them.

A giant robotic arm, usually seen in manufacturing, is encased in glass quarters in the corner of the lobby. It lifts one of the boxes stacked into the wall and puts it out through a space in the glass, where a guest can place an item in it, to use as a locker.

The arm will put the box back into the wall, until the guest wants it again. The system is called “robot cloak room.”

Why a simple coin locker won’t do isn’t the point.

“I wanted to highlight innovation,” Sawada told reporters. “I also wanted to do something about hotel prices going up.”•
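The key-free check-in the article describes boils down to enrolling one image of the guest’s face at the front desk and matching fresh camera frames against it at the room door. Here is a minimal sketch of that flow, assuming the open-source face_recognition Python library as a stand-in; the hotel’s actual system, function names, and storage scheme are not described in the article and are invented here.

# Illustrative only: enroll a face at check-in, match it at the door.
# Uses the open-source face_recognition library as a stand-in.
import face_recognition

registry = {}  # room number -> stored face encoding (hypothetical storage)

def check_in(room, photo_path):
    """Register the guest's face for a room at check-in."""
    image = face_recognition.load_image_file(photo_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        raise ValueError("No face found in the check-in photo")
    registry[room] = encodings[0]

def unlock(room, photo_path):
    """Return True if a fresh photo matches the face registered to the room."""
    if room not in registry:
        return False
    image = face_recognition.load_image_file(photo_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return False
    return bool(face_recognition.compare_faces([registry[room]], encodings[0])[0])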

Tags:
