
Jeez, Jim Holt is a dream to read. If you’ve never picked up his moving 2012 philosophical inquiry, Why Does the World Exist?: An Existential Detective Story, it’s well worth your time. For a while it was cheekily listed No. 1 at the Strand in the “Best Books to Read While Drunk” category, but I don’t drink, and I adored it.

In a 2003 Slate article, “My Son, the Robot,” Holt wrote of Bill McKibben’s Enough: Staying Human in an Engineered Age, a cautionary nonfiction tale that warned our siblings soon would be silicon sisters, thanks to the progress of genetic engineering, robotics, and nanotechnology. It was only a matter of time.

Holt was unmoved by the clarion call, believing human existence unlikely to be at an inflection point and thinking the author too dour about tomorrow. While Holt’s certainly right that we’re not going to defeat death anytime soon despite what excitable Transhumanists promise, both McKibben and the techno-optimists probably have time on their side.

An excerpt:

Take McKibben’s chief bogy, genetic engineering—specifically, germline engineering, in which an embryo’s DNA would be manipulated in the hopes of producing a “designer baby” with, say, a higher IQ, a knack for music, and no predisposition to obesity. The best reason to ban it (as the European Community has already done) is the physical risk it poses to individuals—namely, to the children whose genes are altered, with unforeseen and possibly horrendous consequences. The next best reason is the risk it poses to society—exacerbating inequality by creating a “GenRich” class of individuals who are smarter, healthier, and handsomer than the underclass of “Naturals.” McKibben cites these reasons, as did Fukuyama (and many others) before him. However, what really animates both authors is a more philosophical point: that genetic engineering would alter “human nature” (Fukuyama) or take away the “meaning” of life (McKibben). As far as I can tell, the argument from human nature and the argument from meaning are mere terminological variants of each other. And both are woolly, especially when contrasted with the libertarian argument that people should be free to do what they wish as long as other parties aren’t harmed.

Finally, McKibben’s reasoning fitfully betrays a vulgar variety of genetic determinism. He approvingly quotes phrases like “genetically engineered thoughts.” Altering an embryo’s DNA to make your child, say, less prone to violence would turn him into an “automaton.” Giving him “genes expressing proteins to boost his memory, to shape his stature” would leave him with “no more choice about how to live his life than a Hindu born untouchable.” Why isn’t the same true with the randomly assigned genes we now have?

Now to the deeper fallacy. McKibben takes it for granted that we are at an inflection point of history, suspended between the prehistoric and the Promethean. He writes, “we just happen to be alive at the brief and interesting moment when [technological] growth starts to really matter—when it spikes.” Everything is about to change.

The extropian visionaries arrayed against him—people like Ray Kurzweil, Hans Moravec, Marvin Minsky, and Lee Silver—agree. In fact, they think we are on the verge of conquering death (which McKibben thinks would be a terrible thing). And they mean RIGHT NOW. When you die, you should have your brain frozen; then, in a couple of decades, it will get thawed out and nanobots will repair the damage; then you can start augmenting it with silicon chips; finally, your entire mental software, and your consciousness along with it (you hope), will get uploaded into a computer; and—with multiple copies as insurance—you will live forever, or at least until the universe falls apart.•



Robotics will likely play a big role in the future of spectator sports, though I don’t envision the discipline taking over ESPN in prime time anytime soon. Auto racing is, of course, already a human-machine hybrid, but will familiar athletics be encroached upon by AI, or will robot-specific sports arise? Are drones to be the new chariots?

In a Medium article, Cody Brown, who’s more bullish on robot athletics in the near term than I am, gives seven reasons for his enthusiasm. An excerpt:

3.) Top colleges fight over teenagers who win robotics competitions.

If you’re good at building a robot, chances are you have a knack for engineering, math, physics, and a litany of other skills top colleges drool over. This is exciting for anyone (at any age) but it’s especially relevant for students and parents deciding what is worth their investment.

There are already some schools that offer scholarships for e-sports. I wouldn’t be surprised if intercollegiate leagues were some of the first to pop up with traction.

5.) Rich people are amused by exceptional machines.

There is a reason that Rolex sponsors Le Mans. A relatively small number of people attend the race but it’s an elite mix of engineers and manufacturers. Many of the people who became multimillionaires in the past 20 years got it from The Internet or some relation to the tech industry. They want to spend their money on what amuses them/their friends and robotics is a natural extension. Mark Zuckerberg recently gave one of the top drone racers in the world (Chapu) a shoutout on Facebook.•


Uber’s claims that it’s good for labor are nonsense, and the press conference in Harlem even used Eric Garner’s name to sell that hokum. The rideshare company is potentially good in some ways–consumer experience, challenging the taxi business’s racial profiling, being friendlier to the environment–but it beat Mayor de Blasio not because of those potentially good things but because it outmaneuvered him with a big lie. That’s worrisome.

Chris Smith of New York smartly sums up the gamesmanship:

De Blasio hadn’t been prepared for the onslaught. What was truly disorienting for him — and politically ominous — was that the roles had been scrambled. The mayor assumed he was the progressive defender of moral fairness and the little guy: Of course city government should regulate anyone trying to add 10,000 commercial vehicles to New York’s streets. Of course he needed to protect the rights of Uber’s “driver-partners.”

Yet Uber was able to deftly outflank de Blasio on his home turf, co-opting pieces of his message, splitting him from his normal Democratic allies, and drawing together an opposition constituency that could haunt de Blasio in 2017. To do so, Uber deployed a sophisticated, expensive political campaign waged by lobbyists and strategists trained in the regimes of Obama, Cuomo, and Bloomberg. That campaign worked in part because even though Uber the company is motivated by the pursuit of profit, not social justice, Uber the product has some genuinely progressive effects. So Uber went straight at the mayor’s minority base, drawing it into its vision of the modern New York. The company’s ad blitz highlighted how Uber’s drivers are mostly black and brown. It held a press conference at Sylvia’s, in Harlem, where the company basically accused the mayor of discriminating against minorities by daring to try to rein in its growth. It pushed data to reporters showing that Uber serves outer-borough neighborhoods that for years were shunned by yellow cabs.

In doing so, the company was able to dispel its aura of Bloomberg-era elitism. Newer services like UberPool, which allows drivers to pick up multiple passengers who split the fare, would ease congestion and make the city greener. Uber exploited its appeal to a youthful, techie, multiracial liberalism, selling itself as about openness and choice — a choice that was being stymied by old bureaucratic ways that have no business in the new city. This was a direct hit on de Blasio’s greatest vulnerability: the mayor’s seeming defense of an entrenched and hated yellow-taxi monopoly that’s been one of his most prolific campaign contributors.•



The news business isn’t dead, just a tale of haves and have-nots like most of contemporary culture, and we may have reached an inflection point in that arrangement over the past year or so. From Jeff Bezos’ purchase of the Washington Post to Nikkei snapping up the Financial Times, those publications with a global name are being acquired and reformatted for a mobile world where smartphones are the main medium. 

The question for me remains whether the New York Times, the most valuable erstwhile newspaper in the U.S., will be able to go it alone as a “mom-and-pop” shop, or whether it must also become part of a gigantic, diversified company. I’d hope for the former and bet on the latter.

From Matthew Garrahan at the Financial Times:

What has convinced investors to take another look at news? “There has been a lot of disruption but there has never been a lack of consumer demand for quality news content,” says Jim Bankoff, chief executive of Vox Media.

Another clue can be found in the actions of Apple and Facebook, which have built news offerings. The technology companies have realised that, like photo sharing and music apps, news can attract and retain users of their mobile services. This year will be the first when smartphones are responsible for 50 per cent of news consumption, up from 25 per cent in 2012, according to Ken Doctor, an analyst with Newsonomics. The smartphone has become “the primary access point for many readers,” he says.

The news brands that have attracted the most interest are digital, mobile and global. For Nikkei, buying the FT gives it an opportunity to expand into new markets — particularly in Asia, says Mr Doctor, where markets such as South Korea, Indonesia and India are growing rapidly. The FT “gives Nikkei more weight and more smarts in how to compete,” he says.•


Elite-level athletes are born with all sorts of genetic advantages. Some are related to lungs and hearts, some to muscles and body type. Michael Phelps couldn’t have been better built in a laboratory for swimming, from leg length to wingspan. Usain Bolt, that wonder, has an innate biological edge to go along with cultural factors that benefit Jamaican runners. There’s no such thing as a level playing field.

So, I’m puzzled when a female competitor with a hormone level that’s naturally elevated into what’s considered male territory is held up to scrutiny. Apart from its connection to sexual characteristics, how is that advantage any different?

The Indian sprinter Dutee Chand, who has high natural levels of testosterone, has thankfully been ruled eligible to compete despite protests. From John Branch at the New York Times:

The final appeals court for global sports on Monday further blurred the line separating male and female athletes, ruling that a common factor in distinguishing the sexes — the level of natural testosterone in an athlete’s body — is insufficient to bar some women from competing against females.

The Court of Arbitration for Sport, based in Switzerland, questioned the athletic advantage of naturally high levels of testosterone in women and immediately suspended the “hyperandrogenism regulation” of track and field’s governing body. It gave the governing organization, known as the I.A.A.F., two years to provide more persuasive scientific evidence linking “enhanced testosterone levels and improved athletic performance.”
The court was ruling on a case, involving the Indian sprinter Dutee Chand, that is the latest demonstration that biological gender is part of a spectrum, not a this-or-that definition easily divided for matters such as sport. It also leaves officials wondering how and where to set the boundaries between male and female competition.
The issue bemuses governing bodies and riles fans and athletes. Among those who testified in support of the I.A.A.F. policy was the British marathon runner Paula Radcliffe, who holds the event’s world record among women. According to the ruling, Radcliffe said that elevated testosterone levels “make the competition unequal in a way greater than simple natural talent and dedication.” She said that other top athletes shared her view.•



Well, of course we shouldn’t engage in autonomous warfare, but what’s obvious now might not always seem so clear. What’s perfectly sensible today might seem painfully naive tomorrow.

I think humans create tools in order to use them, eventually. When electricity (or some other power source) is coursing through those objects, the tools almost become demanding of our attention. If you had asked the typical person 50 years ago–20 years ago?–whether they would accept a surveillance state, the answer would have been a resounding “no.” But here we are. It just crept up on us. How creepy.

I still, however, am glad that Stephen Hawking, Steve Wozniak, Elon Musk and a thousand others engaged in science and technology have petitioned for a ban on AI warfare. It can’t hurt.

From Samuel Gibbs at the Guardian:

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.•



A NASA-commissioned study suggests the costs of establishing and maintaining a moon colony could be reduced by 90% if it’s a multinational, public-private enterprise that utilizes robotics and reusable rockets, and if the moon’s resources are mined for fuel and other necessities. As Popular Science points out, the whole project pretty much rests on there being abundant hydrogen in the moon’s crust, or it’s a no-go.

Below is the study’s passage on robotics, how autonomous machines would aid in the mission and what residual benefits their development may deliver to Earth:

Establish Crew Outpost

Following completion of the ISRU production facility, and the arrival of the large reusable lunar lander, the site is ready for the delivery of habitats, and other infrastructure needed for the permanent crewed lunar base. The ELA is designed to launch a Bigelow BA-330 expandable habitat sized system via either a Falcon Heavy or Vulcan LV to LEO, which is then transferred from LEO to low-lunar orbit (LLO) by leveraging in-space propellant transfer in LEO. The large reusable lunar lander will then rendezvous with the habitat, and other large modules, in LLO and transport them to the surface of the Moon. These modules would be moved by robotic systems from the designated landing areas to the crew habitation area selected during the scouting/prospecting operation. The modules could be positioned into lava tubes, which provide ready-made, natural protection against radiation and thermal extremes, if discovered at the lunar production site. Otherwise, the robotic systems will move regolith over the modules for protection. Additionally, the robotic systems will connect the modules to the communications and power plant at the site.

Human & Robot Interaction as a System:

Why are robotics critical?

The reasons the process begins with robotics, instead of with ‘human-based’ operations like Apollo, include:

1. Robotics offer much lower costs and risk than human operations, where they are effective, an advantage that is amplified in remote and hostile environments.

2. Robotic capabilities are rapidly advancing to a point where robotic assets can satisfactorily prospect for resources and also set up and prepare initial infrastructure prior to human arrival.

3. Robotics can be operated over a long period of time in performing the prospecting and buildup phases without being constrained by human consumables on the surface (food, water, air, CO2 scrubbing, etc.).

4. Robotics can not only be used to establish initial infrastructure prior to crew arrival, preparing the way for subsequent human operations, but to also repair and maintain infrastructure, and operate equipment after humans arrive.

Why do robots need humans to effectively operate a lunar base? Why can’t robotics “do it all”? Why do we even need to involve humans in this effort?

1. Some more complex tasks are better performed jointly by humans and robotics….or by humans themselves. This is an important area of research and testing.

2. Humans operate more effectively and quickly than robotic systems, and are much more flexible. Humans are able to make better-informed and more timely judgments and decisions than robotic operations, and can flexibly adapt to uncertainty and new situations.

3. Robotic technology has not reached a point where robots can repair and maintain themselves. The robotic systems will need periodic as well as unscheduled maintenance and repair….provided by humans.

Public Benefits of Investments in Advanced Robotics

U.S. government investments in advanced technologies such as robotics will have tremendous impacts on American economic growth and innovation here on Earth. The investments just by DARPA in robotic technologies are having significant spill-over effects into many terrestrial applications and dual-use technologies. Examples of dual use technologies include:

a. Robotic systems performing connect/disconnect operations of umbilicals for fluid/propellant loading … could lead to automated refueling of aircraft, cars, launch vehicles, etc.

b. Robotic civil engineering: 3D printing of structures on the Moon with plumbing through industrial 3D printer robotics, could lead to similar automated construction methods here on Earth.

c. Tunnel inspections: Robotic operations for inspecting lava tubes on the Moon could lead to advanced automation in mine shafts on Earth. Advances in autonomous navigation, imagery, and operations for dangerous locations and places could save many lives here on Earth.

d. Remote and intelligent inspection of unsafe structures from natural disasters (tsunamis, radiation leakage, floods, hurricanes) could enable many more operations by autonomous robotics where it is unsafe to send humans.•

If there were two writers whose hearts beat as one despite a generational divide, it would have been Henry Miller and Hunter S. Thompson. When I tweeted on Saturday about a 1965 Thompson article regarding Big Sur becoming too big for its own good, it reminded me of an earlier piece the Gonzo journalist had written about the community, a 1961 Rogue article which centered on Miller’s life there. Big Sur was a place the novelist went for peace and solitude, which worked out well until aspiring orgiasts located it on a map and became his uninvited cult. Despite Miller’s larger-than-life presence, Thompson focuses mostly on the eccentricities of the singular region. I found the piece at Totallygonzo.org. Just click on the pages for a larger, readable version.



We will be measured, and often we won’t measure up.

Connected technologies will not just assess us in the aftermath, but during processes, even before they begin. Data, we’re told, can predict who’ll falter or fail or even become a felon. Like standardized testing, algorithms may aim for meritocracy, but there are likely to be unwitting biases.

And then, of course, there’s the intrusiveness. Those of us who didn’t grow up in such a scenario won’t ever get used to it, and those who do won’t know any other way.

Such processes are being experimented with in the classroom. They’re meant to improve the experience of the student, but they’re fraught with all sorts of issues.

From Helen Warrell at the Financial Times:

A week after students begin their distance learning courses at the UK’s Open University this October, a computer program will have predicted their final grade. An algorithm monitoring how much the new recruits have read of their online textbooks, and how keenly they have engaged with web learning forums, will cross-reference this information against data on each person’s socio-economic background. It will identify those likely to founder and pinpoint when they will start struggling. Throughout the course, the university will know how hard students are working by continuing to scrutinise their online reading habits and test scores.

Behind the innovation is Peter Scott, a cognitive scientist whose “knowledge media institute” on the OU’s Milton Keynes campus is reminiscent of Q’s gadget laboratory in the James Bond films. His workspace is surrounded by robotic figurines and prototypes for new learning aids. But his real enthusiasm is for the use of data to improve a student’s experience. Scott, 53, who wears a vivid purple shirt with his suit, says retailers already analyse customer information in order to tempt buyers with future deals, and argues this is no different. “At a university, we can do some of those same things — not so much to sell our students something but to help them head in the right direction.”

Made possible by the increasing digitisation of education on to tablets and smartphones, such intensive surveillance is on the rise.•


Years before Gore Vidal squared off with a lit Norman Mailer on Dick Cavett’s talk show, he and William F. Buckley tore into each other on live TV on numerous occasions in the run-up to the 1968 Presidential election, a continuing spectacle so vicious (“the only pro or crypto-Nazi here is yourself”) that it ultimately carried over into a courtroom. It was a huge sensation, a dress rehearsal of sorts for the tempestuous Fischer-Spassky televised chess matches, with two men who saw themselves as kingmakers behaving like pawns. The confrontation wasn’t just a sideshow but, sadly, prelude: political opinion becoming little more than scathing spectacle of little value or substance. 

The endless channels enabled by new technologies and the Reagan-era erasure of the Fairness Doctrine would have delivered us into loud, partisan bickering regardless, but Morgan Neville and Robert Gordon’s new documentary, Best of Enemies, sees the Buckley-Vidal drama as the tipping point of our new abnormal.

The opening of Michael M. Grynbaum’s well-written NYT article about the film:

Before partisan panels, split-screen shoutfests and brash personalities became ubiquitous on cable news, there were two men who despised each other sitting side by side on a drab soundstage, debating politics in prime time during the presidential nominating conventions of 1968. There were Gore Vidal and William F. Buckley Jr.

Literary aristocrats and ideological foes, Vidal and Buckley attracted millions of viewers to what, at the time, was a highly irregular experiment: the spectacle of two brilliant minds slugging it out — once, almost literally — on live television. It was witty, erudite and ultimately vicious, an early intrusion of full-contact punditry into the staid pastures of the evening news.

What transpired would alter both men’s lives — and, as a new documentary argues, help change the course of how the American political media reports the news. Best of Enemies, which opens July 31, makes the case that their on-screen feuding opened the floodgates for today’s opinionated, conflict-driven coverage.•


In one of his typically bright, observant posts, Nicholas Carr wryly tackles Amazon’s new scheme of paying Kindle Unlimited authors based on how many of their pages are read, a system which reduces the written word to a granular level of constant, non-demanding engagement. 

There’s an argument to be made that like systems have worked quite well in the past: Didn’t Charles Dickens publish under similar if not-as-precisely-quantified circumstances when turning out his serial novels? Sort of. Maybe not to the same minute degree, but he was usually only as good as his last paragraph (which, thankfully, was always pretty good).

The difference is while it worked for Dickens, this process hasn’t been the motor behind most of the great writing in our history. James Joyce would not have survived very well on this nano scale. Neither would have Virginia Woolf, William Faulkner, Marcel Proust, etc. Their books aren’t just individual pages leafed together but a cumulative effect, a treasure that comes only to those who clear obstacles.

Shakespeare may have had to pander to the groundlings to pay the theater’s light bill, but what if the lights had been turned off mid-performance if he went more than a page without aiming for the bottom of the audience?

Carr’s opening:

When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed.•
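The fixed-pool arithmetic behind Carr’s failed arbitrage can be sketched in a few lines. (A toy model with invented numbers; the piece doesn’t disclose Amazon’s actual pool size or aggregate page counts.)

```python
# Toy model of a fixed-pool, per-page-read royalty scheme like the one
# Carr describes. All figures are invented for illustration.

def per_page_rate(pool_dollars, total_pages_read):
    """With a fixed pool, the per-page rate is simply pool / total reads."""
    return pool_dollars / total_pages_read

# Suppose a $10M monthly pool and 2 billion pages read across all books.
rate = per_page_rate(10_000_000, 2_000_000_000)  # half a cent per page

# An author's payout is just their share of the pages read.
my_pages_read = 40_000
payout = my_pages_read * rate

# The arbitrage fails because fake reads inflate the denominator:
# gibberish pages "read" by hired Turkers dilute the per-page rate for
# everyone, including the schemer, so the fixed pool caps the upside.
diluted_rate = per_page_rate(10_000_000, 2_500_000_000)
assert diluted_rate < rate
```

The zero-sum structure is the whole point: an author can only gain pennies at other authors’ expense, never drain the pool itself.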



Not everyone can be a syphilitic prostitute or a foul-smelling arsonist, so some must settle for being Instagram comedians. 

I mean, there’s nothing wrong with Instagram or Vine or any other platform in the hands of someone who’s genuinely funny and creative, but what’s thought of as humor on these new channels is largely (and almost intentionally) witless, just topical bullshit that doesn’t require anything more than the barest sentience for the so-called jokes to be received.

Case in point: Josh Ostrovsky, whose main flaw should be that he dubbed himself “The Fat Jew” (or “The Fat Jewish”), but he’s also hampered by being resolutely lame in captioning photos unearthed in web searches and posting them to his Instagram account. His brutally unfunny shtick is as painful as it is popular, unless you think “White girl ordering Starbucks” riffs are hilarious. Like the Kardashians, he’s mastered form free of any worthwhile content, but the difference is Reality TV promises none while humor does (or at least should). In an FT piece by John Sunyer, Ostrovsky’s process, as it were, is revealed. An excerpt:

The Fat Jew explains how, each day, from an office he rents in the back of a nail salon in Queens, he and three interns search the bowels of the web for unusual, often slightly ridiculous images — so long as they haven’t already gone viral — to post to his followers.

Last month, celebrating the Supreme Court’s decision to approve same-sex marriage across the US, he posted a Photoshopped picture of rapper Kanye West kissing his double. Below it, the Fat Jew wrote: “Finally, Kanye can legally marry himself absolutely anywhere in this great nation.” The picture gained almost 238,000 likes.

Outlandish taste runs through his comedy — in one picture two pizza slices are positioned together so that they look like the Star of David, with the caption “This is my religion” typed below. He also mocks our dependence on the internet. Take the picture showing a message saying: “Home is where your WiFi connects automatically,” beneath which a caption reads: “Meaning not my parents’ house, where the WiFi password is RHXFGJIJ0000055$T.”

With this sort of content, which people seem unable to resist regramming or telling their friends about, the Fat Jew has found fame and financial success. Still, he says, it’s difficult to explain to “proper adults” what he does.

“Not getting all high and mighty about it but it’s more like performance art than comedy,” he says, draining his cocktail. “I won’t ever open a soup kitchen but what I do is the next best thing. Pictures are what I can give back to the world. A lot of people have steady careers, health insurance, a pay cheque at the end of the month, a wife and three kids. But that kind of life can get boring; sometimes you need to see a fat guy sitting in a giant bowl of chilli.”•


In Jerry Kaplan’s excellent WSJ essay about ethical robots, which is adapted from his forthcoming book, Humans Need Not Apply, the author demonstrates it will be difficult to come up with consistent standards for our silicon sisters, and even if we do, machines following rules 100% of the time will not make for a perfect world. The opening:

As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?

Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.

How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?•



Some things in America are certainly accelerating. That includes things technological (think how quickly autonomous vehicles have progressed since 2004’s “Debacle in the Desert“) and sociological (gay marriage becoming legal in the U.S. just seven years after every Presidential candidate opposed it). Our world just often seems to move more rapidly to a conclusion of one sort or another in a wired, connected world, though the end won’t always be good, and legislation will be more and more feckless in the face of the new normal.

Of course, this sort of progress sets up a trap, convincing many technologists that everything is possible in the near-term future. When I read how some think robot nannies will be caring for children within 15 years, stuff like that, I think perfectly well-intentioned people are getting ahead of themselves. 

In a Forbes piece, Steven Kotler certainly thinks acceleration is the word of the moment, though to his credit he acknowledges that the tantalizing future can be frustrating to realize. An excerpt:

You really have to stop and think about this for a moment. For the first time in history, the world’s leading experts on accelerating technology are consistently finding themselves too conservative in their predictions about the future of technology.

This is more than a little peculiar. It tells us that the accelerating change we’re seeing in the world is itself accelerating. And this tells us something deep and wild and important about the future that’s coming for us.

So important, in fact, that I asked Ken [Goffman of Mondo 2000] to write up his experience with this phenomenon. In his always lucid and always funny own words, here’s his take on the dizzying vertigo that is tomorrow showing up today:

In the early ‘90s, the great science fiction author William Gibson famously remarked, “The Future is here. It’s just not very evenly distributed.” While this was a lovely bit of phraseology, it may have been a bit glib at the time. Nanotechnology was not even a commercial industry yet. The hype around virtual reality went bust. There were no seriously funded brain emulation projects pointing towards the eventuality of artificial general intelligence (now there are several). There were no longevity projects funded by major corporations (now we have the Google-funded Calico). You couldn’t play computer games with your brain. People weren’t winning track meets and mountain climbing on their prosthetic legs. Hell, you couldn’t even talk to your cell phone, if you were among the relatively few who had one.

Over the last few years, the tantalizing promises of radical technological changes that can alter humans and their circumstances have really started to come into their own. Truly, the future is now actually here, but still largely undistributed (never mind, evenly distributed).•


I’m not in favor of capping Uber in NYC, since I don’t think it makes much economic sense to suppress innovation, but as I’ve said many times before, we should be honest about what ridesharing, and more broadly the Gig Economy, truly is: good for consumers, convenience, and the environment, and really bad for Labor.

Not only will ridesharing obliterate taxi jobs that guarantee a minimum wage, but Travis Kalanick’s outfit has no interest in treating its drivers well or even in retaining them in the longer term and is only intent on using workers–even exploiting military vets–for publicity purposes. Yet community leaders and politicians, including New York’s Governor Cuomo, either keep buying this nonsense or are being dishonest.

From Glenn Blain in the New York Daily News:

ALBANY — Gov. Cuomo is siding with Uber in its battle with Mayor de Blasio.

In the latest eruption of bad blood between Cuomo and his one-time frenemy, the governor hailed the ride-sharing company as a job producer and scoffed at a City Council bill — backed by the mayor — that would cap Uber’s growth.

“Uber is one of these great inventions,” Cuomo said during an interview on public radio’s “The Capitol Pressroom” Wednesday.

“It is taking off like fire through dry grass and it’s offering a great service for people and it’s giving people jobs,” Cuomo said. “I don’t think government should be in the business of trying to restrict job growth.”•


In an attempt to rename cars in the potential driverless future, technologists and pundits have offered numerous alternatives: robocars, driverless cars, autonomous cars. Funny thing is, the term “auto” (“by oneself or spontaneous” and “by itself or automatic”), one we already use, would be particularly apt. The word won’t need to change–only the definition will.

In a similar vein, Adrienne LaFrance of the Atlantic has written a smart piece about the word “computer,” which is entering its 3.0 stage, an era that may call back in some ways to the original definition. An excerpt:

Now, leading computer scientists and technologists say the definition of “computer” is again changing. The topic came up repeatedly at a brain-inspired computing panel I moderated at the U.S. Capitol last week. The panelists—neuroscientists, computer scientists, engineers, and academics—agreed: We have reached a profound moment of convergence and acceleration in computing technology, one that will reverberate in the way we talk about computers, and specifically with regard to the word “computer,” from now on.

“It’s like the move from simple adding machines to automated computing,” said James Brase, the deputy associate director for data science at Lawrence Livermore National Laboratory. “Because we’re making an architectural change, not just a technology change. The new kinds of capabilities—it won’t be a linear scale—this will be a major leap.”

The architectural change he’s talking about has to do with efforts to build a computer that can act—and, crucially, learn—the way a human brain does. Which means focusing on capabilities like pattern recognition and juiced-up processing power—building machines that can perceive their surroundings by using sensors, as well as distill meaning from deep oceans of data. “We are at a time where what a computer means will be redefined. These words change. A ‘computer,’ to my grandchildren, will be definitely different,” said Vijay Narayanan, a professor of computer science at Pennsylvania State University.•


I would think there’s life of some sort out there somewhere beyond the little rock we call Earth. It’s never visited us here, but perhaps someday, in one way or another, we’ll make contact. At that point, how do we proceed?

In a Guardian article, Roger Pielke Jr. fears we’ll have to wing it, having done so little thinking about the possibility. An excerpt:

Earlier this week Stephen Hawking, theoretical physicist and modern rock star scientist, along with Russian billionaire Yuri Milner announced that they would be launching a new project to boost efforts to look for intelligent life outside our solar system. Milner, who is funding the effort, said that he had been “thinking about this since I was a child, reading Carl Sagan’s book.”

Upon hearing of the new project, called Breakthrough Listen, I was reminded, of all things, of a recent prison break. Last month two convicted murderers escaped from a New York prison. They had spent months carefully planning and executing their escape, which involved cutting and digging their way through walls, pipes and concrete. Remarkably, however, the pair gave little thought to what they would do if they actually succeeded in their plans. The consequence of the lack of planning was a short effort to flee from authorities, followed by the death of one prisoner and the recapture of the other.

The search for extra-terrestrial life shares some similarities. We are investing considerable attention and resources into the search, but little into thinking about the consequences of success. As Carl Sagan imagined, it is as if we expect to fail, which would be a relief. Even Milner says, “It’s quite likely that we won’t find anything.” But what if we do succeed? What then? …

In fact, it seems unlikely that any policy makers in national or international settings have a clearly thought-through plan for responding to the discovery of extraterrestrial life, whether that be microbes on another body in our solar system or beady-eyed aliens looking to invade. The conversation is only silly if we assume that efforts to detect alien life will never succeed.•


David Brooks’ recent op-ed about Ta-Nehisi Coates, the one in which he approached the critic and his skin color cautiously and with some surprise, as if he’d happened upon a strange creature in a forest, was most troubling to me for two reasons, both of which were demonstrated in the same passage.

This one:

You are illustrating the perspective born of the rage “that burned in me then, animates me now, and will likely leave me on fire for the rest of my days.”

I read this all like a slap and a revelation. I suppose the first obligation is to sit with it, to make sure the testimony is respected and sinks in. But I have to ask, Am I displaying my privilege if I disagree? Is my job just to respect your experience and accept your conclusions? Does a white person have standing to respond?

If I do have standing, I find the causation between the legacy of lynching and some guy’s decision to commit a crime inadequate to the complexity of most individual choices.

I think you distort American history.•

  • The line “Does a white person have standing to respond?” is a galling transference of burden from people who’ve experienced actual oppression to people who’ve experienced none, a time-tested trick in this country, and one used repeatedly in discussions about Affirmative Action and other issues. The faux victimhood is appalling.
  • Even worse is this doozy: “I find the causation between the legacy of lynching and some guy’s decision to commit a crime inadequate to the complexity of most individual choices.” Here’s a negation of history and culpability that’s jaw-dropping. Does Brooks believe the inordinate poverty, imprisonment and shorter lifespans of African-Americans stem solely from their decisions? Does he believe Native Americans, the targets of another American holocaust, have such pronounced social problems because of poor choices they made? From Jim Crow to George Zimmerman, American law and justice have often been designed to reduce former slaves and their descendants, not to keep the peace but to maintain power. If Brooks wants to point out the Civil Rights Act as a remedy to Jim Crow, that’s fine, but you don’t get to do a victory lap for offering basic decency, and a sensible person knows that improvement in our country, often a slow and grueling and bloody thing, leaves deep scars.•


Julian Assange, the alleged Bill Cosby of Wikileaks, can be a preposterous blowhard, but that doesn’t mean he hasn’t been a useful part of the discussion about surveillance. At Spiegel, Michael Sontheimer has a new longform Q&A with Assange, in which they discuss what might be called Wikileaks 2.0, as well as the “digital colonization of the world” by Silicon Valley powerhouses, this era’s analog of America’s twentieth-century cultural exportation of Hollywood and hamburgers.

An excerpt:


Spiegel:

You met Eric Schmidt, the CEO of Google. Do you think he is a dangerous man?

Julian Assange:

If you ask “Does Google collect more information than the National Security Agency?” the answer is “no,” because NSA also collects information from Google. The same applies to Facebook and other Silicon Valley-based companies. They still collect a lot of information and they are using a new economic model which academics call “surveillance capitalism.” General information about individuals is worth little, but when you group together a billion individuals, it becomes strategic like an oil or gas pipeline.


Spiegel:

Secret services are perceived as potential criminals but the big IT corporations are perceived at least in an ambiguous way. Apple produces beautiful computers. Google is a useful search engine.

Julian Assange:

Until the 1980s, computers were big machines designed for the military or scientists, but then the personal computers were developed and companies had to start rebranding them as machines that were helpful for individual human beings. Organizations like Google, whose business model is “voluntary” mass surveillance, appear to be giving it away for free. Free e-mail, free search, etc. Therefore it seems that they’re not a corporation, because corporations don’t do things for free. It falsely seems like they are part of civil society.


Spiegel:

And they shape the thinking of billions of users?

Julian Assange:

They are also exporting a specific mindset of culture. You can use the old term of “cultural imperialism” or call it the “Disneylandization” of the Internet. Maybe “digital colonization” is the best terminology.


Spiegel:

What does this “colonization” look like?

Julian Assange:

These corporations establish new societal rules about what activities are permitted and what information can be transmitted. Right down to how much nipple you can show. Down to really basic matters, which are normally a function of public debate and parliaments making laws. Once something becomes sufficiently controversial, it’s banned by these organizations. Or, even if it is not so controversial, but it affects the interests that they’re close to, then it’s banned or partially banned or just not promoted.


Spiegel:

So in the long run, cultural diversity is endangered?

Julian Assange:

The long-term effect is a tendency towards conformity, because controversy is eliminated. An American mindset is being fostered and spread to the rest of the world because they find this mindset to be uncontroversial among themselves. That is literally a type of digital colonialism; non-US cultures are being colonized by a mindset of what is tolerable to the staff and investors of a few Silicon Valley companies. The cultural standard of what is a taboo and what is not becomes a US standard, where US exceptionalism is uncontroversial.•


E.L. Doctorow, who wrote several great novels and one perfect one (Ragtime), sadly just died. Historical fiction can be a really tiresome thing in most hands, especially when the subjects are recent ones, but Doctorow was as good as anyone at the truth-fiction mélange. I’ve never read his early sci-fi book, Big As Life, and would like to.

A brief 1975 People magazine article cataloged that rare moment when literary success dovetailed with the commercial kind. Apparently, Robert Altman was first set to direct the big-screen adaptation of Ragtime, those honors eventually falling to Milos Forman. The opening:

The offers went up like the temperature in steamy Manhattan—$1 million, $1.5 million. And when the final bid of $1.85 million came in, an ambitious 270-page novel called Ragtime had made literary history. It was the highest price ever paid for paperback rights to a book—edging out the Joy of Cooking by $350,000. Nine publishing houses spent more than 12 hours politely jockeying before Bantam Books made the deal.

Ragtime‘s genteel, 44-year-old author E.L. Doctorow did not, of course, attend the vulgar merchandising rites. That’s what agents are for. Doctorow was in fact 45 minutes from Broadway, browsing in a New Rochelle bookstore with his 13-year-old son, Sam, at the historic moment of sale. Finally reached by phone by his hardback publisher at Random House, Doctorow was pleased but not overwhelmed at the news that he was an instant millionaire (he will receive half the $1.8 million plus royalties on the best-selling hard cover). His three previous novels—critical but not financial triumphs—had given him a Garboesque perspective on wealth. “I really feel,” Doctorow says, “that money is like sex—it’s a private matter.”

For Bantam, the transaction will turn financially sour unless it can peddle Ragtime, to be published next summer at over $2 a copy, to 4 or 5 million customers. A big box-office movie usually helps push paperback sales, and film rights for Ragtime have been sold to this year’s top director, Robert (Nashville) Altman. Doctorow has already heard from a fellow alumnus of Kenyon College in Ohio who wants to be one of the leads. “Remember me?” asked Paul Newman. “We went to college together, and I’d love to play in the movie.” “Terrific,” said a flattered Doctorow—who graduated in 1952, three years after the 50-year-old Newman, and never met the actor—”you’d be great for the part of the father.” But, protested Newman, “I want to play the younger brother.”•


If you see a guy on a bus in your neighborhood who looks like the great artist Ai Weiwei, it may be the real Ai Weiwei because thankfully he’s had his passport returned to him by the Chinese government after four years of stalling.

Below is a 2014 New York Times video, in which he delivers a techno-positivist take on art in an age of free-flowing information and super-connectedness. The transition is certainly a huge win in the big picture, though there’s something very different about community today being a virtual thing rather than a geographical one. And the logical outcome of the Internet of Things is that we’ll all ultimately be under surveillance like Ai Weiwei has been. I guess the most hopeful scenario is that if we’re all naked, nudity won’t matter anymore.


There’s nothing the hideous hotelier Donald Trump has said or done that’s outside the modern Republican Party playbook. He just refuses to use soft, coded language to sell his extremism, which has been the hallmark of the GOP since the rise of Newt Gingrich. So when he disgustingly labels President Obama (“Kenyan”) or Mexicans (“rapists”) or John McCain (“captured“), he’s just boiled down the mindset of conservatives to its essence. And if you think the McCain remarks were somehow out of character for Republicans, see how John Kerry or Tammy Duckworth feel about that. 

The positive response to Trump among many registered Republicans is really no surprise. The more radical band, from Christian conservatives to the Tea Party, is weary of being used by the Karl Roves of the world to consolidate power. They’re angry and they want someone who’ll speak to that anger. The racist Birther buffoon may fall from the penthouse, but his followers–both a torch-carrying mob and the Frankenstein monster the party has created–will still be there. They can’t be controlled, and that’s the logical outcome for the GOP, which has spent decades cultivating anger over threatened privilege.

From Edward Luce in the Financial Times:

Any moment now, the most buffoonish, prejudiced, egomaniacal candidate in recent Republican history will implode. All that will be left to remind us of our brief spell of folly will be those neon signs flashing “Trump” on skyscrapers and casinos around the country. At that point, thank goodness, we can resume politics as usual.

Alas, the cognoscenti are kidding themselves. US politics will not pick up where it left off. Even if Mr Trump becomes the first recorded human to be lifted skywards in rapture — the “end of days” scenario to which some of his fans subscribe — he will leave a visible mark on the Republican party.

In a field of 16 candidates, when one polls a quarter of the vote it is the equivalent of a landslide. Mr Trump’s detractors, who form arguably one of the largest bipartisan coalitions in memory, comfort themselves that he is simply on an ego trip that will turn sour. That may be true. But they are missing the point. The legions of Republicans flocking to Mr Trump’s banner are not going anywhere. If he crashes, which he eventually must, they will find another champion.•


If Brad Darrach hadn’t also profiled Bobby Fischer, the autonomous robot named Shakey would have been the most famous malfunctioning machine he ever wrote about. 

The John Markoff Harper’s piece I posted about earlier made mention of Shakey, the so-called “first electronic person,” which struggled to take baby steps on its own during the 1960s at the Stanford Research Institute. The machine’s intelligence was glacial, and much to the chagrin of its creators, it did not show rapid progress in the years that followed, as Moore’s Law forgot to do its magic.

Although Darrach’s 1970 Life piece mostly focuses on the Palo Alto area, it ventured to the other coast to record this extravagantly wrong prediction from MIT genius Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.” It makes you wonder which current prognostications and timeframes are more excitable than executable.

The Life article (which misspelled the robot’s name as “Shaky”) was a little too pliable, but it did sagely acknowledge the potential for a post-work world and the advent of superintelligence and the challenges those developments might bring. The opening:

It looked at first glance like a Good Humor wagon sadly in need of a spring paint job. But instead of a tinkly little bell on top of its box-shaped body there was this big mechanical whangdoodle that came rearing up, full of lenses and cables, like a junk sculpture gargoyle.

“Meet Shaky,” said the young scientist who was showing me through the Stanford Research Institute. “The first electronic person.”

I looked for a twinkle in the scientist’s eye. There wasn’t any. Sober as an equation, he sat down at an input terminal and typed out a terse instruction which was fed into Shaky’s “brain,” a computer set up in a nearby room: PUSH THE BLOCK OFF THE PLATFORM.

Something inside Shaky began to hum. A large glass prism shaped like a thick slice of pie and set in the middle of what passed for his face spun faster and faster till it dissolved into a glare. Then his superstructure made a slow 360-degree turn and his face leaned forward and seemed to be staring at the floor. As the hum rose to a whir, Shaky rolled slowly out of the room, rotated his superstructure again and turned left down the corridor at about four miles an hour, still staring at the floor.

“Guides himself by watching the baseboards,” the scientist explained as he hurried to keep up. At every open door Shaky stopped, turned his head, inspected the room, turned away and idled on to the next open door. In the fourth room he saw what he was looking for: a platform one foot high and eight feet long with a large wooden block sitting on it. He went in, then stopped short in the middle of the room and stared for about five seconds at the platform. I stared at it too.

“He’ll never make it,” I found myself thinking. “His wheels are too small.” All at once I got gooseflesh. “Shaky,” I realized, “is thinking the same thing I am thinking!”

Shaky was also thinking faster. He rotated his head slowly till his eye came to rest on a wide shallow ramp that was lying on the floor on the other side of the room. Whirring briskly, he crossed to the ramp, semicircled it and then pushed it straight across the floor till the high end of the ramp hit the platform. Rolling back a few feet, he cased the situation again and discovered that only one corner of the ramp was touching the platform. Rolling quickly to the far side of the ramp, he nudged it till the gap closed. Then he swung around, charged up the slope, located the block and gently pushed it off the platform.

Compared to the glamorous electronic elves who trundle across television screens, Shaky may not seem like much. No death-ray eyes, no secret transistorized lust for nubile lab technicians. But in fact he is a historic achievement. The task I saw him perform would tax the talents of a lively 4-year-old child, and the men who over the last two years have headed up the Shaky project—Charles Rosen, Nils Nilsson and Bert Raphael—say he is capable of far more sophisticated routines. Armed with the right devices and programmed in advance with basic instructions, Shaky could travel about the moon for months at a time and, without a single beep of direction from the earth, could gather rocks, drill cores, make surveys and photographs and even decide to lay plank bridges over crevices he had made up his mind to cross.

The center of all this intricate activity is Shaky’s “brain,” a remarkably programmed computer with a capacity of more than 1 million “bits” of information. In defiance of the soothing conventional view that the computer is just a glorified abacus that cannot possibly challenge the human monopoly of reason, Shaky’s brain demonstrates that machines can think. Variously defined, thinking includes such processes as “exercising the powers of judgment” and “reflecting for the purpose of reaching a conclusion.” In some of these respects—among them powers of recall and mathematical agility—Shaky’s brain can think better than the human mind.

Marvin Minsky of MIT’s Project Mac, a 42-year-old polymath who has made major contributions to Artificial Intelligence, recently told me with quiet certitude, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”

I had to smile at my instant credulity—the nervous sort of smile that comes when you realize you’ve been taken in by a clever piece of science fiction. When I checked Minsky’s prophecy with other people working on Artificial Intelligence, however, many of them said that Minsky’s timetable might be somewhat wishful—”give us 15 years,” was a common remark—but all agreed that there would be such a machine and that it could precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in science, education and the arts. At the same time a number of computer scientists fear that the godsend may become a Golem. “Man’s limited mind,” says Minsky, “may not be able to control such immense mentalities.”•


Cloned sheep once gave humans nightmares, but the science has quietly insinuated itself into the world of polo, thanks to star Adolfo Cambiaso, who impetuously saved cells from his best stallion, Aiken Cura, as the horse was being prepared for euthanasia, its leg ruined. There are now dozens of cloned versions of champion polo ponies, some of whom are competing on the field of play. 

In a really smart Vanity Fair article, Haley Cohen explores how cloning science, which dates back to sea-urchin experiments in 1885, came to the sport of mounts and mallets. Oddly, it involves Imelda Marcos. And, yes, there is discussion about using the same methods to clone humans. An excerpt:

As the pair made their way toward Cambiaso’s stabling area, the exhausted Aiken Cura’s front left leg suddenly gave out. When Cambiaso felt the horse begin to limp beneath him, he leapt out of his saddle and threw his blue-and-white helmet to the ground in anguish.

“Save this one whatever it takes!” he pleaded, covering his face with his gloves. But the leg had to be amputated below the knee, and eventually Cambiaso—whose team won the Palermo Open that year and would go on to win the tournament another five times—was forced to euthanize his beloved Cura.

Before he said his final good-bye, however, he had a curious request: he asked a veterinarian to make a small puncture in the stallion’s neck, put the resulting skin sample into a deep freeze, and store it in a Buenos Aires laboratory. He remembers, “I just thought maybe, someday, I could do something with the cells.”

His hope was not in vain. With the saved skin sample, Cambiaso was able to use cloning technology to bring Aiken Cura back to life. These days, a four-year-old, identical replica of Cambiaso’s star stallion—called Aiken Cura E01—cavorts around a flower-rimmed field in the Argentinean province of Córdoba, where he has begun to breed and train for competition.

Now 40 years old, Cambiaso is ruggedly handsome, with long brown hair, covetable bone structure, and permanent stubble. But in spite of his athleticism, good looks, and wealth, he is surprisingly shy. Walking across the Palermo polo field, where he’s come to watch his oldest daughter play, he speaks in short spurts, as if he would rather not be talking to a stranger. Staring into the distance, he says, “Today, seeing these clones is more normal for me. But seeing Cura alive again after so many years was really strange. It’s still strange. Thank goodness I saved his cells.”•



Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to utilize computing power to augment human intelligence and those who hope to create actual intelligent machines that have no particular stake in the condition of carbon-based life. 

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•

