Science/Tech


In a New Statesman essay, Yuval Noah Harari, author of the great book Sapiens, argues that if we’re on the precipice of a grand human revolution–in which we commandeer evolutionary forces and create a post-scarcity world–it’s being driven by private-sector technocracy, not politics, that attenuated, polarized thing. The next Lenins, the new visionaries focused on large-scale societal reorganization, Harari argues, live in Silicon Valley, and even if they don’t succeed, their efforts may significantly impact our lives. An excerpt:

Whatever their disagreements about long-term visions, communists, fascists and liberals all combined forces to create a new state-run leviathan. Within a surprisingly short time, they engineered all-encompassing systems of mass education, mass health and mass welfare, which were supposed to realise the utopian aspirations of the ruling party. These mass systems became the main employers in the job market and the main regulators of human life. In this sense, at least, the grand political visions of the past century have succeeded in creating an entirely new world. The society of 1800 was completely destroyed and we are living in a new reality altogether.

In 1900 or 1950 politicians of all hues thought big, talked big and acted even bigger. Today it seems that politicians have a chance to pursue even grander visions than those of Lenin, Hitler or Mao. While the latter tried to create a new society and a new human being with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and supercomputers. In the coming decades, technological breakthroughs are likely to change human society, human bodies and human minds in far more drastic ways than ever before.

Whereas the Nazis sought to create superhumans through selective breeding, we now have an increasing arsenal of bioengineering tools at our disposal. These could be used to redesign the shapes, abilities and even desires of human beings, so as to fulfil this or that political ideal. Bioengineering starts with the understanding that we are far from realising the full potential of organic bodies. For four billion years natural selection has been tinkering and tweaking with these bodies, so that we have gone from amoebae to reptiles to mammals to Homo sapiens. Yet there is no reason to think that sapiens is the last station. Relatively small changes in the genome, the neural system and the skeleton were enough to upgrade Homo erectus – who could produce nothing more impressive than flint knives – to Homo sapiens, who produces spaceships and computers. Who knows what the outcome of a few more changes to our genome, neural system and skeleton might be? Bioengineering is not going to wait patiently for natural selection to work its magic. Instead, bioengineers will take the old sapiens body and ­intentionally rewrite its genetic code, rewire its brain circuits, alter its biochemical balance and grow entirely new body parts.

On top of that, we are also developing the ability to create cyborgs.•


The robotic store has been a long-held dream, and in and of itself it’s a good thing, but it’s certainly not a positive for Labor unless new work opportunities pop up to replace those disappeared or we come to some sort of political solution to a shrinking need for human hands. In Iowa, a completely automated nonprofit grocery will offer shoppers healthy food, which is wonderful, but not completely wonderful. From Christopher Snyder:

No more long lines at the grocery store – the future of food shopping is getting a high-tech upgrade.

Des Moines, Iowa is planning to build a first-of-a-kind robotic grocery store as an experiment to offer food and necessities to locals anytime at their convenience.

A partnership between the nonprofit Eat Greater Des Moines and the business equipment firm Oasis24seven will see an automated, vending machine-style unit come to the area.

“Throughout Des Moines, there are areas of town where access to quality food is limited,” said Aubrey Alvarez, the nonprofit’s executive director. “We would love for a full service grocery store to move into these areas, but until that time the robotic unit will address the gap in the community.”

She added this “project takes a simple and familiar idea, a vending machine, and turns it on its head. Robotic Retail will be accessible to everyone.”•


If Marshall McLuhan and Jerome Agel were still alive, they would likely not collaborate with Quentin Fiore (95 this year) on a physical book, not even on one as great as The Medium Is the Massage, a paperback that fit between its covers something akin to the breakneck genius of Godard’s early-’60s explosion. Would they create a Facebook page that comments on Facebook or a Twitter account of aphorisms or maybe an app? I don’t know, but it likely wouldn’t be a leafy thing you could put on a wooden shelf.

About 10 days ago, I bought a copy of The Age of Earthquakes, a book created by Douglas Coupland, Hans Ulrich Obrist and Shumon Basar, which seems a sort of updating of McLuhan’s most-famous work, a Massage for the modern head and neck. It looks at our present and future but also, by virtue of being a tree-made thing, the past. As soon as I’m done with the title I’m reading now, I’ll spend a day with Earthquakes and post something about it.

In his latest Financial Times column, Coupland writes about the twin refiners of the modern mood: pharmacology and the Internet, the former of which I think has made us somewhat happier, and the latter of which we’ve used, I think, largely to self-medicate, stretching egos to cover unhappiness rather than dealing with it, and as the misery, untreated, expands, so does its cover. We’re smarter because of the connectivity, but I don’t know that it’s put us in a better mood.

Coupland is much more sanguine than I am about it all. He’s in a better mood. An excerpt:

If someone time travelled from 1990 (let alone from 1900) to 2015 and was asked to describe the difference between then and now, they might report back: “Well, people don’t use light bulbs any more; they use these things called LED lights, which I guess save energy, but the light they cast is cold. What else? Teenagers seem to no longer have acne or cavities, cars are much quieter, but the weirdest thing is that everyone everywhere is looking at little pieces of glass they’re holding in their hands, and people everywhere have tiny earphones in their ears. And if you do find someone without a piece of glass or earphones, their faces have this pained expression as if to say, “Where is my little piece of glass? What could possibly be in or on that little piece of glass that could so completely dominate a species in one generation?”

 . . . 

To pull back a step or two, as a species we ought to congratulate ourselves. In just a quarter of a century we have completely rewritten the menu of possible human moods, and quite possibly for the better. Psychopharmacology, combined with the neural reconfiguration generated by extended internet usage, has turned human behaviour into something inexplicable to someone from the not too distant past. We forget this so easily. Until Prozac came out in 1987, the only mood-altering options were mid-century: booze, pot and whatever MGM fed Judy Garland to keep her vibrating for three decades. The Prozac ripple was enormous . . .•


Marshall McLuhan was right, for the most part. 

The Canadian theorist saw Frankenstein awakening from the operating table before others did, so the messenger was often mistaken for the monster. But he was neither Dr. Victor nor his charged charge, just an observer with a keen eye, one who could recognize patterns and realized humans might not be alone forever in that talent. Excerpts follow from two 1960s pieces that explore his ideas. The first is from artist-writer Richard Kostelanetz‘s 1967 New York Times article “Understanding McLuhan (In Part)” and the other from John Brooks’ 1968 New Yorker piece “Xerox Xerox Xerox Xerox.”

____________________________

Kostelanetz’s opening:

Marshall McLuhan, one of the most acclaimed, most controversial and certainly most talked-about of contemporary intellectuals, displays little of the stuff of which prophets are made. Tall, thin, middle-aged and graying, he has a face of such meager individual character that it is difficult to remember exactly what he looks like; different photographs of him rarely seem to capture the same man.

By trade, he is a professor of English at St. Michael’s College, the Roman Catholic unit of the University of Toronto. Except for a seminar called “Communication,” the courses he teaches are the standard fare of Mod. Lit. and Crit., and around the university he has hardly been a celebrity. One young woman now in Toronto publishing remembers that a decade ago, “McLuhan was a bit of a campus joke.” Even now, only a few of his graduate students seem familiar with his studies of the impact of communications media on civilization – those famous books that have excited so many outside Toronto.

McLuhan’s two major works, The Gutenberg Galaxy (1962) and Understanding Media (1964), have won an astonishing variety of admirers. General Electric, I.B.M. and Bell Telephone have all had him address their top executives; so have the publishers of America’s largest magazines. The composer John Cage made a pilgrimage to Toronto especially to pay homage to McLuhan, and the critic Susan Sontag has praised his “grasp on the texture of contemporary reality.”

He has a number of eminent and vehement detractors, too. The critic Dwight Macdonald calls McLuhan’s books “impure nonsense, nonsense adulterated by sense.” Leslie Fiedler wrote in Partisan Review: “Marshall McLuhan. . .continually risks sounding like the body-fluids man in Doctor Strangelove.”

Still, the McLuhan movement rolls on.•

____________________________

From Brooks:

In the opinion of some commentators, what has happened so far is only the first phase of a kind of revolution in graphics. “Xerography is bringing a reign of terror into the world of publishing, because it means that every reader can become both author and publisher,” the Canadian sage Marshall McLuhan wrote in the spring, 1966, issue of the American Scholar. “Authorship and readership alike can become production-oriented under xerography.… Xerography is electricity invading the world of typography, and it means a total revolution in this old sphere.” Even allowing for McLuhan’s erratic ebullience (“I change my opinions daily,” he once confessed), he seems to have got his teeth into something here. Various magazine articles have predicted nothing less than the disappearance of the book as it now exists, and pictured the library of the future as a sort of monster computer capable of storing and retrieving the contents of books electronically and xerographically. The “books” in such a library would be tiny chips of computer film — “editions of one.” Everyone agrees that such a library is still some time away. (But not so far away as to preclude a wary reaction from forehanded publishers. Beginning late in 1966, the long-familiar “all rights reserved” rigmarole on the copyright page of all books published by Harcourt, Brace & World was altered to read, a bit spookily, “All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system …” Other publishers quickly followed the example.) One of the nearest approaches to it in the late sixties was the Xerox subsidiary University Microfilms, which could, and did, enlarge its microfilms of out-of-print books and print them as attractive and highly legible paperback volumes, at a cost to the customer of four cents a page; in cases where the book was covered by copyright, the firm paid a royalty to the author on each copy produced. But the time when almost anyone can make his own copy of a published book at lower than the market price is not some years away; it is now. All that the amateur publisher needs is access to a Xerox machine and a small offset printing press. One of the lesser but still important attributes of xerography is its ability to make master copies for use on offset presses, and make them much more cheaply and quickly than was previously possible. According to Irwin Karp, counsel to the Authors League of America, an edition of fifty copies of any printed book could in 1967 be handsomely “published” (minus the binding) by this combination of technologies in a matter of minutes at a cost of about eight-tenths of a cent per page, and less than that if the edition was larger. A teacher wishing to distribute to a class of fifty students the contents of a sixty-four-page book of poetry selling for three dollars and seventy-five cents could do so, if he were disposed to ignore the copyright laws, at a cost of slightly over fifty cents per copy.

The danger in the new technology, authors and publishers have contended, is that in doing away with the book it may do away with them, and thus with writing itself. Herbert S. Bailey, Jr., director of Princeton University Press, wrote in the Saturday Review of a scholar friend of his who has cancelled all his subscriptions to scholarly journals; instead, he now scans their tables of contents at his public library and makes copies of the articles that interest him. Bailey commented, “If all scholars followed [this] practice, there would be no scholarly journals.” Beginning in the middle sixties, Congress has been considering a revision of the copyright laws — the first since 1909. At the hearings, a committee representing the National Education Association and a clutch of other education groups argued firmly and persuasively that if education is to keep up with our national growth, the present copyright law and the fair-use doctrine should be liberalized for scholastic purposes. The authors and publishers, not surprisingly, opposed such liberalization, insisting that any extension of existing rights would tend to deprive them of their livelihoods to some degree now, and to a far greater degree in the uncharted xerographic future. A bill that was approved in 1967 by the House Judiciary Committee seemed to represent a victory for them, since it explicitly set forth the fair-use doctrine and contained no educational-copying exemption. But the final outcome of the struggle was still uncertain late in 1968. McLuhan, for one, was convinced that all efforts to preserve the old forms of author protection represent backward thinking and are doomed to failure (or, anyway, he was convinced the day he wrote his American Scholar article). “There is no possible protection from technology except by technology,” he wrote. “When you create a new environment with one phase of technology, you have to create an anti-environment with the next.” But authors are seldom good at technology, and probably do not flourish in anti-environments.•
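Karp’s arithmetic, by the way, checks out. A minimal back-of-the-envelope in Python, using only the figures quoted above:

# Checking Irwin Karp's 1967 figures as quoted above: roughly
# eight-tenths of a cent per page for an edition of fifty copies.
pages = 64                # the sixty-four-page book of poetry
cost_per_page = 0.008     # $0.008, i.e., eight-tenths of a cent
retail_price = 3.75       # the book's cover price

cost_per_copy = pages * cost_per_page
print(f"Cost per copy: ${cost_per_copy:.3f}")    # $0.512 -- "slightly over fifty cents"
print(f"Savings vs. retail: ${retail_price - cost_per_copy:.2f} per copy")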


Grantland has many fine writers and reporters, but the twin revelations for me have been Molly Lambert and Alex Pappademas, whom I enjoy reading as much as anyone working at any American publication. The funny thing is, I’m not much into pop culture, which is ostensibly their beat. But as with the best of journalists, the subject they cover most directly is merely an entry into many other ones, long walks that end up in big worlds. 

Excerpts follow from a recent piece by each. In “Start-up Costs,” a look at Silicon Valley and Halt and Catch Fire, Pappademas circles back to Douglas Coupland’s 1995 novel, Microserfs, a meditation on the reimagined office space written just before Silicon Valley became fully a brand as well as a land. In Lambert’s “Life Finds a Way,” the release of Jurassic World occasions an exploration of the enduring beauty of decommissioned theme parks–dinosaurs in and of themselves–at the tail end of an entropic state. Both pieces are concerned with an imposition on the natural order of things by capitalism.

_______________________________

From Pappademas:

Microserfs hit stores in 1995, which turned out to be a pretty big year for Net-this and Net-that. Yahoo, Amazon, and Craigslist were founded; JavaScript, the MP3 compression standard, cost-per-click and cost-per-impression advertising, the first “wiki” site, and the Internet Explorer browser were introduced. Netscape went public; Bill Gates wrote the infamous “Internet Tidal Wave” memo to Microsoft executives, proclaiming in the course of 5,000-plus words that the Internet was “the most important single development to come along since the IBM PC was introduced in 1981.” Meanwhile, at any time between May and September, you could walk into a multiplex not yet driven out of business by Netflix and watch a futuristic thriller like Hackers or Johnny Mnemonic or Virtuosity or The Net, movies that capitalized on the culture’s tech obsession as if it were a dance craze, spinning (mostly absurd) visions of the (invariably sinister) ways technology would soon pervade our lives. Microserfs isn’t as hysterical as those movies, and its vision of the coming world is much brighter, but in its own way it’s just as wrongheaded and nailed-to-its-context.

“What is the search for the next great compelling application,” Daniel asks at one point, “but a search for the human identity?” Microserfs argues that the entrepreneurial fantasy of ditching a big corporation to work at a cool start-up with your friends can actually be part of that search — that there’s a way to reinvent work in your own image and according to your own values, that you can find the same transcendence within the sphere of commerce that the slackers in Coupland’s own Generation X eschewed McJobs in order to chase. The notion that cutting the corporate cord to work for a start-up often just means busting out of a cubicle in order to shackle oneself to a laptop in a slightly funkier room goes unexamined; the possibility that work within a capitalist system, no matter how creative and freeform and unlike what your parents did, might be fundamentally incompatible with self-actualization and spiritual fulfillment is not on the table.•

_______________________________

Lambert’s opening:

I drove out to the abandoned amusement park originally called Jazzland during a trip to New Orleans earlier this year. Jazzland opened in 2000, was rebranded as Six Flags New Orleans in 2003, and was damaged beyond repair a decade ago by the flooding caused by Hurricane Katrina. But in the years since it’s been closed, it has undergone a rebirth as a filming location. It serves as the setting for the new Jurassic World. As I approached the former Jazzland by car, a large roller coaster arced into view. The park, just off Interstate 10, was built on muddy swampland. I have read accounts on urban exploring websites by people who’ve sneaked into the park that say it’s overrun with alligators and snakes.

After the natural disaster the area wasted no time in returning to its primeval state: a genuine Jurassic World. It was in the Jurassic era when crocodylia became aquatic animals, beginning to resemble the alligators currently populating Jazzland. I saw birds of prey circling over the theme park as I reached the front gates, only to be told in no uncertain terms that the site is closed to outsiders. I pleaded with the security guard that I am a journalist just looking for a location manager to talk to, but was forbidden from driving past the very first entrance into the parking lot. I could see the ticket stands and Ferris wheel, but accepted my fate and drove away, knowing I’d have to wait for Jurassic World to see Jazzland. As I drove off the premises, I could still glimpse the tops of the coasters and Ferris wheel, obscured by trees.

I am fascinated by theme parks that return to nature, since the idea of a theme park is such an imposition on nature to begin with — an obsessively ordered attempt to overrule reality by providing an alternate, superior dimension.•

 


Olaf Stampf, who always conducts smart interviews for Spiegel, has a Q&A with Johann-Dietrich Wörner, the new general director of the European Space Agency. Two quick excerpts follow, one about a moon colony and the other about the potential of a manned Mars voyage.

______________________________

Spiegel:

Which celestial body would you like to travel to most of all?

Johann-Dietrich Wörner:

My dream would be to fly to the moon and build permanent structures, using the raw materials available there. For instance, regolith, or moon dust, could be used to make a form of concrete. Using 3-D printers, we could build all kinds of things with that moon concrete — houses, streets and observatories, for example.

______________________________

Spiegel:

Wouldn’t it be a much more exciting challenge to hazard a joint, manned flight to Mars?

Johann-Dietrich Wörner:

Man will not give up the dream of walking on Mars, but it won’t happen until at least 2050. The challenges are too great, and we don’t have the technologies yet to complete this vast project. Most of all, a trip to Mars would take much too long today. It would be irresponsible, not just from a scientific standpoint, to send astronauts to the desert planet if they could only return after more than two years.•


Rachel Armstrong, a medical doctor who became an architect, wants to combine her twin passions, believing buildings can be created not only from plastics recovered from our waterways but also from biological materials. From Christopher Hume of the Toronto Star:

She also imagines using living organisms such as bacteria, algae and jellyfish as building materials. If that sounds far-fetched, consider the BIQ (Bio Intelligent Quotient) Building in Hamburg. Its windows are filled with water in which live algae that’s fed nutrients. When the sun comes out, the micro-organisms reproduce, raising the temperature of the water. BIQ residents say they love their new digs. It helps that they have no heating bills.

Armstrong then described how objects can be made of plastic dredged from the oceans. It could, she suggested, be a new source of material as well as a way to clean degraded waterways. Her basic desire is to make machinery more biological and unravel the machinery behind the biological. That means figuring out how bacteria talks to bacteria, how algae “communicate.” This isn’t new, of course, but this fusion draws closer all the time.

As that happens, she argues, “consumers can become producers.” In the meantime, the search for “evidence-based truth-seeking systems” continues.

Armstrong, who began her professional life as a doctor, credits her interest in architecture to the time she spent at a leper colony in India in the early ’90s. “What I saw was a different way of life,” she recalls. “I realized we need a more integrated way of being and living so we are at one with our surroundings.”•


In a Foreign Affairs essay, Martin Wolf has a retort for techno-optimists, contending that wearables are merely the emperor’s new clothes. One of his arguments I’m curious about concerns the statistical evidence that growth in output per worker has recently slowed. How, exactly, does automation fit into that equation? Technology would seem to only improve productivity among workers if it’s complementing, not replacing, them. I do think Wolf makes a great case that “unmeasured value” was a big part of life long before the Internet. The phonograph, after all, couldn’t be any more fully measured than the iPod. An excerpt:

…the pace of economic and social transformation has slowed in recent decades, not accelerated. This is most clearly shown in the rate of growth of output per worker. The economist Robert Gordon, doyen of the skeptics, has noted that the average growth of U.S. output per worker was 2.3 percent a year between 1891 and 1972. Thereafter, it only matched that rate briefly, between 1996 and 2004. It was just 1.4 percent a year between 1972 and 1996 and 1.3 percent between 2004 and 2012.

On the basis of these data, the age of rapid productivity growth in the world’s frontier economy is firmly in the past, with only a brief upward blip when the Internet, e-mail, and e-commerce made their initial impact.

Those whom Gordon calls “techno-optimists”—Erik Brynjolfsson and Andrew McAfee of the Massachusetts Institute of Technology, for example—respond that the GDP statistics omit the enormous unmeasured value provided by the free entertainment and information available on the Internet. They emphasize the plethora of cheap or free services (Skype, Wikipedia), the scale of do-it-yourself entertainment (Facebook), and the failure to account fully for all the new products and services. Techno-optimists point out that before June 2007, an iPhone was out of reach for even the richest man on earth. Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indexes. Moreover, say the techno-optimists, the “consumer surplus” in digital products and services—the difference between the price and the value to consumers—is huge. Finally, they argue, measures of GDP underestimate investment in intangible assets.

These points are correct. But they are nothing new: all of this has repeatedly been true since the nineteenth century. Indeed, past innovations generated vastly greater unmeasured value than the relatively trivial innovations of today. Just consider the shift from a world without telephones to one with them, or from a world of oil lamps to one with electric light. Next to that, who cares about Facebook or the iPad? Indeed, who really cares about the Internet when one considers clean water and flushing toilets?•
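Compounding Gordon’s rates shows what’s at stake. A quick sketch in Python, using only the growth figures quoted above:

# Compounding Robert Gordon's growth rates for U.S. output per worker,
# as quoted in the excerpt above.
eras = [
    ("1891-1972", 0.023),
    ("1972-1996", 0.014),
    ("1996-2004", 0.023),   # the brief return to the old rate
    ("2004-2012", 0.013),
]
for label, rate in eras:
    start, end = (int(year) for year in label.split("-"))
    factor = (1 + rate) ** (end - start)
    print(f"{label}: {rate:.1%}/yr -> output per worker multiplied by {factor:.2f}")

At 2.3 percent a year, output per worker multiplies more than sixfold over those first eight decades; at 1.3 percent, it would merely triple over the same span. That’s the gap the skeptics are pointing at.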


With the Astros having been on the receiving end of the lowest-tech breach imaginable, here’s a re-post of a 2014 Houston Chronicle piece which focused on “Ground Control,” the computer system that was helping baseball’s most tech-friendly front office rebuild the then-woeful club.

By the time Moneyball was adapted for the screen, the sport had already moved on to next-level analytics, a steady stream of data that keeps bending around new corners. One of this year’s global improvements, showcased at the MIT Sloan Sports Analytics Conference, will be the exceptionally close reading of fielders’ body movements while they make plays, but each “nation,” each team, has its own mechanism for measuring every aspect of the game. From Evan Drellich’s article about “Ground Control,” the database that GM Jeff Luhnow is hoping will help reverse the fortunes of the grounded Houston Astros:

One of Luhnow’s favorite songs is David Bowie’s “Space Oddity,” with the lyrics, “This is ground control to Major Tom.” He happens to be a big Bowie fan and joked that the tune should play every time the site is accessed.

“That was during my formative years,” Luhnow said of his affinity for Bowie.

The project itself is permanently in a formative state. There are constantly new features and abilities to add, and what makes Ground Control so powerful is its customizability.

Teams don’t have to build their own databases. When Luhnow arrived, the club used a popular system sold by Bloomberg Sports, and it kept using Bloomberg while Ground Control was built.

Priority No. 1 for the club was getting Ground Control up in time for that year’s amateur draft. Just like this year and 2013, the Astros had the first overall pick in 2012.

By the end of 2012, or maybe early 2013, Ground Control had reached a fully functional state, although that’s a disingenuous characterization considering it’s perpetually in flux.

“The analytical engine is separate from the interface, so there was a lot of work going on developing the database and developing the interface,” Luhnow said. “The database you have to build right away, because you can’t analyze without having the data in the right format. The priorities were the database first, then the analytical engine, and the interface was a third priority.”•
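Luhnow’s build order (database first, then the analytical engine, then the interface) is classic layered design. Here’s a toy sketch of that separation in Python; the stats and names are my own illustrative assumptions, not anything from Ground Control:

# Layer 1: the database -- the data in the right format, built first.
players = [
    {"name": "Player A", "hits": 172, "at_bats": 540},
    {"name": "Player B", "hits": 140, "at_bats": 510},
]

# Layer 2: the analytical engine, which knows nothing about display.
def batting_average(player):
    return player["hits"] / player["at_bats"]

# Layer 3: the interface -- presentation only, swappable without
# touching the layers below.
for p in sorted(players, key=batting_average, reverse=True):
    print(f"{p['name']}: {batting_average(p):.3f}")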

If you’re looking for an optimistic rejoinder to the concern about mass technological unemployment, there’s “The Robots Are Coming,” a Foreign Affairs piece by MIT computer scientist Daniela Rus that looks at the future through rose-colored Google Glasses. Rus believes driverless cars and robotic assistants will be potent elements of the economy soon enough–something those who worry about automation concur with–but her contention is that these machines will co-exist with workers instead of replacing them and even create many new jobs. I doubt the former but the latter is certainly possible. 

The writer sees a future in which “people may wake up in the morning and send personal-shopping robots to the supermarket to bring back fruit and milk for breakfast.” Rus offers no precise timeframe for when these silicon servants will begin appearing, which is probably wise.

The opening:

Robots have the potential to greatly improve the quality of our lives at home, at work, and at play. Customized robots working alongside people will create new jobs, improve the quality of existing jobs, and give people more time to focus on what they find interesting, important, and exciting. Commuting to work in driverless cars will allow people to read, reply to e-mails, watch videos, and even nap. After dropping off one passenger, a driverless car will pick up its next rider, coordinating with the other self-driving cars in a system designed to minimize traffic and wait times—and all the while driving more safely and efficiently than humans.

Yet the objective of robotics is not to replace humans by mechanizing and automating tasks; it is to find ways for machines to assist and collaborate with humans more effectively. Robots are better than humans at crunching numbers, lifting heavy objects, and, in certain contexts, moving with precision. Humans are better than robots at abstraction, generalization, and creative thinking, thanks to their ability to reason, draw from prior experience, and imagine. By working together, robots and humans can augment and complement each other’s skills.

Still, there are significant gaps between where robots are today and the promise of a future era of “pervasive robotics,” when robots will be integrated into the fabric of daily life, becoming as common as computers and smartphones are today, performing many specialized tasks, and often operating side by side with humans. Current research aims to improve the way robots are made, how they move themselves and manipulate objects, how they reason, how they perceive their environments, and how they cooperate with one another and with humans.

Creating a world of pervasive, customized robots is a major challenge, but its scope is not unlike that of the problem computer scientists faced nearly three decades ago, when they dreamed of a world where computers would become integral parts of human societies. In the words of Mark Weiser, a chief scientist at Xerox’s Palo Alto Research Center in the 1990s, who is considered the father of so-called ubiquitous computing: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computers have already achieved that kind of ubiquity. In the future, robots will, too.•



Uber isn’t good for Labor, no matter how much Travis Kalanick tries to convince us, but the company and other rideshares might be a boon in other ways beyond useful technological innovations. I argued last year that these services could provide options to those who’ve traditionally been shortchanged by predatory and racist taxi drivers. Of course, bigotry is a deep and enduring wound, and the digital realm isn’t impervious to it.

From Jenna Wortham’s smart Medium essay “Ubering While Black“:

I’ve endured humiliating experiences trying to get a cab in the various cities I’ve visited and lived in. Available taxis—as indicated by their roof lights—locked their doors with embarrassingly loud clicks as I approached. Or they’ve just ignored my hail altogether. It’s largely illegal for cab drivers to refuse a fare, but that rarely deters them, because who’s going to take the time to file a report? And once, horrifyingly, while I was in San Francisco, a taxi driver demanded I exit his car. Fed up, I stubbornly refused, so he hopped out of his seat, walked around to my side, and yanked me out.

After that last incident, which happened a few years ago, I avoided cabs altogether. I stuck to riding public transportation, and rented cars when I traveled.

In 2011, I covered Uber’s debut in New York. The service, then a scrappy start-up, promised to let people request rides from private cars and taxis with a smartphone application. It initially seemed like a hard sell in a city resplendent with transit options, but I quickly found myself using it more frequently, especially when I traveled back to San Francisco.

Latoya Peterson, the founder of a site called Racialicious, first blogged about her experiences with Uber in 2012, wondering whether or not the technology could be a panacea for the discrimination she experienced trying to hail cabs.

“The premium car service removes the racism factor when you need a ride,” she wrote. Peterson, who lives in D.C., said that since her original post, she has taken “hundreds of rides” with Uber. “The Uber experience is just so much easier for African-Americans,” she told me recently. “There’s no fighting or conversation. When I need a car, it comes. It takes me to my destination. It’s amazing that I have to pay a premium for that experience, but it’s worth it.”

Even though requesting a car through Uber can cost more than a regular taxi, Peterson and I are each usually willing to pay extra to avoid potential humiliation.•


Online video exploded because the YouTube founders didn’t wait for legislation to catch up to technology and just went ahead with their plans. That’s led to great things and bad things for content. Google similarly began testing driverless cars on public streets before laws were established governing them. It’s difficult to believe at this point that any auto (or auto-software) manufacturer, in Detroit or Silicon Valley, would risk flouting the growing body of legislation governing driverless vehicles. But other transportation innovations will arrive at a surprisingly brisk pace because laws haven’t yet anticipated them.

From “Tipping Point in Transit” by Farhad Manjoo at the New York Times:

Communication systems and sensors installed in streets and cars are creating the possibility of intelligent roads, while newer energy systems like solar power are altering the environmental costs of getting around. Technology is also creating new transportation options for short distances, like energy-efficient electric-powered bikes and scooters, or motorcycles that can’t tip over.

“Cars and transportation will change more in the next 20 years than they’ve changed in the last 75 years,” said M. Bart Herring, the head of product management at Mercedes-Benz USA. “What we were doing 10 years ago wasn’t that much different from what we were doing 50 years ago. The cars got more comfortable, but for the most part we were putting gas in the cars and going where we wanted to go. What’s going to happen in the next 20 years is the equivalent of the moon landing.”

Mr. Herring is one of many in the industry who say that we are on the verge of a tipping point in transportation. Soon, getting around may be cheaper and more convenient than it is today, and possibly safer and more environmentally friendly, too.

But the transportation system of the near future may also be more legally complex and, given the increasing use of private systems to get around, more socially unequal. And, as in much of the rest of the tech industry, the moves toward tomorrow’s transportation system may be occurring more rapidly than regulators and social norms can adjust to them.

“All the things that we think will happen tomorrow, like fully autonomous cars, may take a very long time,” said Bryant Walker Smith, an assistant professor at the University of South Carolina School of Law who studies emerging transportation systems. “But it’s the things we don’t even expect that will happen really fast.”•


The near-term future of automation isn’t dramatic like the new Channel 4-AMC show Humans. There’ll be no Uncanny Valley to disorient us, just a downward slope. No struggle for dominance–it’s been decided. Tomorrow won’t look unsettlingly sort of like you and me. It will look nothing like us at all.

An entire team of Australian dockworkers has been disappeared by machines in the last two months. From Jacob Saulwick at the Sydney Morning Herald:

At Sydney’s Port Botany, every hour of every day, the robots are dancing.

Well, they look like they are dancing – these 45 so-called AutoStrads, or automated straddles, machines that have taken on the work that until a couple of months ago was at least in part performed by dockworkers.

Almost 20 years ago, the Patrick container terminal at Botany played host to one of the most divisive industrial battles in Australian history, as the stevedoring company attempted to break the back of its union-dominated workforce.

In some respects that battle was won in April.

It was then that Patrick introduced, following a four-year investment program, a level of automation into its stevedoring operation that might be unsurpassed in the world.

“This is fully automated, there are no human beings, literally from the moment this truck driver stepped out of his cabin from then onwards this AutoStrad will take it right through the quay line without any humans interfacing at all,” Alistair Field, the managing director of Patrick Terminals and Logistics, a division of Asciano, said on Wednesday.•


Hod Lipson loves robots, but love is complicated. 

The robotics engineer is among the growing chorus of those concerned about technological unemployment leading to social unrest, something Norbert Wiener warned of more than 60 years ago. Is it, at long last, in this Digital Age, happening?

In a long-form MIT Technology Review article, David Rotman wonders if the new technologies may be contributing to wealth inequality and could ultimately lead to an even greater divide, while considering the work of analysts on both sides of the automation issue, including Sir Tony Atkinson, Martin Ford, Andrew McAfee and David Autor. The opening:

The way Hod Lipson describes his Creative Machines Lab captures his ambitions: “We are interested in robots that create and are creative.” Lipson, an engineering professor at Cornell University (this July he’s moving his lab to Columbia University), is one of the world’s leading experts on artificial intelligence and robotics. His research projects provide a peek into the intriguing possibilities of machines and automation, from robots that “evolve” to ones that assemble themselves out of basic building blocks. (His Cornell colleagues are building robots that can serve as baristas and kitchen help.) A few years ago, Lipson demonstrated an algorithm that explained experimental data by formulating new scientific laws, which were consistent with ones known to be true. He had automated scientific discovery.

Lipson’s vision of the future is one in which machines and software possess abilities that were unthinkable until recently. But he has begun worrying about something else that would have been unimaginable to him a few years ago. Could the rapid advances in automation and digital technology provoke social upheaval by eliminating the livelihoods of many people, even as they produce great wealth for others?

“More and more computer-guided automation is creeping into everything from manufacturing to decision making,” says Lipson. In the last two years alone, he says, the development of so-called deep learning has triggered a revolution in artificial intelligence, and 3-D printing has begun to change industrial production processes. “For a long time the common understanding was that technology was destroying jobs but also creating new and better ones,” says Lipson. “Now the evidence is that technology is destroying jobs and indeed creating new and better ones but also fewer ones. It is something we as technologists need to start thinking about.”•


A couple months ago, I posted some exchanges from a Reddit Ask Me Anything conducted by a nonagenarian from Stuttgart who came of age during the rise of Nazism and even briefly met Adolf Hitler. What struck me about her attitude is that she didn’t seem to embrace her own culpability as a worker for the Nazi cause, something I’ve noticed over the years with other German citizens who grew up on the wrong side of World War II. It’s like they never fully processed the horrors that occurred–they were completely brainwashed but only partially deprogrammed–and some even seem to still harbor a degree of admiration for Hitler. It’s just stunning.

An Associated Press piece by Frank Jordans reports on a new study that gives credence to the worst fears about Germans of that generation, revealing that those indoctrinated into Nazism during their wonder years retained feelings of anti-Semitism. The effect was most pronounced in areas where anti-Semitism had been exhibited before the Nazis solidified power.

The opening:

BERLIN (AP) — Anti-Semitic propaganda had a life-long effect on German children schooled during the Nazi period, leaving them far more likely to harbor negative views of Jews than those born earlier and later, according to a study published Monday.

The findings indicate that attempts to influence public attitudes are most effective when they target young people, particularly if the message confirms existing beliefs, the authors said.

Researchers from the United States and Switzerland examined surveys conducted in 1996 and 2006 that asked respondents about a range of issues, including their opinions of Jews. The polls, known as the German General Social Survey, reflected the views of 5,300 people from 264 towns and cities across Germany, allowing the researchers to examine differences according to age, gender and location.

By focusing on those respondents who expressed consistently negative views of Jews in a number of questions, the researchers found that those born in the 1930s held the most extreme anti-Semitic opinions – even fifty years after the end of Nazi rule.

“It’s not just that Nazi schooling worked, that if you subject people to a totalitarian regime during their formative years it will influence the way their mind works,” said Hans-Joachim Voth of the University of Zurich, one of the study’s authors. “The striking thing is that it doesn’t go away afterward.”•



More information readily available to us–more than we ever dreamed we could possess–has not clearly improved our decision-making process. Why? Perhaps, like Chauncey Gardiner, we like to watch, but what we really love is to see what we want to see. Or maybe we just can’t assimilate the endless reams of virtual data.

In an Ask Me Anything at Reddit, behavioral economist Richard Thaler, who’s just published Misbehaving, has an interesting idea: What about online decision engines that help with practical problems the way Expedia does with travel itineraries? Not something feckless like the former Ask Jeeves, but a machine wiser and deeper.

Such a nudge would bring about all sorts of ethical questions. Should we be offloading decisions (or even a significant part of them) to algorithms? Are the people writing the code manipulating us? But it would be a fascinating experiment.

The exchange:

Question:

Do you think, with rapid advances in data collection, machine learning, ubiquity of technology that lowers barrier for precise calculation/ data interpretation etc, consumers/ humans will start to behave more like Econs? Do you think that would be OPTIMAL, i.e in our best interests? It seems a big ‘flaw’ in AI/ robotics right now is that they are not ‘human like,’ i.e. they are too much like Econs, they make no mistakes and always make optimal choices. Do you think it’s more optimal for human to become more like robots/ machine that make no ‘irrational’ errors? Do you think it would eventually become that way when technology makes it much lower efforts to actually evaluate rather than rely on intuitive heuristics?

Richard Thaler:

Two parts to this.

One is: I’ve long advocated using big data to help people make better decisions, an effort i call “smart disclosure.” I’ve a couple of New York Times columns devoted to this topic. The idea is that by making better data available, we can create new businesses that I call “choice engines.”

Think of them like travel websites, that would make, say, choosing a mortgage as easy as finding a plane ticket from New York to Chicago.

More generally, however, the goal is not to turn humans into Econs. Econs (*not economists) are jerks.

Econs don’t leave tips at restaurants they never intend to go back to. Don’t contribute to NPR. And don’t bother to vote.•
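What might one of Thaler’s “choice engines” actually do for mortgages? A minimal sketch; the offers and the total-cost ranking rule are my own illustrative assumptions, since Thaler specifies no such details:

# A toy "choice engine": given machine-readable mortgage offers (Thaler's
# "smart disclosure"), rank them the way a travel site ranks flights.

def total_cost(principal, annual_rate, years, fees):
    """Total paid over the life of a standard fixed-rate mortgage."""
    r = annual_rate / 12                  # monthly interest rate
    n = years * 12                        # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * n + fees

offers = [
    {"lender": "Lender A", "rate": 0.042, "years": 30, "fees": 3_000},
    {"lender": "Lender B", "rate": 0.039, "years": 30, "fees": 9_000},
    {"lender": "Lender C", "rate": 0.045, "years": 15, "fees": 1_000},
]

principal = 300_000
for offer in sorted(offers, key=lambda o: total_cost(principal, o["rate"], o["years"], o["fees"])):
    cost = total_cost(principal, offer["rate"], offer["years"], offer["fees"])
    print(f"{offer['lender']}: ${cost:,.0f} paid over the life of the loan")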

 


In a recent interview conducted by Wait But Why writer Tim Urban, Elon Musk discussed his misgivings about genetic engineering (e.g., the Nazi connection). But a hammer is a tool or a weapon depending on how you swing it, and modifying genes could cure or even end an assortment of horrible diseases, especially rare ones, which never receive adequate funding to make a cure possible.

At her blog, biology of aging specialist Maria Konovalenko offers a riposte to Musk and other doubters. The opening:

When I hear that the conversation is about an ethical problem I anticipate that right now the people are going to put everything upside down and end with common sense. Appealing to ethics has always been the weapon of conservatism, the last resort of imbecility.

How does it work? At the beginning you have some ideas, but in the end it’s always a “no.” The person speaking on the behalf of ethics or bioethics is always against the progress, because he or she is being based on their own conjectures. What if the GMO foods will crawl out of the garden beds and eat us all? What if there will be inequality when some will use genetic engineering for their kids and some won’t? Let’s then close down the schools and universities – the main source of inequality. What if some will get the education and other won’t?

That’s exactly the position that Elon Musk took by fearing the advances in genetic engineering. Well, first of all, there already is plenty of inequality. It is mediated by social system, limited resources and genetic diversity. First of all, why should we strive for total equality? More precisely, why does the plank of equality have to be based on a low intellectual level? How bad is a world where the majority of people are scientists? How bad is a world where people live thousands of years and explore deep space? It’s actually genetic engineering that gives us these chances. From the #ethics point of view things are vice versa. It’s refusing the very possibility of helping people that is a terrible deed. Let’s not improve a person, because if we do what if this person becomes better than everybody else? Let’s not treat this person, because if we do he might live longer than everybody else? Isn’t this complete nonsense?•


The DARPA Grand Challenge of 2004 quickly resulted in private companies investing heavily in driverless. Will the agency’s recently completed Robotics Challenge lead to a similar public-to-private shift? Gill Pratt, the outgoing DARPA program manager who ran the challenge, explains to Sam Thielman of the Guardian how he believes that will occur. An excerpt:

Question:

Google, Daimler and Uber all have self-driving cars now; how do you anticipate humanoid robots reaching the private sector?

Gill Pratt:

I think the next big thing to conquer is cost. All the prototypes that you saw were in the hundreds of thousands to millions of dollars. And once a market is identified, whether it’s in manufacturing or agriculture or ageing society, once someone kind of finds the match between the technology and the market, the costs will go way down, and that will be an amazing thing. The next neat thing that’s going to happen is cloud robotics: that’s where when one robot learns something they all learn something.

Let’s say you have a group of robots used for ageing society and their job is to clean up within your house. As each machine does its work, eventually one of them will come across an object and not know what it is, and it’ll reach out to the cloud, through the internet, and say: “Does anyone know what this thing is?” Let’s say that no one does. Then it’ll reach out to a person and the person will say: “Oh, that’s a jar of oil, and that belongs in the cupboard next to the jar of vinegar.” And the robot will say: “Got it!” And now every single one of them knows. In this way, you can bootstrap up the confidence of all the machines throughout the world. I think that will be the next technology.•
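The loop Pratt describes (query the shared cloud, fall back to a person once, and every robot learns the answer) is simple enough to sketch. A minimal version in Python, with every name invented for illustration:

# One shared knowledge base, a one-time human fallback, and every robot
# benefiting from each answer.
cloud = {}   # shared knowledge: object id -> (label, where it belongs)

def identify(object_id, ask_human):
    """Identify an object, escalating to a person only if no robot has seen it."""
    if object_id not in cloud:
        cloud[object_id] = ask_human(object_id)   # "Does anyone know what this is?"
    return cloud[object_id]

# The first robot hits an unknown object, and a person answers once...
print(identify("jar-042", lambda _: ("jar of oil", "cupboard, next to the vinegar")))

# ...from then on, every robot that asks the cloud already knows.
print(identify("jar-042", ask_human=None))   # no human consulted the second time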


A system based on prefabricated modules has allowed Zhang Yue to construct skyscrapers at heretofore unimaginable speed. They might not be beautiful, but they are green and relatively inexpensive. As the buildings grow taller, the developer’s dreams grow wider. From Finn Aberdein at the BBC:

The revolution will be modular, Zhang insists. Mini Sky City was assembled from thousands of factory-made steel modules, slotted together like Meccano.

It’s a method he says is not only fast, but also safe and cheap.

Now he wants to drop the “Mini” and use the same technique to build the world’s tallest skyscraper, Sky City.

While the current record holder, the 828m-high Burj Khalifa in Dubai, took five years to “top out”, Zhang says his proposed 220-storey “vertical city” will take only seven months – four for the foundations, and three for the tower itself.

And it will be 10m taller.

But if that was not enough, Zhang Yue wants nothing less than to reimagine the whole urban environment.

He has a vision of a future where his company makes a third of the world’s buildings – all modular, all steel, and all green.

“The biggest problem we face in the world right now isn’t terrorism or world war. It’s climate change,” he says.•


As I stated recently, humanoid robots aren’t likely to experience the head-spinning progress driverless cars have enjoyed because they’re more complicated machines that need to move in every direction, not just forward and reverse, and they require smart limbs as well as rolling wheels. But having numerous multibillion-dollar corporations working on these hard problems is also an incredible force for improvement.

An Economist report on the recent DARPA Robotics Challenge notes the promise but also the frustrations of the field:

Running Man, the Atlas robot with which IHMC won second place, showed what the platform was capable of—though after completing its winning round it did rather let the side down by falling over as it struck a sequence of victorious poses. Jerry Pratt, who led the IHMC team, argues convincingly that, in principle, walking has huge advantages—a human can quite easily make progress along a discontinuous track no wider than a single foot, taking in its stride obstacles big enough to pose a problem to the wheels of anything short of a monster truck. But on the evidence of the DRC, the software and hardware needed to match that ability remain far off. For the time being, a robot designed for responding to DRC-style disasters looks likely to need an alternative to legs.

Getting a handle on how long that time being might be was another of the points of the DRC. Gill Pratt, who ran the programme at DARPA (and is no relation to IHMC’s Mr Pratt, though he did supervise his doctoral research) saw it as a way not just to stimulate progress in the field but also to gauge how quickly such progress could be made. Everyone involved in the DRC remembers the startling improvement between the first of DARPA’s “Grand Challenges” for autonomous vehicles—which, in 2004, saw the winning car travel just 11.8km of a 240km route—and the second, in 2005, in which five teams went the whole distance. That demonstration of rapidly expanding capabilities played a role in convincing people, such as the bosses of Google, that self-driving cars were a practical possibility in the not-too-distant future.

The progress between the first real-world DRC trials, in late 2013, and the finals this month was less spectacular. Humanoid robots are not yet at the ready-for-take-off point autonomous cars were at ten years ago. The teams using the Atlases knew that less than two years of working with their charges gave them time to implement little more than a simple ability to walk—one expert says that developing robust locomotion from the ground up and debugging it is more like a five-year job. There were also limitations with the hardware. Atlas’s arms were not strong enough to lift its 150kg bulk back up if it fell down.•

In an insightful and highly entertaining Business Insider profile of Elon Musk by Wait But Why writer Tim Urban, the technologist and journalist spend lunch discussing all manner of ideas: genetic engineering, superintelligence, consciousness, etc. Musk says he steers clear of genetic engineering because of its Nazi connection, and of course, he speaks of his desire to launch us into becoming a multi-planet species. An excerpt:

This guy has a lot on his mind across a lot of topics. In this one lunch alone, we covered electric cars, climate change, artificial intelligence, the Fermi Paradox, consciousness, reusable rockets, colonizing Mars, creating an atmosphere on Mars, voting on Mars, genetic programming, his kids, population decline, physics vs. engineering, Edison vs. Tesla, solar power, a carbon tax, the definition of a company, warping spacetime and how this isn’t actually something you can do, nanobots in your bloodstream and how this isn’t actually something you can do, Galileo, Shakespeare, the American forefathers, Henry Ford, Isaac Newton, satellites, and ice ages.

I’ll get into the specifics of what he had to say about many of these things in later posts, but some notes for now:

— He’s a pretty tall and burly dude. Doesn’t really come through on camera.

— He ordered a burger and ate it in either two or three bites over a span of about 15 seconds. I’ve never seen anything like it.

— He is very, very concerned about AI. I quoted him in my posts on AI saying that he fears that by working to bring about Superintelligent AI (ASI), we’re “summoning the demon,” but I didn’t know how much he thought about the topic. He cited AI safety as one of the three things he thinks about most—the other two being sustainable energy and becoming a multi-planet species, i.e. Tesla and SpaceX. Musk is a smart motherf—er, and he knows a ton about AI, and his sincere concern about this makes me scared.

— The Fermi Paradox also worries him. In my post on that, I divided Fermi thinkers into two camps—those who think there’s no other highly intelligent life out there at all because of some Great Filter, and those who believe there must be plenty of intelligent life and that we don’t see signs of any for some other reason. Musk wasn’t sure which camp seemed more likely, but he suspects that there may be an upsetting Great Filter situation going on. He thinks the paradox “just doesn’t make sense” and that it “gets more and more worrying” the more time that goes by. Considering the possibility that maybe we’re a rare civilization who made it past the Great Filter through a freak occurrence makes him feel even more conviction about SpaceX’s mission: “If we are very rare, we better get to the multi-planet situation fast, because if civilization is tenuous, then we must do whatever we can to ensure that our already-weak probability of surviving is improved dramatically.” Again, his fear here makes me feel not great.•


While it shocks me that test subjects in psychologist Solomon Asch’s experiments on conformity were at all swayed to ridiculous conclusions by groupthink, economist Tim Harford finds a silver lining in the cloud in his latest Financial Times column: Participants were independent more often than influenced. That’s true, but if a few minutes of suggestion can alter beliefs to a significant degree, what can longer-term and more subtle social pressures do?

From Harford:

Asch gave his subjects the following task: identify which of three different lines, A, B or C, was the same length as a “standard” line. The task was easy in its own right but there was a twist. Each individual was in a group of seven to nine people, and everyone else in the group was a confederate of Asch’s. For 12 out of 18 questions they had been told to choose, unanimously, a specific incorrect answer. Would the experimental subject respond by fitting in with the group or by contradicting them? Many of us know the answer: we are swayed by group pressure. Offered a choice between speaking the truth and saying something socially convenient, we opt for social convenience every time.

But wait — “every time”? In popular accounts of Asch’s work, conformity tends to be taken for granted. I often describe his research myself in speeches as an example of how easily groupthink can set in and silence dissent. And this is what students of psychology are themselves told by their own textbooks. A survey of these textbooks by three psychologists, Ronald Friend, Yvonne Rafferty and Dana Bramel, found that the texts typically emphasised Asch’s findings of conformity. That was in 1990 but when Friend recently updated his work, he found that today’s textbooks stressed conformity more than ever.

This is odd, because the experiments found something more subtle. It is true that most experimental subjects were somewhat swayed by the group. Fewer than a quarter of experimental subjects resolutely chose the correct line every time. (In a control group, unaffected by social pressure, errors were rare.) However, the experiment found that total conformity was scarcer than total independence. Only six out of 123 subjects conformed on all 12 occasions. More than half of the experimental subjects defied the group and gave the correct answer at least nine times out of 12. A conformity effect certainly existed but it was partial.•
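Harford’s arithmetic is worth making explicit. Below is a minimal Python sketch using only the counts quoted above; the one assumption is the 62, the smallest count consistent with “more than half” of 123 subjects answering correctly at least nine times out of 12.

# A back-of-the-envelope check of the figures Harford reports from Asch's
# experiments. TOTAL_SUBJECTS and FULLY_CONFORMING come straight from the
# excerpt; MOSTLY_INDEPENDENT_MIN is an assumed lower bound ("more than
# half" of 123 subjects gave the correct answer on at least 9 of 12 trials).

TOTAL_SUBJECTS = 123         # subjects who faced the 12 rigged questions
FULLY_CONFORMING = 6         # echoed the group's wrong answer all 12 times
MOSTLY_INDEPENDENT_MIN = 62  # smallest count that is "more than half" of 123

def share(count: int, total: int = TOTAL_SUBJECTS) -> float:
    """Return a count as a percentage of the subject pool."""
    return 100.0 * count / total

print(f"Total conformity:   {share(FULLY_CONFORMING):.1f}% of subjects")    # ~4.9%
print(f"Mostly independent: at least {share(MOSTLY_INDEPENDENT_MIN):.1f}%") # ~50.4%

Even at that conservative lower bound, the subjects who defied the group at least nine times out of 12 outnumber the total conformers roughly tenfold, which is precisely the subtlety Harford says the textbooks miss.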

_____________________________

An iteration of the Asch Experiment:

Tags:

If the heart is a lonely hunter, then the brain is a game of William Tell. It’s tough to hit the target, and sometimes missing can lead to horrible consequences.

From Tim Adams’ Guardian piece about neurologist Dr. Suzanne O’Sullivan’s new book concerning imaginary illnesses, It’s All in Your Head:

Some of its more avant-garde subjects have faced O’Sullivan in her treatment room. Her experience of this type of patient began when she was just qualified as a junior doctor, watching a woman she calls Yvonne being questioned by her consultant. Yvonne, after an accident in which she had been sprayed in the face with window-cleaning fluid, had convinced herself and her family that she was blind. After six months of tests doctors had found nothing wrong with her eyes. She was by this time on disability benefits with a full-time carer, unable to get around her house. O’Sullivan and her fellow junior doctors, certain she could see, found it hard to suppress giggles as Yvonne described her condition. They were reprimanded by the consultant. The cause of Yvonne’s blindness was psychological rather than physical – a response, it later seemed, to unbearable tensions in her marriage. It was to her no less real, however: she had subconsciously persuaded herself that she had lost her sight. After six months of psychiatric help and family counselling, O’Sullivan reports, Yvonne’s vision was restored.

It is O’Sullivan’s contention that “psychosomatic disorders are physical symptoms that mask emotional distress”. In the 19th century sufferers of such conditions were paraded by the celebrated neurologist Jean-Martin Charcot, who revealed to sold-out audiences how such states could be induced by suggestion and hypnosis. Even with fMRI scans and advances in neural imaging, the means by which thought alone can conjure physical pain is an unfathomable mystery. “One day a woman loses the power of speech entirely and the next she speaks in the voice of a child. A girl has a lump in her throat and becomes convinced she cannot swallow. Eyes close involuntarily and no amount of coaxing will open them.” Each of O’Sullivan’s patients is different; however, buried trauma or stress (itself an undefined cause and effect) seems often to be a trigger.

Tags:

Michael Crichton was a major part of the first wave of very educated Americans weaned on genre entertainments who moved B movies to the A-list and put pulp novels atop the New York Times bestseller list. All the while, he drew the ire of the science community by putting a spotlight on the Victor Frankenstein side of the laboratory, worrying about the Singularity long before the phrase came into vogue (Westworld), thinking about the value corporations might put on the things inside of us prior to Larry Page’s brain-implant dreams (Coma), and considering the perils of de-extinction (Jurassic Park).

The opening of Michael Weinreb’s terrific Grantland consideration of a bad writer who was also a great writer:

At the heart of nearly every Michael Crichton novel is the simplest of premises: a protagonist in trouble, losing control of his world, facing forces he can no longer contain. It’s not exactly a sophisticated plot device, but while Crichton could be a complex thinker in terms of subject matter and scientific inquiry, especially later in his career, he was also an utterly facile writer as far as sentence structure and characterization go. He wrote page-turners that aspired to dystopic realism, and because of this, he is still a polarizing figure whose literary legacy remains unsettled. He once said that scientists criticized him for co-opting their theories into fiction, and that book critics ripped him for writing bad prose.

But one might also argue that few writers in modern history have married high-concept ideas and base-level entertainment as well as Crichton did. His books are the ultimate union of the geeky and the pulpy. Which is why one of this summer’s surefire blockbusters, Jurassic World, and one of this fall’s signature HBO series, Westworld, are both based on ideas that originated in the mind of a man who died almost seven years ago.

♦♦♦

Start with, say, a handsome doctor lured by a beautiful woman to an island that is actually an experiment in the parameters of human need, run by a shadowy corporation that feeds people a drug that (for reasons unknown) turns their urine a bright and shiny blue. Or start with a vacationing playboy who finds himself trapped at a French villa by a surgeon who wields a scalpel as a weapon, like a James Bond villain. Or start with a heist gone wrong, or a madman wielding nerve gas and threatening to attack the Republican National Convention, or a doctor arrested and thrown in jail on charges of performing an illegal abortion.

Those are a few of the premises of the nine books Crichton wrote in the late 1960s and early 1970s under varied pseudonyms, when he wasn’t yet a full-time writer and was still playing around with what kind he’d want to be if and/or when he became one. In a way, these novels are the most fascinating experiments of his career, because they’re windows into his thought process, into his own angst about technology and humanity. They’re the demos and B-sides that eventually led to his first best-selling book, 1969’s The Andromeda Strain, about a microorganism run amok. And The Andromeda Strain eventually led to 1990’s Jurassic Park, the story of the dinosaurs run amok, the story that turned Crichton into one of the most famous writers on the planet.•

Tags:
