

If you like your human beings to come with fingers and toes, you may be disquieted by the Reddit Ask Me Anything conducted by Andrew Hessel, a futurist and a “biotechnology catalyst” at Autodesk. It’s undeniably one of the smartest and headiest AMAs I’ve ever read, with the researcher fielding questions about a variety of flaws and illnesses plaguing people that biotech may be able to address, even eliminate. Of course, depending on your perspective, humanness itself can be seen as a failing, something to be “cured.” A few exchanges follow.


Four questions, all on the same theme:

1) What is the probable timeframe for when we’ll see treatments that will slow senescence?

2) Do we have a realistic estimate, in years or decades (or bigger, fingers crossed!), on the life extension potential of such treatments?

3) Is it realistic that such treatments would also include senescence reversal (de-ageing)?

4) Is there any indication at present as to what kind of form these treatments will take, particularly with regards to their invasiveness?

Andrew Hessel:

1) We are already seeing some interesting results here — probably the most compelling I’ve seen is in programming individually senescent cells to die. More work needs to be done.

2) In humans, no. We are already long-lived. Experiments that lead to longer life can’t be rushed — the results come at the end!

3) TBD — but I can’t see why not.

4) Again, TBD, but I think it will involve tech like viruses and nanoparticles that can target cells/tissue with precision.

Overall, trying to extend our bodies may be throwing good effort at a bad idea. In some ways, the important thing is to be able to extract and transfer experience and memory (data). We do this when we upgrade our phones, computers, etc.


Can Cas9/CRISPR edit any gene that controls physical appearance in an adult human? Say, for example, it’s the gene that controls the growth of a tail: will reactivating it actually cause a tail to grow in an already mature human?

Andrew Hessel:

It’s a powerful editing technology that could potentially allow changing appearance. The problem is editing a fully developed organism is new territory. Also, there’s the challenges of reprogramming millions or billions of cells! But it’s only a 4 year old technology, lots of room to explore and learn.


I’m an artist who’s curious about using democratized genetic engineering techniques (i.e. CRISPR) to make new and aesthetically interesting plant life, like roses the size of sunflowers or lilies and irises in shapes and colors nobody has ever seen. Is this something that is doable by a non-scientist with the tools and understanding available today? I know there are people inserting phosphorescence into plant genes – I’d like to go one further and actually start designing flowers, or at least mucking around with the code to see what kinds of (hopefully) pretty things emerge. I’d love your thoughts on this… Thanks!

Andrew Hessel:

I think it’s totally reasonable to start thinking about this. CRISPR allows for edits of genomes and using this to explore size/shape/color etc of plants is fascinating. As genome engineering (including whole genome synthesis) tech becomes cheaper and faster, doing more extensive design work will be within reach. The costs need to drop dramatically though — unless you’re a very rich artist. :-) As for training, biodesign is already so complicated that you need software tools to help. The software tools are going to improve a lot in the future, allowing designers to focus more on what they want to make, rather than the low level details of how to make it. But we still have a ways to go on this front. We still don’t have great programming tools for single cells, let alone more complex organisms. But they WILL come.


So my question is, do you think there will be a “biological singularity,” similar to Ray Kurzweil’s “technological singularity?”

Will there be a time in the near future where the exponential changes in genetic engineering (synthetic biology, dna synthesis, genome sequencing, etc.) will have such a profound impact on human civilization that it is difficult to predict what the future will be like?

Andrew Hessel:

I think it’s already hard to figure out where the future is going. Seriously. Who would have predicted politics to play out this way this year? But yes, I think Kurzweil calls it right that the combination of accelerating computation, biotech, etc. creates a technological future that is hard to imagine. This said, I don’t think civilization will change that quickly. Computers haven’t changed the fundamentals of life, just the details of how we go about our days. Biotech changes can be no less profound, but they take longer to code, test, and implement. Overall, though, I think we come out of this century with a lot more capabilities than we brought into it!•



Desperation sounds funny when expressed in words. A scream would probably be more coherent.

Nobody really knows how to remake newspapers and magazines of a bygone era to be profitable in this one, and the great utility they provided–the Fourth Branch of Government the traditional media was called–is not so slowly slipping away. What’s replaced much of it online has been relatively thin gruel, with the important and unglamorous work of covering local politics unattractive in a viral, big-picture machine.

All I know is when Condé Nast is using IBM’s Watson to help advertisers “activate” the “right influencers” for their “brands,” we’re all in trouble.

From The Drum:

With top titles like Vogue, Vanity Fair, Glamour and GQ, Conde Nast’s partnership heralds a key step merging targeted influencer marketing and artificial intelligence in the fashion and lifestyle industry. The platform will be used to help advertiser clients improve how they connect with audiences over social media and gain measurable insights into how their campaigns resonate.

“Partnering with Influential to leverage Watson’s cognitive capabilities to identify the right influencers and activate them on the right campaigns gives our clients an advantage and increases our performance, which is paramount in today’s distributed content world,” said Matt Starker, general manager, digital strategy and initiatives at Condé Nast. “We engage our audiences in innovative ways, across all platforms, and this partnership is another step in that innovation.”

By analyzing unstructured data from an influencer’s social media feed and identifying key characteristics that resonate with a target demographic, the Influential platform uses IBM’s personality insights to match an influencer to, for example, a beauty brand that focuses on self-enhancement, imagination and trust. This analysis helps advertisers identify the right influencers by homing in on previously hard-to-measure metrics–like how they are perceived by their followers, and how well their specific personality fits the personality of the brand.•


From the July 31, 1928 Brooklyn Daily Eagle:




In a Literary Review piece about Kyle Arnold’s new title, The Divine Madness Of Philip K. Dick, Mike Jay, who knows a thing or two about the delusions that bedevil us, writes about the insane inner world of the speed-typing, speed-taking visionary who lived during the latter stages of his life, quite appropriately, near the quasi-totalitarian theme park Disneyland, a land where mice talk and corporate propaganda is endlessly broadcast. Dick was a hypochondriac about the contents of his head, and it’s no surprise his life was littered with amphetamines, anorexia and anxiety, which drove his brilliance and abbreviated it.

The opening:

Across dozens of novels and well over a hundred short stories, Philip K Dick worried away at one theme above all others: the world is not as it seems. He worked through every imaginable scenario: consensus reality was variously a set of implanted memories, a drug-induced hallucination, a time slip, a covert military simulation, an illusion projected by mega-corporations or extraterrestrials, or a test set by God. His typical protagonist was conspired against, drugged, hypnotised, paranoid, schizophrenic – or, possibly, the only person in possession of the truth.

The preoccupation all too clearly reflected the author’s life. Dick was a chronic doubter, tormented, like René Descartes, by the suspicion that the world was the creation of an evil demon ‘who has directed his entire effort to misleading me’. But cogito ergo sum was not enough to rescue someone who in 1972, during one of his frequent bouts of persecution mania, called the police to confess to being an android. Dick took scepticism to a level that he made his own. It became his brand, and since his death it has been franchised across popular culture. He isn’t credited on Hollywood blockbusters such as The Matrix (in which reality is a simulation created by machines from the future) or The Truman Show (about a reality TV programme in which all but the protagonist are complicit), but their mind-bending plot twists are his in all but name.

As Kyle Arnold acknowledges early in his lucid and accessible study, it would be impossible to investigate the roots of Dick’s cosmic doubt more doggedly than he did himself. He was ‘his own best psychobiographer’…


Some sort of survival mechanism allows us to forget the full horror of a tragedy, and that’s a good thing. That fading of facts makes it possible for us to go on. But it’s dangerous to be completely amnesiac about disaster.

Case in point: In 2014, Barry Diller announced plans to build a lavish park off Manhattan at the pier where Titanic survivors came to shore. Dial back a little over two years to another waterlogged disaster, when Hurricane Sandy struck the city, and imagine such an island scheme even being suggested then. The wonder at that point was whether Manhattan was long for this world. Diller’s designs don’t sound much different than the captain of the supposedly unsinkable ship ordering a swimming pool built on the deck after the ship hit an iceberg.

In New York magazine, Andrew Rice provides an excellent profile of scientist Klaus Jacob, who believes NYC, as we know it, has no future. The academic could be wrong, but if he isn’t, his words about the effects of Irene and Sandy are chilling: “God forbid what’s next.”

The opening:

Klaus Jacob, a German professor affiliated with Columbia University’s Lamont-Doherty Earth Observatory, is a geophysicist by profession and a doomsayer by disposition. I’ve gotten to know him over the past few years, as I’ve sought to understand the greatest threat to life in New York as we know it. Jacob has a white beard and a ponderous accent: Imagine if Werner Herzog happened to be a renowned expert on disaster risk. Jacob believes most people live in an irrational state of “risk denial,” and he takes delight in dispelling their blissful ignorance. “If you want to survive an earthquake, don’t buy a brownstone,” he once cautioned me, citing the catastrophic potential of a long-dormant fault line that runs under the city. When Mayor Bloomberg announced nine years ago an initiative to plant a million trees, Jacob thought, That’s nice — but what about tornadoes?

For the past 15 years or so, Jacob has been primarily preoccupied with a more existential danger: the rising sea. The latest scientific findings suggest that a child born today in this island metropolis may live to see the waters around it swell by six feet, as the previously hypothetical consequences of global warming take on an escalating — and unstoppable — force. “I have made it my mission,” Jacob says, “to think long term.” The life span of a city is measured in centuries, and New York, which is approaching its fifth, probably doesn’t have another five to go, at least in any presently recognizable form. Instead, Jacob has said, the city will become a “gradual Atlantis.”

The deluge will begin slowly, and irregularly, and so it will confound human perceptions of change. Areas that never had flash floods will start to experience them, in part because global warming will also increase precipitation. High tides will spill over old bulkheads when there is a full moon. People will start carrying galoshes to work. All the commercial skyscrapers, housing, cultural institutions that currently sit near the waterline will be forced to contend with routine inundation. And cataclysmic floods will become more common, because, to put it simply, if the baseline water level is higher, every storm surge will be that much stronger. Now, a surge of six feet has a one percent chance of happening each year — it’s what climatologists call a “100 year” storm. By 2050, if sea-level rise happens as rapidly as many scientists think it will, today’s hundred-year floods will become five times more likely, making mass destruction a once-a-generation occurrence. Like a stumbling boxer, the city will try to keep its guard up, but the sea will only gain strength.•
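The storm-frequency arithmetic in the excerpt is easy to sanity-check. In this sketch the 1 percent annual chance and the fivefold multiplier come from the passage; the 30-year window is my own illustrative choice:

```python
# Sanity check of the "100-year storm" arithmetic quoted above.
# p_today and the 5x multiplier come from the excerpt; the 30-year
# horizon is an illustrative assumption.

p_today = 0.01        # 1% annual chance of a six-foot surge today
p_2050 = 5 * p_today  # five times likelier by 2050, per the projection

# Expected recurrence interval is the reciprocal of the annual probability.
print(round(1 / p_today))   # 100 -- the "100-year" storm
print(round(1 / p_2050))    # 20 -- roughly once a generation

# Chance of at least one such flood over a 30-year span:
print(round(1 - (1 - p_today) ** 30, 2))  # 0.26
print(round(1 - (1 - p_2050) ** 30, 2))   # 0.79
```

A 20-year recurrence interval is what turns a "100-year" event into the once-a-generation occurrence the excerpt describes.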



Before robots take our jobs, a more mechanical form of human will handle many of them. In fact, it’s already happening.

The new connectedness and tools have allowed for certain types of employment to be shrunk if not disappeared. It’s true whether your collar is blue or white, whether you have a job or career, if you’re a taxi driver or the passenger being transported to a fancy office.

“Meatware”–a term which perfectly sums up a faceless type of human labor pool–reduces formerly high-paying positions to tasks any rabbit can handle. It’s a race to the bottom, where there’s plenty of room, with the winners also being losers.

In Mark Harris’ insightful Backchannel article, the writer hired some Mechanical Turk-ers to explore the piecework phenomenon. The opening:

Harry K. sits at his desk in Vancouver, Canada, scanning sepia-tinted swirls, loops and blobs on his computer screen. Every second or so, he jabs at his mouse and adds a fluorescent dot to the image. After a minute, a new image pops up in front of him.

Harry is tagging images of cells removed from breast cancers. It’s a painstaking job but not a difficult one, he says: “It’s like playing Etch A Sketch or a video game where you color in certain dots.”

Harry found the gig on Crowdflower, a crowdworking platform. Usually that cell-tagging task would be the job of pathologists, who typically start their careers with annual salaries of around $200,000 — an hourly wage of about $80. Harry, on the other hand, earns just four cents for annotating a batch of five images, which takes him between two and eight minutes. His hourly wage is about 60 cents.

Granted, Harry can’t perform most of the tasks in a pathologist’s repertoire. But in 2016 — 11 years after the launch of the ur-platform, Amazon Mechanical Turk — crowdworking (sometimes also called crowdsourcing) is eating into increasingly high-skilled jobs. The engineers who are developing this model of labor have a bold ambition to atomize entire careers into micro-tasks that almost anyone, anywhere in the world, can carry out online. They’re banking on the idea that any technology that can make a complex process 100 times cheaper, as in Harry’s case, will spread like wildfire.•
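The pay gap in Harry’s story reduces to simple arithmetic. The rates below come from the excerpt; the four-minute batch time is my assumption, picked from the stated two-to-eight-minute range to match the article’s "about 60 cents" figure:

```python
# Crowdworker vs. pathologist pay, using figures from the excerpt above.

batch_pay = 0.04       # dollars per batch of five images
batch_minutes = 4      # assumed; the excerpt gives a 2-8 minute range

crowd_hourly = batch_pay * (60 / batch_minutes)
pathologist_hourly = 80.0  # ~$200k/year, per the excerpt

print(round(crowd_hourly, 2))                    # 0.6
print(round(pathologist_hourly / crowd_hourly))  # 133
```

The ratio lands on the order of 100x cheaper, which is the threshold the article’s engineers are banking on.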



In an Atlantic Q&A, Derek Thompson has a smart conversation with the Economist’s Ryan Avent, the author of the soon-to-be-published The Wealth of Humans, a book whose sly title suggests abundance may not arrive without a degree of menace. Avent is firmly in the McAfee-Brynjolfsson camp, believing the Digital Age will rival the Industrial one in its spurring of economic and societal disruption. An excerpt:

The Atlantic:

There is an ongoing debate about whether technological growth is accelerating, as economists like Erik Brynjolfsson and Andrew McAfee (the authors of The Second Machine Age) insist, or slowing down, as the national productivity numbers indicate. Where do you come down?

Ryan Avent:

I come down squarely in the Brynjolfsson and McAfee camp and strongly disagree with economists like Robert Gordon, who have said that growth is basically over. I think the digital revolution is probably going to be as important and transformative as the industrial revolution. The main reason is machine intelligence, a general-purpose technology that can be used anywhere, from driving cars to customer service, and it’s getting better very, very quickly. There’s no reason to think that improvement will slow down, whether or not Moore’s Law continues.

I think this transformative revolution will create an abundance of labor. It will create enormous growth in [the supply of workers and machines], automating a lot of industries and boosting productivity. When you have this glut of workers, it plays havoc with existing institutions.

I think we are headed for a really important era in economic history. The Industrial Revolution is a pretty good guide of what that will look like. There will have to be a societal negotiation for how to share the gains from growth. That process will be long and drawn out. It will involve intense ideological conflict, and history suggests that a lot will go wrong.•



We were a web before there was a Web. Things didn’t begin going viral in the Digital Age, and human systems existed long before the Industrial Revolution or even agriculture. None of this is new. What’s different about the era of machines is the brute efficiency of data due to heightened computing power being applied to increasingly ubiquitous connectedness. 

Some more dazzling thoughts by Yuval Noah Harari on “Dataism” can be read in Wired UK, which presents a passage from the historian’s forthcoming book, Homo Deus: A Brief History of Tomorrow. Just two examples: 1) “The entire human species is a single data-processing system,” and 2) “We often imagine that democracy and the free market won because they were ‘good.’ In truth, they won because they improved the global data-processing system.”

Harari writes that humans are viewed increasingly as an anachronism by Dataists, who prefer intelligence to the continuation of the species, an “outdated technology.” Like the Finnish philosopher Erkki Kurenniemi, they doubt the long-term preservation of “slime-based machines.” I don’t know how widespread this feeling really is, but I have read theorists in computing who feel it their duty to always opt for greater intelligence, even if it should come at the cost of humanity. I think the greater threat to our survival isn’t conscious decisions made at our expense but rather the natural progression of systems that don’t necessarily require us.

An excerpt:

Like capitalism, Dataism too began as a neutral scientific theory, but is now mutating into a religion that claims to determine right and wrong. The supreme value of this new religion is “information flow”. If life is the movement of information, and if we think that life is good, it follows that we should extend, deepen and spread the flow of information in the universe. According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.

This vision is reminiscent of some traditional religious visions. Thus Hindus believe that humans can and should merge into the universal soul of the cosmos – the atman. Christians believe that after death, saints are filled by the infinite grace of God, whereas sinners cut themselves off from His presence. Indeed, in Silicon Valley, the Dataist prophets consciously use traditional messianic language. For example, Ray Kurzweil’s book of prophecies is called The Singularity is Near, echoing John the Baptist’s cry: “the kingdom of heaven is near” (Matthew 3:2).

Dataists explain to those who still worship flesh-and-blood mortals that they are overly attached to outdated technology. Homo sapiens is an obsolete algorithm. After all, what’s the advantage of humans over chickens? Only that in humans information flows in much more complex patterns than in chickens. Humans absorb more data, and process it using better algorithms. (In day-to-day language, that means that humans allegedly have deeper emotions and superior intellectual abilities. But remember that, according to current biological dogma, emotions and intelligence are just algorithms.)

Well then, if we could create a data-processing system that absorbs even more data than a human being, and that processes it even more efficiently, wouldn’t that system be superior to a human in exactly the same way that a human is superior to a chicken?•





A century ago in France, it might have been as apt to refer to Georges Claude as a luminary as to anyone else. The inventor of neon lighting, which debuted at the Paris Motor Show of 1910, Claude was often thought of as a “French Edison,” a visionary who shined his brilliance on the world. Problem was, there was a dark side: a Royalist who disliked democracy, he eagerly collaborated with the Nazis during the Occupation and was arrested once Hitler was defeated. He spent six years in prison, though he was ultimately cleared of the most serious charge, that of having invented the V-1 flying bomb for the Axis. Two articles below from the Brooklyn Daily Eagle chronicle his rise and fall.

From February 25, 1931:



From September 20, 1944:




What perplexed me about Gawker during the last few years of its existence and throughout its holy-shit Hulk Hogan trial was that the principals on the inside of the company seemed tone-deaf at best and oblivious at worst. That allowed an emotional homunculus like Peter Thiel to use a short stack from his billions to drive the media company into bankruptcy.

In Matthew Garrahan’s Financial Times interview with Nick Denton, the former owner discusses why Thiel and others in Silicon Valley were so angered about darts thrown at them by Gawker, stressing insulation from criticism on the outside can be vital when building a corporation. Perhaps the same is true of those running an independent media empire?

An excerpt:

The appeal is likely to take at least a year to get to court, which means Denton and Thiel will not be burying the hatchet soon. And yet they have much in common. They are of similar age: Denton turned 50 last month, while Thiel will be 49 in October. They are both gay, tech-obsessed European émigrés (Thiel is from Germany; Denton from the UK) and they are both libertarians.

There the similarities end, Denton suggests. “Thiel’s idea of freedom is that you can create a society that is insulated from mainstream society … and imagine your own world in which none of the old rules apply.” He is alluding to Thiel’s interest in seasteading — the largely theoretical creation of autonomous societies beyond the reach of meddling national governments. “My idea of free society always had more of an anarcho-syndicalist bent,” he says. “If I was in Barcelona during the Spanish civil war [an anarcho-syndicalist] is probably what I would have been.”

Still, he says he understands the desire to operate beyond the restrictions of normal society, saying that such thinking is common in start-up culture. He points to Uber, the ride-sharing app, to underline the point. When its founders set out to launch a product that would up-end the personal transportation industry, they had to protect their vision from external doubters or naysayers. “You need to be insulated from the critics if you’re going to persuade people to join you, believe in you, invest in you.” Great companies are often based on a single idea, he continues. “And if someone questions that idea, it can undermine the support within the organisation for that idea.”

This, he says, explains Thiel’s animosity towards Gawker. Valleywag, a Denton-owned tech site that was eventually folded into Gawker.com, used to cover Silicon Valley with a critical eye and was a constant thorn in the side of its community of companies and investors — including Thiel.•



The robots may be coming for our jobs, but they’re not coming for our species, not yet.

Anyone worried about AI driving humans extinct in the short term is buying into sci-fi hype far too much, and those quipping that we’ll eventually just unplug machines if they get too smart are underselling more distant dangers. But in the near term, Weak AI (e.g., automation) is far more a peril to society than Strong AI (e.g., conscious machines). It could move us into a post-scarcity tomorrow, or it could do great damage if it’s managed incorrectly. What happens if too many jobs are lost all at once? Will there be enough of a transition period to allow us to pivot?

In a Technology Review piece, Will Knight writes of a Stanford study on AI that predicts certain key disruptive technologies will not have cut a particularly wide swath by 2030. Of course, even this research, which takes a relatively conservative view of the future, suggests we start discussing social safety nets for those on the short end of what may become an even more imbalanced digital divide.

The opening:

The odds that artificial intelligence will enslave or eliminate humankind within the next decade or so are thankfully slim. So concludes a major report from Stanford University on the social and economic implications of artificial intelligence.

At the same time, however, the report concludes that AI looks certain to upend huge aspects of everyday life, from employment and education to transportation and entertainment. More than 20 leaders in the fields of AI, computer science, and robotics coauthored the report. The analysis is significant because the public alarm over the impact of AI threatens to shape public policy and corporate decisions.

It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030, but cautions that remaining technical obstacles will limit such technologies to certain niches. It also warns that the social and ethical implications of advances in AI, such as the potential for unemployment in certain areas and likely erosions of privacy driven by new forms of surveillance and data mining, will need to be open to discussion and debate.•



“What hath God wrought?” was the first piece of Morse code ever sent, a melodramatic message which suggested something akin to Mary Shelley’s monster awakening and, perhaps, technology putting old myths to sleep. In his movie, Lo and Behold: Reveries Of A Connected World, Werner Herzog believes something even more profoundly epiphanic is happening in the Digital Age, and it’s difficult to disagree.

The director tells Ben Makuch of Vice that for him, technology is an entry point to learning about people (“I’m interested, of course, in the human beings”). Despite Herzog’s focus, the bigger story is events progressing in the opposite direction, from carbon to silicon.

In a later segment about space colonization, Herzog acknowledges having dreams of filming on our neighboring planet, saying, “I want to be the poet of Mars.” But, in the best sense, he’s already earned that title.



Not an original idea: Driverless cars are perfected in the near future and join the traffic, and some disruptive souls, perhaps us, decide to purchase an autonomous taxi and set it to work. We charge less than any competitor, use our slim profits for maintenance and to eventually buy a second taxi. Those two turn into an ever-growing fleet. We subtract our original investment (and ourselves) from the equation, and let this benevolent monster grow, ownerless, allowing it to automatically schedule its own repairs and purchases. Why would anyone need Uber or Lyft in such a scenario? Those outfits would be value-less.
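The compounding loop sketched above can be made concrete with a toy simulation. Every number here is an invented assumption of mine, not anything from the scenario itself:

```python
# Toy simulation of the ownerless, self-funding robo-taxi fleet
# described above. All figures are made-up assumptions.

fare_revenue_per_car = 3_000  # dollars per car per month, undercutting rivals
upkeep_per_car = 1_000        # monthly maintenance, insurance, charging
car_price = 30_000            # cost of adding one more taxi to the fleet

fleet, bank = 1, 0.0
for month in range(1, 61):            # five simulated years
    bank += fleet * (fare_revenue_per_car - upkeep_per_car)
    new_cars = int(bank // car_price) # reinvest the surplus in more taxis
    fleet += new_cars
    bank -= new_cars * car_price

print(fleet)  # the fleet compounds with no owner skimming profits out
```

With no shareholder extracting a margin, every surplus dollar buys growth, which is exactly why a competitor that must turn a profit can’t match its fares.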

In a very good Vanity Fair “Hive” piece, Nick Bilton doesn’t extrapolate Uber’s existential risk quite this far, but he writes wisely of the technology that may make rideshare companies a shooting star, enjoying only a brief lifespan like Compact Discs, though minus the outrageous profits that format produced. 

The opening:

Seven years ago, just before Uber opened for business, the company was valued at exactly zero dollars. Today, it is worth around $68 billion. But it is not inconceivable that Uber, as mighty as it currently appears, could one day return to its modest origins, worth nothing. Uber, in fact, knows this better than almost anyone. As Travis Kalanick, Uber’s chief executive, candidly articulated in an interview with Business Insider, ride-sharing companies are particularly vulnerable to an impending technology that is going to change our society in unimaginable ways: the driverless car. “The world is going to go self-driving and autonomous,” he unequivocally told Biz Carson. He continued: “So if that’s happening, what would happen if we weren’t a part of that future? If we weren’t part of the autonomy thing? Then the future passes us by, basically, in a very expeditious and efficient way.”

Kalanick wasn’t just being dramatic. He was being brutally honest. To understand how Uber and its competitors, such as Lyft and Juno, could be rendered useless by automation—leveled in the same way that they themselves leveled the taxi industry—you need to fast-forward a few years to a hypothetical version of the future that might seem surreal at the moment. But, I can assure you, it may well resemble how we will live very, very soon.•



Surveillance is a murky thing, almost always attended by self-censorship, quietly encouraging citizens to abridge their communication because maybe, perhaps someone is watching or listening. It’s a chilling of civil rights that happens in a creeping manner. Nothing can be trusted, not even the mundane, not even your own judgment. That’s the goal, really, of such a system–that everyone should feel endlessly observed.

In a Texas Monthly piece, Sasha Von Oldershausen, a border reporter in West Texas, finds similarities between her stretch of America, which feverishly focuses on security from intruders, and her time spent living under theocracy in Iran. An excerpt:

Surveillance is key to the CBP’s strategy at the border, but you don’t have to look to the skies for constant reminders that they’re there. Internal checkpoints located up to 100 miles from the border give Border Patrol agents the legal authority to search any person’s vehicle without a warrant. It’s enough to instill a feeling of guilt even in the most exemplary of citizens. For those commuting daily on roads fitted with these checkpoints, the search becomes rote: the need to prove one’s right to abide is an implicit part of life.

Despite the visible cues, it’s still hard to figure just how all-seeing the CBP’s eyes are. For one, understanding the “realities” of border security varies based on who you talk to.

Esteban Ornelas—a Mexican citizen who was charged with illegal entry into the United States in 2012 and deported shortly thereafter—swears that he was caught because a friend he was traveling through the backcountry with sent a text message to his family. “They traced the signal,” he told me in his hometown of Boquillas.

When I consulted CBP spokesperson Brooks and senior Border Patrol agent Stephen Crump about what Ornelas had told me, they looked at each other and laughed. “That’s pretty awesome,” Crump said. “Note to self: develop that technology.”

I immediately felt foolish to have asked. But when I asked Pauling that same question, his reply was much more austere: “I can’t answer that,” he said, and left it at that.•




Some argue, as John Thornhill does in a new Financial Times column, that technology may not be the main impediment to the proliferation of driverless cars. I doubt that’s true. If you could magically make available today relatively safe and highly functioning autonomous vehicles, ones that operated on a level superior to humans, then hearts, minds and legislation would soon favor the transition. I do think driving as recreation and sport would continue, but much of commerce and transport would shift to our robot friends.

Earlier in the development of driverless, I wondered if Americans would hand over the wheel any sooner than they’d turn in their guns, but I’ve since been convinced we (largely) will. We may have a macro fear of robots, but we hand over control to them with shocking alacrity. A shift to driverless wouldn’t be much different.

An excerpt from Thornhill in which he lists the main challenges, technological and otherwise, facing the sector:

First, there is the instinctive human resistance to handing over control to a robot, especially given fears of cyber-hacking. Second, for many drivers cars are an extension of their identity, a mechanical symbol of independence, control and freedom. They will not abandon them lightly.

Third, robots will always be held to far higher safety standards than humans. They will inevitably cause accidents. They will also have to be programmed to make a calculation that could kill their passengers or bystanders to minimise overall loss of life. This will create a fascinating philosophical sub-school of algorithmic morality. “Many of us are afraid that one reckless act will cause an accident that causes a backlash and shuts down the industry for a decade,” says the Silicon Valley engineer. “That would be tragic if you could have saved tens of thousands of lives a year.”

Fourth, the deployment of autonomous vehicles could destroy millions of jobs. Their rapid introduction is certain to provoke resistance. There are 3.5m professional lorry drivers in the US.

Fifth, the insurance industry and legal community have to wrap their heads around some tricky liability issues. In what circumstances is the owner, car manufacturer or software developer responsible for damage?•



The introduction to Nicholas Carr’s soon-to-be-published essay collection, Utopia Is Creepy, has been excerpted at Aeon, and it’s a beauty. The writer argues (powerfully) that we’ve defined “progress as essentially technological,” even though the Digital Age quickly became corrupted by commercial interests, and the initial thrill of the Internet faded as it became “civilized” in the most derogatory, Twain-ish use of that word. To Carr, the something gained (access to an avalanche of information) is overwhelmed by what’s lost (withdrawal from reality). The critic applies John Kenneth Galbraith’s term “innocent fraud” to the Silicon Valley marketing of techno-utopianism.

You could extrapolate this thinking to much of our contemporary culture: binge-watching endless content, Pokémon Go, Comic-Con, fake Reality TV shows, reality-altering cable news, etc. Carr suggests we use the tools of Silicon Valley while refusing the ethos. Perhaps that’s possible, but I doubt you can separate such things.

An excerpt:

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.•



There’s probably no reason to think prognosticating crime via computer will be more biased than traditional racial profiling and other less-algorithmic methods of anticipating lawlessness, but it’s uncertain it will be an improvement. In any system with embedded prejudice (pretty much all of them), won’t those suspicions of some be translated into code? It doesn’t need to be that way, but there will have to be an awful lot of skepticism and oversight to keep discrimination from taking a prominent place in the digital realm.

The opening of “The Power of Learning” at the Economist:

In Minority Report, a policeman, played by Tom Cruise, gleans tip-offs from three psychics and nabs future criminals before they break the law. In the real world, prediction is more difficult. But it may no longer be science fiction, thanks to the growing prognosticatory power of computers. That prospect scares some, but it could be a force for good—if it is done right.

Machine learning, a branch of artificial intelligence, can generate remarkably accurate predictions. It works by crunching vast quantities of data in search of patterns. Take, for example, restaurant hygiene. The system learns which combinations of sometimes obscure factors are most suggestive of a problem. Once trained, it can assess the risk that a restaurant is dirty. The Boston mayor’s office is testing just such an approach, using data from Yelp reviews. This has led to a 25% rise in the number of spot inspections that uncover violations.

Governments are taking notice. A London borough is developing an algorithm to predict who might become homeless. In India Microsoft is helping schools predict which students are at risk of dropping out. Machine-learning predictions can mean government services arrive earlier and are better targeted (see article). Researchers behind an algorithm designed to help judges make bail decisions claim it can predict recidivism so effectively that the same number of people could be bailed as are at present by judges, but with 20% less crime. To get a similar reduction in crime across America, they say, would require an extra 20,000 police officers at a cost of $2.6 billion.
But computer-generated predictions are sometimes controversial.•


More than six decades ago, long before Siri got her voice, Georgetown and IBM co-presented the first public demonstration of machine translation. Russian was neatly converted into English by an “electronic brain,” the IBM 701, and one of the principals involved, the university’s Professor Leon Dostert, excitedly reacted to the success, proclaiming the demo a “Kitty Hawk of electronic translation.” Certainly the impact from this experiment was nothing close to the significance of the Wright brothers’ achievement, but it was a harbinger of things to (eventually) come. An article in the January 8, 1954 Brooklyn Daily Eagle covered the landmark event.




Beginning in the 1960s, freelance speleologist Michel Siffre embedded himself in glaciers and underground caves in an attempt to understand the sensory deprivations astronauts might experience on long missions. It was dangerous, and not just psychologically. Today’s “missions to Mars,” test runs in isolation like NASA’s one-year HI-SEAS project which just concluded in Hawaii, aren’t nearly as fraught. Only time will tell, however, if they’re an acceptable spacesuit dress rehearsal. The multinational HI-SEAS crew, having now “returned,” thinks the historic travel to Mars will be manageable, which it probably is, though it’s still a question as to whether humans need to be making such voyages right now.

From an un-bylined AP report:

HILO, Hawaii (AP) — Six scientists have completed a yearlong Mars simulation in Hawaii, where they lived in a dome in near isolation.

For the past year, the group in the dome on Mauna Loa could go outside only while wearing spacesuits.

On Sunday, the simulation ended, and the scientists emerged. 

Cyprien Verseux, a crew member from France, said the simulation shows a mission to Mars can succeed.

“I can give you my personal impression which is that a mission to Mars in the close future is realistic. I think the technological and psychological obstacles can be overcome,” Verseux said.

Christiane Heinicke, a crew member from Germany, said the scientists were able to find their own water in a dry climate.

“Showing that it works, you can actually get water from the ground that is seemingly dry. It would work on Mars and the implication is that you would be able to get water on Mars from this little greenhouse construct,” she said.•


In 1976, Gail Jennes of People magazine conducted a Q&A with Michael L. Dertouzos, who was the Director of the Laboratory for Computer Science at MIT. He pretty much hit the bullseye on everything regarding the next four decades of computing, except for thinking Moore’s Law would reach endgame in the mid-’80s. An excerpt:


Could a computer ever become as “human” as the one named Hal in 2001: A Space Odyssey?

Michael Dertouzos:

A computer has already taken over a “human” mission on the Viking mission to Mars. But control over humans is a different issue. In open-heart surgery where a computer monitors bloodstream and vital functions, are we not under a machine’s control? A human being is often under the control of a machine and, in many situations, wants to be.


Will machines ever be more intelligent than humans?

Michael Dertouzos:

That is the important question, and the one on which scientists are split. One side says it’s impossible to make machines with the same intelligence, emotions and abilities as humans, and that therefore machines will only be able to do our bidding. The other side believes that it’s possible to make machines learn much more. Both sides argue from faith; neither from fact.


What do you think?

Michael Dertouzos:

I think progress will be a lot slower than predicted. Computers will get smarter gradually. I don’t know if they will get as smart as we are. If they did, it probably would take a long time. …


Will computers be widely used by the average person in coming years?

Michael Dertouzos:

We don’t see technical limitations in computer development until the mid-1980s. Until then, decreased cost will make computers smaller, cheaper and more accessible. In 10 or 15 years, one should cost about the same as a big color TV. This machine could become a playmate, testing your wits at chess or checkers. If a computer were hooked up to AP or UPI news-wires, it could be programmed to know that I’m interested in Greece, computers and music. Whenever it caught news items about these subjects, it would print them out on my console—so I would see only the things I wanted to see.


Will they transmit mail?

Michael Dertouzos:

We are already hooked into a network spanning the U.S. and part of Europe by which we send, collect and route messages easily. Although the transmission process is instant, you can let messages pile up until you turn on your computer and ask for your mail.


Will the computer eventually be as common as the typewriter?

Michael Dertouzos:

Perhaps even more so. It may be hidden so you won’t even know you’re using it. Don’t be surprised if there is one in every telephone, taking over most of the dialing. If you want to call your friend Joe, you just dial “JOE.” The same machine could take messages, advise if they were of interest and then could ring you. In the future, I would imagine there could be computerized cooking machines. You put in a little card that says Chateaubriand and it cooks the ingredients not only according to the best French recipe, but also to your particular taste.


Will robots ever be heavily relied upon?

Michael Dertouzos:

Robots are already doing things for us—for example, accounting and assembling cars. Two-legged robotic bipeds are a romantic notion and actually pretty unstable. But computer-directed robot machines with wheels, for example, may eventually do the vacuum cleaning and mow the lawn.


How might computers aid us in an election year?

Michael Dertouzos:

Voters might quickly find out political candidates’ positions on the issues by consulting computers. Government would then be closer to the pulse of the governed. If we had access to a very intelligent computer, we could probe to find out if the guy is telling the truth by having them check for inconsistency—but that is way in the future.


Should everyone be required to take a computer course?

Michael Dertouzos:

I’d rather see people choose to do so. Latin, the lute and the piano used to be required as a part of a proper upbringing. Computer science will be thought of in the same way. If we can use the computer early in life, we can understand it so we won’t be hoodwinked into believing it can do the impossible. A big danger is deferring to computers out of ignorance.•


First we slide machines into our pockets, and then we slide into theirs.

As long as humans have roamed the Earth, we’ve been part of a biological organism larger than ourselves. At first, we were barely connected parts, but gradually we became a Global Village. In order for that connectivity to become possible, the bio-organism gave way to a technological machine. As we stand now, we’re moving ourselves deeper and deeper into a computer, one with no OFF switch. We’ll be counted, whether we like it or not. Some of that will be great, and some not.

In the Financial Times, in one of his regularly heady and dazzling pieces of writing, Israeli historian Yuval Noah Harari examines this new normal, one that’s occurred without close study of what it will mean for the cogs in the machine–us. As he writes, “humanism is now facing an existential challenge and the idea of ‘free will’ is under threat.” The opening:

For thousands of years humans believed that authority came from the gods. Then, during the modern era, humanism gradually shifted authority from deities to people. Jean-Jacques Rousseau summed up this revolution in Emile, his 1762 treatise on education. When looking for the rules of conduct in life, Rousseau found them “in the depths of my heart, traced by nature in characters which nothing can efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I feel to be bad is bad.” Humanist thinkers such as Rousseau convinced us that our own feelings and desires were the ultimate source of meaning, and that our free will was, therefore, the highest authority of all.

Now, a fresh shift is taking place. Just as divine authority was legitimised by religious mythologies, and human authority was legitimised by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data. This novel creed may be called “Dataism”. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it.

We are already becoming tiny chips inside a giant system that nobody really understands.•



From the February 3, 1945 Brooklyn Daily Eagle:




In 2016, the average branch of the New York Public Library operates as a lightly funded community center, with some classes for kids, a climate-controlled place for seniors to knit and Internet time for anyone who can’t afford their own. Books are largely beside the point, donations of gently used volumes not accepted, and quiet is no longer enforced since reading isn’t the primary function of the institution. It’s more about experience now.

In a really thought-provoking Business Insider piece, David Pescovitz of Boing Boing tells Chris Weller that experience is also the future of libraries, though he believes it will be of a much more technological kind, virtual as well as actual. He predicts we could wind up with a “library of experiences.” Perhaps, though such tools and access may become decentralized.

An excerpt:

The definition of a library is already changing.

Some libraries have 3D printers and other cutting-edge tools that make them not just places of learning, but creation. “I think the library as a place of access to materials, physical and virtual, becomes increasingly important,” Pescovitz says. People will come to see libraries as places to create the future, not just learn about the present.

Pescovitz offers the example of genetic engineering, carried out through “an open-source library of genetic parts that can be recombined in various way to make new organisms that don’t exist in nature.”

For instance, people could create their own microbes that are engineered to detect toxins in the water, he says, similar to how people are already meeting up in biology-centered hacker spaces.

Several decades from now, libraries will morph even further.

Pescovitz speculates that humans will have collected so much data that society will move into the realm of downloading sensory data. What we experience could be made available for sharing.

“Right now the world is becoming instrumented with sensors everywhere — sensors in our bodies, sensors in our roads, sensors in our mobile phones, sensors in our buildings — all of which are collecting high-resolution data about the physical world,” he says. “Meanwhile, we’re making leaps in understanding how the brain processes experiences and translates that into what we call reality.”

That could lead to a “library of experiences.”

In such a library, Pescovitz imagines that you could “check out” the experience of going to another planet or inhabiting the mind of the family dog.•



For the rest of this century (at least), it’s more likely machines will come for our jobs than for our lives. The threat of extinction of human beings by superintelligence isn’t in sight, thankfully, but while automation could lead to post-scarcity, if we’re not careful and wise, it might instead stoke societal chaos.

In his London Review of Books essay about machine learning, Paul Taylor writes about the potential of this field, while acknowledging it’s not certain how well it will all work out despite early promise. The short-term issue, he believes, may be between haves and have-nots as it applies to computing power. An excerpt:

The solving of problems that until recently seemed insuperable might give the impression that these machines are acquiring capacities usually thought distinctively human. But although what happens in a large recurrent neural network better resembles what takes place in a brain than conventional software does, the similarity is still limited. There is no close analogy between the way neural networks are trained and what we know about the way human learning takes place. It is too early to say whether scaling up networks like Inception will enable computers to identify not only a cat’s face but also the general concept ‘cat’, or even more abstract ideas such as ‘two’ or ‘authenticity’. And powerful though Google’s networks are, the features they derive from sequences of words are not built from the experience of human interaction in the way our use of language is: we don’t know whether or not they will eventually be able to use language as humans do.

In 2006 Ray Kurzweil wrote a book about what he called the Singularity, the idea that once computers are able to generate improvements to their own intelligence, the rate at which their intelligence improves will accelerate exponentially. Others have aired similar anxieties. The philosopher Nick Bostrom wrote a bestseller, Superintelligence (2014), examining the risks associated with uncontrolled artificial intelligence. Stephen Hawking has suggested that building machines more intelligent than we are could lead to the end of the human race. Elon Musk has said much the same. But such dystopian fantasies aren’t worth worrying about yet. If there is something to be worried about today, it is the social consequences of the economic transformation computers might bring about – that, and the growing dominance of the small number of corporations that have access to the mammoth quantities of computing power and data the technology requires.•



Much has already been written about the jaw-dropping discovery of “another Earth” near Proxima Centauri, something that seemed likely to happen sooner or later and has now occurred. My favorite words on the topic were penned by Olaf Stampf of Spiegel. He understands the magnitude of the finding while cautioning that even with the development of fusion propulsion, reaching our sister planet in a reasonable time frame is still a bridge too far. Perhaps it’s a better-than-ever time for the human-less probes to Alpha Centauri suggested by Ken Kalfus and being developed by Yuri Milner, with their destination shortened by just a bit. Presently, pretty much all space exploration should be handled by machines, anyhow.

The opening:

The faraway world exists in constant twilight. Although its nearby blood-red dwarf star provides only a tiny fraction of our sun’s light, its warmth might still be enough to create a life-sustaining climate.

But is there really life on this newly detected planet? Nobody knows — at least not yet. Only one thing is certain: Because of the darkness, animals and plants would look different from the ones we know from Earth. Trees and shrubs would have pitch-black leaves, as if they’d been burned. The alien flora would need to be darkly colored to use the dim starlight for its photosynthesis.

And what about higher forms of life, like animals or intelligent beings? It’s very possible that exotic organisms exist on the planet. Given that it is several million years older than the Earth, it would have had enough time for life to develop.

On the other hand, it would also have to repeatedly withstand hellish conditions. Its sun is a so-called flare star, a cosmic fire-breather that tends to produce apocalyptic eruptions of plasma. All of the planet’s oceans, rivers and lakes may well have long since evaporated.

The newly discovered planet doesn’t yet have a name, but the red dwarf star around which it circles is famous: Proxima Centauri, our nearest fixed star, only 4.24 lightyears away — our sun’s closest neighbor.

That’s what makes this finding so scientifically exciting.•

