


A few years back, I blogged about how the dream factory in California had migrated North, shifting from Hollywood to Silicon Valley. While Jules Verne’s visions required more than a century to be born, The Truman Show needed only a decade, and Her essentially came into the world already talking. 

Science fiction is no longer the starting gun for the future but is regularly lapped by it. It’s not only because Moore’s Law has sped up research while culture still slowly gestates but because of the great wealth concentrated in the technology sector and the game-changing ambitions of those who possess it. They’re largely more concerned with legacy than bank ledgers, and ego is a powerful tool if harnessed correctly. Whether this new normal leads us to a better tomorrow is TBD.

In “The Future Is Almost Now,” Elizabeth Alsop writes wisely of this altered reality and more. An excerpt:

Many new works of science fiction seem to represent a strain of pre-apocalyptic cinema, characterized by a willingness to dramatize disasters that are less hypothetical than poised to happen. Both Ex Machina and Her, for instance, unfold against backdrops whose production design suggests that viewers are witnessing only a lightly futurized version of 21st-century life. However technically fictional the gadgets on display, the advances the films imagine—an artificially intelligent OS, a Turing-test approved robot—strike audiences as not just possible, but highly probable. As Ex Machina’s partly mad scientist declares, “[t]he arrival of strong AI has been inevitable for decades. The variable was when, not if.” Spike Jonze’s Her similarly takes its paradigm shift—humans falling in love with machines—for granted. Unlike The Terminator and Matrix franchises, these films don’t predict an apocalyptic “rise” of machines so much as a gradual digital takeover, the next phase of a revolution already in progress.

As such, the worlds of newer sci-fi films can look and feel eerily familiar. The opening shots of Interstellar, which feature hardscrabble towns and actual Depression-era footage, initially lead viewers to suspect they’re witnessing, if anything, the recent past. As the critic A.O. Scott noted in The New York Times, “[the director Christopher] Nolan … drops us quietly into what looks like a fairly ordinary reality.” Or as NPR’s Amanda Fiegl put it, “it’s science fiction with an uncomfortable ring of truth.” It’s possible that such realistic settings—also seen in Ex Machina and Her—are meant to serve moralizing ends, reminding audiences that dystopia is nigh.•



If the only options many workers have are to either accept a less-than-living wage or be replaced by machines, we’re in trouble. Fast-food places used to offer starter jobs, but there’s no place for many burger-and-fries folks to advance to, so these are careers now, for lack of a better word. A $15 minimum wage is necessary yet may drive corporations to automate food ordering and even preparation. At the very least, some of the jobs will vanish.

From Jed Graham at Investor’s Business Daily:

Wendy’s said that self-service ordering kiosks will be made available across its 6,000-plus restaurants in the second half of the year as minimum wage hikes and a tight labor market push up wages.

It will be up to franchisees whether to deploy the labor-saving technology, but Wendy’s President Todd Penegor did note that some franchise locations have been raising prices to offset wage hikes.

McDonald’s has been testing self-service kiosks. But Wendy’s, which has been vocal about embracing labor-saving technology, is launching the biggest potential expansion. …

In addition to self-order kiosks, the company is also getting ready to move beyond the testing phase with labor-saving mobile ordering and mobile payment available systemwide by the end of the year. Yum Brands and McDonald’s already have mobile ordering apps.•



Thanks to Nicholas Carr’s great blog, Rough Type, for pointing me to Justin O’Beirne’s 2015 essay “The Universal Map,” which I missed last year. It addresses the quietly seismic changes occurring in cartography. The romance of the profession, formerly an often solitary and painstaking thing, has been replaced by (almost) real-time, computerized efficiency, or so it would seem.

So much of the next wave of AI and automation will demand insta-maps communicated from gadget to gadget and constantly updated (think driverless cars). That means maps will become increasingly universal as smartphones continue to spread. Mistakes will most likely still sneak through, and I suppose they’ll become universally accepted as well. Even if those flaws are corrected relatively quickly, they might cause problems for a brief spell. That, at least, is the best-case scenario.

As Carr notes, the map itself is being disappeared as tiny bits of information and directions obliterate the larger picture. O’Beirne himself is less sanguine on the topic a year later, writing that the work of Google Maps has surprisingly deteriorated. Is it more troubling to have one mediocre map for all of us than plenty of different ones of varying quality?

O’Beirne’s 2015 opening:

Just thirty years ago — and for most of human history — a cartographer would make a map, print and distribute it, and hope that maybe a few thousand or so people would ever use it before it went out of date. Apart from a handful of atlases and classroom maps, most maps had small, local audiences, went out of date quickly, and were often difficult to read and understand — let alone share.

Fast forward to today, and cartography has since undergone a number of profound changes:

  • An unprecedented level of detail is now available to the average person, for little or no cost. The same map literally shows every human settlement in the world at every scale, from the world’s largest cities to its tiniest neighborhoods and hamlets. Every country. Every city. Every road. All mapped in exquisite detail. Moreover, maps increasingly show every business open today — an interactive, visual yellow pages for the whole world. And add to that imagery, street view, and live transit and traffic. No one has ever had access to this much detail, for so cheaply, until now.
  • Maps are now always up to date. Errors are corrected in hours and minutes, instead of months and years — and new roads and businesses are added instantly. Unlike the paper maps of thirty years ago, today’s maps never expire.
  • Maps fit us, regardless of who or where we are. Foreign lands are presented in our own language, and we can easily and endlessly adjust scales, orientations, dimensions, and even time. We have day mode, night mode, and even basic personalization. And every corner of the globe is presented in the same style, and every map feature is made to be so intuitive, that there’s never a need for a map key. (Google and Apple Maps don’t even have one.) Thirty years ago, we adjusted ourselves to maps; now, maps adjust to us.
  • Maps are integrated with robust search & routing. No more looking up the coordinates of an obscure town or street in a map index. No more sitting down and painstakingly planning routes before you leave. Find any place in the world in milliseconds. Calculate any route — be it by walking, driving, or even flying — with unprecedented ease.
  • Advanced sensors keep us apprised of our current location, 24 hours a day. Now, we’re never lost.

These are all profound technical changes, 10x improvements that are hugely impactful in their own right. But there’s an even deeper, more profound cultural change seemingly on the horizon:


Think of how deeply profound this is.•



There’s an agency that enables you to hire your very own image-enhancing squad, so it’s no shock other companies are paid to pair off product pushers with social-media “stars,” celebrities shrunk down to smartphone size. Those who’ve mastered Vine or Instagram or YouTube, usually teens, are paid hundreds–occasionally hundreds of thousands–to post pictures of brands or do stunts involving them. If corporations feared wasting money on yesterday’s unquantifiable print ads, they’re really no surer, even with all the new statistics, that they aren’t burning currency on Gladwellian “influencers.” It doesn’t seem like sound business, even if it speaks to the further Warholization of fame.

From Shareen Pathak’s Digiday interview with an anonymous social-media executive about the new industry:


How do you find them?


Social team is a bunch of millennials, so we’ll often find someone we like and we’ll throw it into a database with keywords. But usually it’s a CEO or CMO or whoever saying, “Oh, my kid likes this guy.” At this major car brand I worked for, we paid $300,000 for a few photographs because the CEO’s kid liked someone.


What about the influencer agencies?


They’re huge now. Like the big media networks that say they work with 2,200 followers. They’re helpful. The big problem is, they don’t operate much like a traditional talent management company. They don’t provide insurance in case their talent doesn’t deliver or anything. Agencies can’t really hire them through them. They sort of just expect the brands to approach them. They don’t pitch them or anything. It’s silly.


Tell me about the process.


We’ll do a meet and greet. Tell them what we’re thinking and ask them for concepts. You can tell right away who is serious: The good ones come back within a day with ideas. Some send us decks or presentations that are pretty but not tailored to the brand. They’re all nuts. “I want to take a car and pick it up in London and drive it around Europe, so give me $100,000,” they say. Nope, let’s totally never do it that ever. These people don’t understand budgets.•



When my brother died last year, he remained in the ether. A devoted Facebook user, he left his likes and friends gathered conveniently in one space, and the updates and comments keep coming even though he’s no longer here to read, or respond to, them. This Digital Age seance bothers me. It seems an odd and unsatisfactory afterlife of sorts.

That’s my own problem, however. He would have loved receiving messages on birthdays and holidays, even after he was gone. The question is this: If I knew my own end was approaching and I was the last relative alive who knew him, would I delete his account or let it “live on,” whatever that may come to mean? I’m pretty sure I would choose the latter.

In a BBC Future piece, Brandon Ambrosino writes about this social-media phenomenon, in which Facebook serves as not only a virtual city-state but also a necropolis, impacting the way we mourn, remember and forget. Someday, if the company lasts long enough–and not even that long, really–the dead will outnumber the living. An excerpt about the author’s late aunt:

Observing that phenomenon is a strange thing. There she is, the person you love – you’re talking to her, squeezing her hand, thanking her for being there for you, watching the green zigzag move slower and slower – and then she’s not there anymore.

Another machine, meanwhile, was keeping her alive: some distant computer server that holds her thoughts, memories and relationships.

While it’s obvious that people don’t outlive their bodies on digital technology, they do endure in one sense. People’s experience of you as a seemingly living person can and does continue online.

How is our continuing presence in digital space changing the way we die? And what does it mean for those who would mourn us after we are gone?•


If driverless cars were to emerge only after all infrastructure had been uniformly upgraded and every possible hazard anticipated, it might be a long wait. A workaround is for autonomous vehicles to be connected to a network–and to each other–and constantly “educated.” In Steve Ranger’s ZDNet interview with Jim McBride of Ford’s driverless division, the latter addresses this issue, promising that the shift from driver to driverless will be “not terribly dissimilar from [the shift from] horses and carriages going to cars.” An excerpt:


What are the big technical challenges you are facing?

Jim McBride:

When you do a program like this, which is specifically aimed at what people like to call ‘level four’ or fully autonomous, there are a large number of scenarios that you have to be able to test for. Part of the challenge is to understand what we don’t know. Think through your entire lifetime of driving experiences and I’m sure there are a few bizarre things that have happened. They don’t happen very frequently but they do.


How do you build that kind of intelligence in?

Jim McBride:

It’s a difficult question because you can’t sit down and write a list of everything you might imagine, because you are going to forget something. You need to make the vehicle generically robust to all sorts of scenarios, but the scenarios that you do anticipate happening a lot, for example people violating red lights at traffic intersections, we can, under controlled conditions, test those very repeatedly. We have a facility near us called Mcity, and it’s basically a mock-urban environment where we control the infrastructure. While you and I may only see someone run a red light a few times a year, we can go out there and do it dozens of times just in the morning.

So for that category of things we can do the testing in a controlled environment, pre-planned. We can also do simulation work on data and, aside from that, it’s basically getting out on the roads and aggregating a lot of experiences.•



Huawei executive Kevin Ho acknowledges science fiction has influenced his belief that disease, poverty and death could be eliminated within 20 years. I’ll (sadly) bet the over on that one–way over.

In time, bioengineering will help ease disease and 3D printers will do the same for the want of material goods, though I think immortality through uploading isn’t arriving anytime soon. If it does become a reality at some point, such a system still won’t duplicate humans: Trading skin for a new casing alters identity in obvious and subtle ways.

From Bloomberg:

Chinese technology giant Huawei is preparing for a world where people live forever, dead relatives linger on in computers and robots try to kill humans.

Huawei is best known as one of the world’s largest producers of broadband network equipment and smartphones. But Kevin Ho, president of its handset product line, told the CES Asia conference in Shanghai on Wednesday the company used science fiction movies like The Matrix to envision future trends and new business ideas.

“Hunger, poverty, disease or even death may not be a problem by 2035, or 25 years from now,” he said. “In the future you may be able to purchase computing capacity to serve as a surrogate, to pass the baton from the physical world to the digital world.”

He described a future where children could use apps like WeChat to interact with dead grandparents, thanks to the ability to download human consciousness into computers. All of these technologies would require huge amounts of data storage, which in turn could generate business for Huawei, he added.•



From the June 16, 1946 Brooklyn Daily Eagle:




Speaking of the End of Days, utter societal collapse in the United States doesn’t seem likely to me, even if we’re apparently dumber and more racist than feared. Many of my fellow Americans disagree, however, thinking things will soon fall apart. In advance of the November elections, the panic-room business is booming, as some among us are counting their gold coins and covering their asses.

The opening of “Prepping for Doomsday,” Clare Trapasso’s article:

The apocalypse has become big business. And it’s getting bigger every day.

In the ’50s, homeowners fearing Communist attacks built bunkers in their backyards and basements, hung up a few “God Bless Our Bomb Shelter” signs and called it a Cold War.

But today, Americans en masse are again preparing for the worst—and Communists are just about the only thing not on their list. What is? Terrorist attacks, a total economic collapse, perhaps even zombie invasions. Or maybe just a complete societal breakdown after this November’s scorched-Earth presidential election.

But this is not your Uncle Travis’ guns-and-canned-foods-militia vision of Armageddon preparedness. While the fears of survivalists and so-called preppers are modernizing, so too are their ideas and methods of refuge.

The business of disaster readiness is getting higher tech, higher priced, and way more geographically diverse, with state-of-the-art underground shelters tricked out with greenhouses, gyms, and decontamination units in the boondocks and the latest in plush panic rooms in city penthouses.

Welcome to the brave (and for some, highly profitable) new world of paranoia. 

“There’s a lot of uneasiness in society. You see it in politics. You see it in the economy. The world is changing really, really quickly and not always for the better,” says Richard Duarte, author of “Surviving Doomsday: A Guide for Surviving an Urban Disaster.”

Prepping “gives them a certain comfort that at least they’ve got some sort of preparations to … take care of their family if things start falling apart all around them,” he says.•



Donald Trump announced his candidacy for President in June to a crowd of paid actors, which was sort of quaint in this Digital Age, as if he were Frank Sinatra in the ’40s crooning before “fainting” bobby soxers who’d been slipped a few dollars in advance to encourage their dizzy spells. You would think the practice of poseur appreciators and persuaders would be passé in our time, when there are bots and algorithms to goad the gormless, but there are things about human flesh that still cannot be replicated by machines. In some cases, all the world’s a stage and we’re all merely players–or at least some of us who’ve been compensated for pretending to be paparazzi or protesters or proponents.

In “Crowd Source,” Davy Rothbart’s smart California Sunday Magazine article, the writer profiles a company that can make any carpet red and anyone a Kardashian, selling the aura of popularity in this Reality TV era. They offer a little extra–they offer extras, with titles like “Selfie Guy.” The opening:

The text message says to show up at the Los Angeles Airport Marriott Hotel at 11 a.m. on a Monday. But through some combination of traffic and my own chronic lateness, I find myself rushing into the lobby at 12 minutes after, aware that it’s not a good look to be late for work, my first day on a new job.

I’ve been hired by a company called Crowds on Demand. If you need a crowd of people — for nearly any reason — Crowds on Demand can make it happen. Now it has taken me on as one of its crowd members, although the specifics remain a mystery. It’s an odd sensation to be headed into a gig with no idea what task I’m expected to perform. All I know is that I’ll be making 15 bucks an hour.

In the hotel lobby, Adam Swart, the company’s 24-year-old CEO, is greeting a dozen other recruits. Handsome, fit, sporting slacks and a button-down shirt, Adam bears an uncanny resemblance to House Speaker Paul Ryan, though he’s more than 20 years younger. He circles around us with manic energy, as though jacked up on six cups of coffee. While he gently reprimands me for my lateness, I take his tone to mean, You’re off the hook this time, but don’t do it again. He leads us downstairs to a ballroom in the basement and gives us the lowdown.

The Marriott, Adam explains, is hosting a conference for life coaches from around the country. As these folks arrive in the ballroom to register and pick up their badges, lanyards, and gift bags, our job is to treat them like mega-celebrities, to behave like a wild throng of fans desperate for their love. As it turns out, this is one of Crowds on Demand’s most popular services.•



There’s nothing theoretically impossible, I think, about superintelligent machines, and if humans go on long enough, they and even far stranger things will come to pass. But the Singularity is not near, nowhere near near. There are plenty of machine-related issues to worry about in the meantime: Weak AI may decimate employment, the Internet of Things will place us inside a machine with no OFF switch and automation could lead to a cascading disaster. Machines needn’t be conscious to help or hurt.

In a smart Aeon essay, Luciano Floridi analyzes the increasingly popular idea that AI is our biggest existential threat, even more so than climate change. An excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine.•


Mathew Brady.

President Lincoln.

Ulysses S. Grant.

Robert E. Lee.
The Civil War would have a name without Mathew Brady but not a face.

Other notable photographers worked in that tumultuous, internecine period, but it was Brady and his pioneering photojournalism that truly captured the visages burdened by the fate of a nation. While Brady was rich in life experience, his relentless attempt to record the Civil War with the expensive wet-plate process essentially bankrupted him. He expected the U.S. government to eagerly purchase his trove in the post-war period and restore his financial standing, but the money never materialized. Brady died penniless in the charity ward of New York’s Presbyterian Hospital in 1896. Two years before his death, a Brooklyn Daily Eagle article misspelled his first name while chronicling how money troubles cost him his gallery in Washington, D.C.



Speaking of mind-altering substances: as a teenager, the French Surrealist writer René Daumal blasted his brain with the carbon tetrachloride he normally used to kill beetles for his insect collection. Not a good idea. By the time he was 36, he’d joined the bugs in the great beyond, no doubt in part because of his amateur chemistry experiments.

Daumal is known primarily today for the novel Mount Analogue: A Tale of Non-Euclidean and Symbolically Authentic Mountaineering Adventures, which Alejandro Jodorowsky used as the basis for his crazy-as-fuck 1973 film, The Holy Mountain. His recollection of his auto-dosing, “A Fundamental Experiment,” was reprinted in a 1965 Psychedelic Review. The opening:

The simple fact of the matter is beyond telling.  In the 18 years since it happened, I have often tried to put it into words.  Now, once and for all, I should like to employ every resource of language I know in giving an account of at least the outward and inward circumstances. This ‘fact’ consists in a certainty I acquired by accident at the age of sixteen or seventeen; ever since then, the memory of it has directed the best part of me toward seeking a means of finding it again, and for good.

My memories of childhood and adolescence are deeply marked by a series of attempts to experience the beyond, and those random attempts brought me to the ultimate experiment, the fundamental experience of which I speak.

At about the age of six, having been taught no kind of religious belief whatsoever, I struck up against the stark problem of death.

I passed some atrocious nights, feeling my stomach clawed to shreds and my breathing half throttled by the anguish of nothingness, the ‘no more of anything’.

One night when I was about eleven, relaxing my entire body, I calmed the terror and revulsion of my organism before the unknown, and a new feeling came alive in me; hope, and a foretaste of the imperishable. But I wanted more, I wanted a certainty. At fifteen or sixteen I began my experiments, a search without direction or system.

Finding no way to experiment directly on death–on my death–I tried to study my sleep, assuming an analogy between the two.

By various devices I attempted to enter sleep in a waking state. The undertaking is not so utterly absurd as it sounds, but in certain respects it is perilous. I could not go very far with it; my own organism gave me some serious warnings of the risks I was running. One day, however, I decided to tackle the problem of death itself.

I would put my body into a state approaching as close as possible that of physiological death, and still concentrate all my attention on remaining conscious and registering everything that might take place.

I had in my possession some carbon tetrachloride, which I used to kill beetles for my collection. Knowing this substance belongs to the same chemical family as chloroform (it is even more toxic), I thought I could regulate its action very simply and easily: the moment I began to lose consciousness, my hand would fall from my nostrils carrying with it the handkerchief moistened with the volatile fluid. Later on I repeated the experiment–in the presence of friends, who could have given me help had I needed it.

The result was always exactly the same; that is, it exceeded and even overwhelmed my expectations by bursting the limits of the possible and by projecting me brutally into another world.

First came the ordinary phenomena of asphyxiation: arterial palpitation, buzzings, sounds of heavy pumping in the temples, painful repercussions from the tiniest exterior noises, flickering lights. Then, the distinct feeling: ‘This is getting serious. The game is up,’ followed by a swift recapitulation of my life up to that moment. If I felt any slight anxiety, it remained indistinguishable from a bodily discomfort that did not affect my mind.

And my mind kept repeating to itself: ‘Careful, don’t doze off. This is just the time to keep your eyes open.’

The luminous spots that danced in front of my eyes soon filled the whole of space, which echoed with the beat of my blood–sound and light overflowing space and fusing in a single rhythm. By this time I was no longer capable of speech, even of interior speech; my mind travelled too rapidly to carry any words along with it.

I realized, in a sudden illumination, that I still had control of the hand which held the handkerchief, that I still accurately perceived the position of my body, and that I could hear and understand words uttered nearby–but that objects, words, and meanings of words had lost any significance whatsoever. It was a little like having repeated a word over and over until it shrivels and dies in your mouth: you still know what the word ‘table’ means, for instance, you could use it correctly, but it no longer truly evokes its object.

In the same way everything that made up ‘the world’ for me in my ordinary state was still there, but I felt as if it had been drained of its substance. It was nothing more than a phantasmagoria–empty, absurd, clearly outlined, and necessary all at once.

This ‘world’ lost all reality because I had abruptly entered another world, infinitely more real, an instantaneous and intense world of eternity, a concentrated flame of reality and evidence into which I had cast myself like a butterfly drawn to a lighted candle.

Then, at that moment, comes the certainty; speech must now be content to wheel in circles around the bare fact.

Certainty of what?

Words are heavy and slow, words are too shapeless or too rigid. With these wretched words I can put together only approximate statements, whereas my certainty is for me the archetype of precision. In my ordinary state of mind, all that remains thinkable and formulable of this experiment reduces to one affirmation on which I would stake my life: I feel the certainty of the existence of something else, a beyond, another world, or another form of knowledge.

In the moment just described, I knew directly, I experienced that beyond in its very reality.

It is important to repeat that in that new state I perceived and perfectly comprehended the ordinary state of being, the latter being contained within the former, as waking consciousness contains our unconscious dreams, and not the reverse. This last irreversible relation proves the superiority (in the scale of reality or consciousness) of the first state over the second.

I told myself clearly: in a little while I shall return to the so-called ‘normal state’, and perhaps the memory of this fearful revelation will cloud over; but it is in this moment that I see the truth.

All this came to me without words; meanwhile I was pierced by an even more commanding thought. With a swiftness approaching the instantaneous, it thought itself so to speak in my very substance: for all eternity I was trapped, hurled faster and faster toward ever imminent annihilation through the terrible mechanism of the Law that rejected me.

‘That’s what it is. So that’s what it is.’

My mind found no other reaction. Under the threat of something worse, I had to follow the movement.

It took a tremendous effort, which became more and more difficult, but I was obliged to make that effort, until the moment when, letting go, I doubtless fell into a brief spell of unconsciousness. My hand dropped the handkerchief, I breathed air, and for the rest of the day I remained dazed and stupefied–with a violent headache.•

“Nothing in your education or experience can have prepared you for this film.”



John McAfee, who’s never been charged with murder, is a Philip K. Dick character of his own making, sped-up and paranoid. The erstwhile anti-virus emperor says he’s returning to the field of security software, but who the fuck knows. McAfee’s apparently found financial backing, but he seems better suited to manning a gunboat in the proximity of a banana republic. From Richard Waters in the Financial Times:

John McAfee, the controversial former software boss, has made a move to win back a leading role in the security software industry that he helped to pioneer, taking the helm of a tiny public investment vehicle and declaring his aim of turning it into “a successful and major force in the space”.

Mr McAfee, creator of the widely used antivirus software that bears his name, sold his first company to Intel for $7.6bn six years ago, in one of the biggest software transactions ever. But he made international headlines four years ago when he went on the run after becoming the focus of a manhunt in Belize following the murder of his neighbour there. He fled over the border into Guatemala, before being deported back to the US at his request. He was never arrested or charged in the murder.

Mr McAfee’s erratic behaviour and claims that he was afraid for his safety if he was arrested by the local police prompted the Belize prime minister to suggest he was “bonkers.” He has since maintained an outspoken public stance on tech policy issues, including putting himself forward as an independent candidate in this year’s US presidential elections and denouncing the FBI’s attempt to force Apple to grant access to one of its iPhones this year as “the beginning of the end of the US as a world power.”•


Donald Trump, the dunce cap on America’s pointy head, has been enabled by traditional media, new media and a besieged American middle class, as he’s attempted to become our first Twitter President. Mostly, though, I think he’s been abetted by the large minority of racist citizens who want someone to blame, especially in the wake of our first African-American President and recent myriad examples of social progress.

Trump is no mastermind. He seems to have gotten into the race impetuously to burnish his idiotic brand–you know, Mussolini as an insult comic. His main asset in this campaign season has been an utter shamelessness, a willingness to stoop as low as he needs to go. Whether that’s a prescription for general-election victory, we’ll soon see.

It’s true that in a more centralized media and political climate, the hideous hotelier would have likely been squeezed from the process by gatekeepers, but the more unfettered new normal only gave him opportunity, not the nomination. I don’t think dumb tweets and smartphones made the troll a realistic contender for king. It was we the people.

In a pair of pieces, Nick Bilton of Vanity Fair and Rory Cellan-Jones of the BBC see technology as the main cause for the rise of Trump, if in different ways. Excerpts from each follow.

From Bilton:

I’ve heard people say that if it wasn’t for CNN, FOX, and a dozen other television outlets that have “handed Trump the microphone,” there would be no Trump. But with all due respect to the television media, they’re just not that important anymore. Perhaps his popularity is a result of a broken political system, others suggest. But let’s be realistic, people have always believed the system is broken. (It’s that same broken system, it should be noted, that has helped create many of the disruptive unicorns in Silicon Valley.)

The only thing that’s really changed between Trump’s other attempts to run for office and now is the advent of social media. And Trump, who has spent his life offending people, knows exactly how to bend it to his will. Just look at what happens if someone says something even remotely politically incorrect today: the online immune system, known famously as a Twitter mob, sets in to hold that person accountable. These mobs demand results, like seeing someone fired, making them shamefully apologize, or even seeing their life torn to shreds.

Yet someone like Donald Trump doesn’t get fired, or apologize, which only makes the mobs grow more fervent and voluble. And the louder they get, the more the news media covers the backlash. The more the TV shows talk about him, the more we all talk about him. If you want to truly comprehend why Trump is so popular, you just have to behold what people are saying in 140 characters or less. It’s the same thing Kim Kardashian and Kanye West, and anyone else who wants attention, understand. If we’re talking about them, they’re winning the war for attention. No one knows this better than Trump. Prod the social-media tiger, you get attention: say Mexicans are rapists, make fun of the disabled, pick a fight with the Pope, attack women, call the media dumb, and social media shines a big, bright spotlight on Donald.

Arianna Huffington may have once famously decided to cover Trump in the entertainment section of the Huffington Post, but the reality is we now live in a world where there is no line between entertainment, politics, and media. And I know Silicon Valley knows this, because they are the ones that helped eviscerate it.•

From Cellan-Jones:

Over the past year we have seen plenty of warnings about the potential impact of robots and artificial intelligence on jobs.

Now one of the leading prophets of this robot revolution has told the BBC he is already seeing another side-effect of automation – the rise of politicians such as Donald Trump and the Democratic presidential hopeful Bernie Sanders.

Martin Ford’s Rise of the Robots won all sorts of awards for its compelling account of a wave of automation sweeping through every area of our lives, posing a serious threat to our economic well-being. But there has also been plenty of pushback from economists who reckon his conclusion is wrong and that, as in previous industrial revolutions, the overall impact on jobs will be positive.

In London to speak at a conference on robots held by the Bank of America, he told me that he didn’t think this latest technology upheaval would be as benign as in the past: “The thing is that this time machines are now in some sense beginning to think. And what that means is we’re seeing machines encroach on the kind of capabilities that set humans apart.”

He sees the robots moving up the value chain, threatening any jobs which involve humans sitting in front of screens dealing with information – the kind of work which we used to think offered security to middle-class people with average skills.•



Whether we’re talking about baseball umpires or long-haul truckers, I’m not so concerned about machines ruining the “romance” of traditional human endeavor, but I am very worried about technological unemployment destabilizing Labor. Perhaps history will repeat itself and more and better jobs will replace the ones likely to be disappeared in the coming decades, but even just the perfection of driverless cars will create a huge pothole in society. The Gig Economy is a diminishing of the workforce, and even those positions are vulnerable to automation. Maybe things will work themselves out, but it would be far better if we’re prepared for a worst-case scenario.

Excerpts from two articles follow: 1) Mark Karlin’s Truthout interview with Robert McChesney, co-author of People Get Ready, and 2) a Manu Saadia Tech Insider piece, “Robots Could Be a Big Problem for the Third World.”

From Truthout:


Mark Karlin:

Let me start with the grand question raised by your book written with John Nichols. I think it is safe to say that the conventional thinking of the “wisdom class” for decades has been that the more advanced technology becomes (including robots and automated means of production, service and communication), the more beneficial it will be for humans. What is the basic challenge to that concept at the center of the new book by you and John?

Robert W. McChesney:

The conventional wisdom, embraced and propagated by many economists, has been that while new technologies will disrupt and eliminate many jobs and entire industries, they would also create new industries, which would eventually have as many or more new jobs, and that these jobs would generally be much better than the jobs that had been lost to technology.

And that has been more or less true for much of the history of industrial capitalism. Vastly fewer people were needed to work on farms by the 20th century and many ended up in factories; fewer are now needed in factories and they end up in offices. The new jobs tended to be better than the old jobs.

But we argue the idea that technology will create a new job to replace the one it has destroyed is no longer operative. Nor is the idea that the new job will be better than the old job, in terms of compensation and benefits. Capitalism is in a period of prolonged and arguably indefinite stagnation.•

From Tech Insider:

The danger lies in the transition to an economy where the cost of making stuff—industry—has become more or less like agriculture today (with very few people employed and a very low share of GDP). With appropriate policies in place, developed countries can probably manage that transition. They have in the past, and therefore it is safe to assume they most likely will in the future. It does not mean that we will not experience dislocations and conflicts, but we do have old and established institutions—government, the press, the public sphere— that allow us to resolve such conflicts over time for the greater benefit of all.

The real challenge will be beyond our comfortable borders, in the developing world. In both nineteenth-century Europe and twentieth-century Asia, national development has followed a similar pattern. People moved from the countryside to urban centers to take advantage of higher-paying jobs in factories and services. Again, South Korea offers a startling, fast-forward example of that: it underwent a complete transformation from a poor, rural country to a postindustrial, hyperurban powerhouse in less than fifty years. It was so rapid that most visible traces of the past have been erased and forgotten. The national museum in Seoul has a life-size reconstruction of a Seoul street in the 1950s, just like we have over here, but for the colonial era. And imagine this: China went down that very same path at an even faster clip. Half a billion impoverished people turned into middle-class consumers in three decades.

However, this may not happen again if manufacturing is reduced to the status of agriculture, a highly rationalized activity (read: employing very few people). The historically proven path to economic growth and prosperity taken by Korea and China might no longer be available to the next countries.•


Babe Ruth Slides Home

Count me among those wholeheartedly ready for robots to replace home-plate baseball umpires. Ball-and-strike calls are wrong about 10% of the time even with the best of umpires, and that leaves an awful lot of wiggle room for not only honest fallibility but also chicanery. To err is human, I know, but perhaps so is coming up with solutions to reduce incompetence? Experiments with robot umps, which began as far back as 1950, should be taken up again today in the minor leagues. Then the buckets of bolts should be promoted.

Jason Gay, a talented writer for the Wall Street Journal, isn’t so sure. He believes something will be lost as something’s gained in the transfer of duties from carbon to silicon, not only because machines also malfunction (though less often, most likely), but also because of bigger-picture issues. An excerpt that pivots off of David Ortiz’s disputed strikeout at Yankee Stadium this weekend:

Disputed calls like that invariably provoke chatter about a surprisingly doable proposal: robot umps. Precise camera tech to pinpoint balls and strikes has existed for years. Even if the pitch tech at Yankee Stadium showed the calls against Ortiz were not so egregious, the suggestion is clear: Had a “robo-ump” been on ball-and-strikes duty, Big Papi may have marched to first base and tied a game the Red Sox instead wound up losing.

Seems reasonable, right? Whenever possible, shouldn’t tech be used to make the proper call? There are loads of examples of technology improving accuracy in sports—Hawk-Eye line-calling in tennis, for one, is crisp, quick and enjoyably theatrical (fans clap in anticipation!). The NFL, meanwhile, uses an oddball system in which an official crawls under Dracula’s cape to review replays. It mostly works, even if it often takes longer than a bus trip to Maine, and no one on earth seems to know what a catch is in the NFL anymore.

That’s a good reminder that technology isn’t a guaranteed savior. Not every play is reviewable. Machines falter. Software glitches. Some inevitabilities in life are utterly resistant to modernization, like making the bed, or LaGuardia Airport.•



Although lysergic acid diethylamide was, early in its discovery phase, considered a possible treatment for serious mental-health issues, it came to be seen during the ’60s, through the urging of Richard Alpert and Dr. Timothy Leary and others, as a societal powerwash of sorts, a tonic to radically remove the corrupting, conforming influences of gods and governments, a way to awaken the soporific, a means of cleansing the doors of perception.

Revolutions are messy, however, and freakouts and flying teenagers did not stamp a smiley face on the “medicine.” It was just plain dangerous to unloose such unregulated experimentation into the world. Even Leary himself, who proselytized at campuses and correctional facilities alike, thought all along that the drug was a short-term panacea with diminishing returns, that soon something else would have to wake up the “beloved robots”–perhaps it would be computer software. Serious academic interest in the drug unsurprisingly idled.

Decades later, there are fewer flashbacks of the dosing and overdosing, and LSD is gaining currency again as a legitimate means of medical treatment. But will it ever shake off its bad reputation? And can its very real dangers be sufficiently neutralized?

From Jon Kelly at the BBC:

Mention LSD and you might think of the 1960s counterculture – kaftanned hippies in San Francisco, or the more adventurous end of the Beatles’ back catalogue, or the tragedy of Pink Floyd singer Syd Barrett losing his grip on reality.

But for the first time, researchers say they have visualised how LSD alters the way the brain works.

A team at Imperial College London says they found it broke down barriers between areas that control functions like vision, hearing and movement. The study was with a small group – 20 subjects – but the researchers say it could lead to a revolution in the way addiction, anxiety and depression are treated.

For the past decade and a half, academics around the world have been studying whether psychedelic substances that cause hallucinations, changes in perception and mind-altering states could have medical benefits.

But this isn’t the first time we’ve been here. Back in the 1960s there were high hopes for the therapeutic potential of psychedelics, too. Four major scientific conferences were held on the subject. Thousands of papers were published.

But soon enough fears over the recreational use of LSD – or lysergic acid diethylamide, to give its full title – ensured research all but ground to a halt.•




From the August 11, 1925 Brooklyn Daily Eagle:




MOOCs mean more students, remote ones, who have lots of questions. To deal with the burden, Georgia Tech professor Ashok Goel, when offering an online Artificial Intelligence course, insinuated a robot Teaching Assistant powered by Watson into the proceedings. Most of the pupils never grew suspicious during their Q&As with the A.I. T.A., even a student who’d previously helped build Watson hardware. Does this demonstrate machine intelligence improving or humans becoming too passive in accepting what’s presented to them? Both, probably. It’s a dual lesson in technology and psychology.

From Melissa Korn at the Wall Street Journal:

Since January, “Jill,” as she was known to the artificial-intelligence class, had been helping graduate students design programs that allow computers to solve certain problems, like choosing an image to complete a logical sequence.

“She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin.

Ms. Watson—so named because she’s powered by International Business Machines Corp.’s Watson analytics system—wrote things like “Yep!” and “we’d love to,” speaking on behalf of her fellow TAs, in the online forum where students discussed coursework and submitted projects.

“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.

Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D. 

Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.

“Just when I wanted to nominate Jill Watson as an outstanding TA,” said Petr Bela.•



If our species, or some version of it, persists long enough, conscious machines will be possible–probable, even. We’ll ultimately pull apart the vast mystery of the human brain, and unlocking those secrets will set us on a path to making machines that are SMART, not just smart. It’s worth pursuing a Big Data workaround, a shortcut to superintelligence, but that seems less of a sure thing.

In an Edge interview, psychologist Gary Marcus is concerned that the brute force of Big Data may be leading us astray in the search for Artificial Intelligence. If you recall, in late January the NYU psychologist argued the DeepMind AlphaGo system was overhyped, but by March he was proven wrong. His other questions about our ability to widely apply such an AI remain unsettled, however. Marcus feels particularly strongly that driverless cars will be hampered by real-world uncertainty.

From Edge:

If you’re talking about having a robot in your home—I’m still dreaming of Rosie the robot that’s going to take care of my domestic situation—you can’t afford for it to make mistakes. The DeepMind system is very much about trial and error on an enormous scale. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once. You can’t get the same scale of data. If you’re talking about a robot in a real-world environment, you need for it to learn things quickly from small amounts of data.                                 

The other thing is that in the Atari system, it might not be immediately obvious, but you have eighteen choices at any given moment. There are eight directions in which you can move your joystick or not move it, and you multiply that by either you press the fire button or you don’t. You get eighteen choices. In the real world, you often have infinite choices, or at least a vast number of choices. If you have only eighteen, you can explore: If I do this one, then I do this one, then I do this one—what’s my score? How about if I change this one? How about if I change that one?                                 

If you’re talking about a robot that could go anywhere in the room or lift anything or carry anything or press any button, you just can’t do the same brute force search of what’s going on. We lack for techniques that are able to do better than just these kinds of brute force things. All of this apparent progress is being driven by the ability to use brute force techniques on a scale we’ve never used before. That originally drove Deep Blue for chess and the Atari game system stuff. It’s driven most of what people are excited about. At the same time, it’s not extendable to the real world if you’re talking about domestic robots in the home or driving in the streets.•      
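Marcus’ arithmetic is easy to sketch in a few lines of Python. The Atari count comes straight from his example (9 joystick positions times 2 fire-button states); the robot figure below is my own made-up discretization, purely to illustrate why brute force stops scaling:

```python
# Why brute-force search works for Atari but not for a household robot:
# the number of action sequences grows exponentially in the action count.

def search_space(actions_per_step, horizon):
    """Distinct action sequences a brute-force search must consider."""
    return actions_per_step ** horizon

atari_actions = 9 * 2   # 8 joystick directions + neutral, times fire on/off = 18
robot_actions = 1000    # hypothetical coarse discretization of a real-world robot

for horizon in (5, 10):
    print(f"horizon {horizon}: "
          f"Atari {search_space(atari_actions, horizon):.2e} sequences, "
          f"robot {search_space(robot_actions, horizon):.2e} sequences")
```

Even a crude thousand-action robot facing a ten-step plan confronts 10^30 possibilities, which is the gap between trial-and-error at DeepMind scale and a robot that must not put the cat in the dishwasher even once.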



Would you like to survive if the sun dies? (When is actually more like it.) I would. Of course, I’ll be dead long before then, but in theory, anyway.

The sun will make Earth uninhabitable long before it completely burns out. Is it possible that our tools and technology will be so advanced in a couple hundred million years that we can “maintain” the sun or construct other ones as need be? Anything’s possible given enough time, I suppose, but other workarounds are likely more realistic.

In “How to Survive Doomsday,” an excellent Nautilus essay, Michael Hahn and Daniel Wolf Savin look at the daunting task of outlasting our star. An excerpt:

In a paltry 500 million years or so, no humans will remain on the surface of the Earth—at least, not outside of some hypothetical controlled environment. And things get worse from there. After the atmospheric CO2 is gone and no longer able to regulate Earth’s surface temperature, things will start to get very hot. In about a billion years, the average surface temperature will increase to above 45 degrees Celsius from the current 17 degrees Celsius. Important biochemical processes turn off at temperatures above 45 degrees Celsius, leaving most of the planetary surface uninhabitable. Animal life will need to migrate to the cooler poles to survive; but by 1.5 billion years from now, even the poles will be too hot. Not even cockroaches will survive.

Now, there are a few things we can do to stay our execution. We could, for example, move the Earth’s orbit. If we fired a 100 km wide asteroid on an elliptical orbit that passed close to the Earth every 5,000 years, we could slowly gravitationally nudge the planet’s orbit farther away from the sun, provided that we don’t accidentally hit the Earth. As a less precarious alternative, we could build a giant solar sail behind the Earth with enough mass to drag the planet away from the sun. Such a sail acts like a kite, where the photons from the sun are the wind and the gravity between the solar sail and the Earth acts as the string. The sail would need to have a diameter 20 times that of the Earth but a mass only about 2 percent that of Mt. Everest, a mere trillion metric tons. Strategies like these could, in principle, keep the Earth in the habitable zone until the sun expands into a red giant. (If some other civilization has already built such a large solar sail, we could detect it using the same photometric techniques that are currently used to find exoplanets.)

Another survival choice is more complicated—or simpler, depending on your perspective. The future Earth will actually be a pleasant home for non-biological life—better than it is today. For one thing, the brighter sun will provide more abundant solar power. The space weather will also be nicer. The sun is a dynamo spinning on its axis about every 24 days, generating giant magnetic storms that disrupt communication networks, overload power grids, and damage orbiting satellites. Robots today need fear that their circuits could be fried by a solar storm, such as the large solar storm in 1989 that caused a power failure across most of Quebec. Currently, such storms are estimated to occur about once or twice per century. But as the sun ages, this rotation slows down and the magnetic storms will abate.

Given these facts, we humans might simply decide to upload ourselves into machines, which would be relatively comfortable on the dystopic future Earth.•
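For the curious, the essay’s sail can be checked on the back of an envelope. The figures below are my own rough constants, not the authors’, and assume a perfectly reflecting sail at Earth’s distance from the sun:

```python
# Back-of-envelope check of the solar-sail claim: a sail 20 Earth-diameters
# across, gravitationally tethered to Earth, dragged outward by sunlight.
import math

SOLAR_CONSTANT = 1361.0    # W/m^2 at 1 AU
C = 3.0e8                  # speed of light, m/s
EARTH_DIAMETER = 1.2742e7  # m
EARTH_MASS = 5.97e24       # kg

sail_diameter = 20 * EARTH_DIAMETER             # "a diameter 20 times that of the Earth"
sail_area = math.pi * (sail_diameter / 2) ** 2  # m^2

pressure = 2 * SOLAR_CONSTANT / C  # radiation pressure on a perfect reflector, ~9e-6 Pa
force = pressure * sail_area       # N, transmitted to the planet via gravity (the "string")
accel = force / EARTH_MASS         # m/s^2

gigayear = 1e9 * 3.156e7           # seconds in a billion years
delta_v = accel * gigayear         # velocity change accumulated per billion years, m/s

print(f"force ~{force:.1e} N, accel ~{accel:.1e} m/s^2, dv ~{delta_v:.0f} m/s per Gyr")
```

The result is a push of a few kilometers per second per billion years against Earth’s roughly 30 km/s orbital speed: tiny, but on the timescales the essay is working with, plausibly enough for a slow outward spiral.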



While robots and AI present grand challenges for society, you wouldn’t want to live in a nation that misses out on this wave. Just look at countries left behind by the Industrial Revolution and all the good that era delivered.

Yes, the shift from an agrarian economy to a machine-based one is largely responsible for our climate-induced peril, but technology has also given us the tools to neutralize the threats we’ve created and others we haven’t. The failure to address global warming is really now more a political breakdown than anything else. 

Superintelligent machines eradicating us in the short term is as likely as immortality soon arriving. We should already be considering these existential risks, though they will ultimately have to be addressed by our distant descendants who will better understand them. Let’s hope they choose wisely.

My main objection to the digitalization of the culture is that we’re all being placed inside a machine that measures and maps us, one that will only grow more precise and exacting, and there will be no opt-out switch. I really have no answer for that one.

In a smart Financial Times opinion piece, Andrew McAfee, co-author of The Second Machine Age, argues that shaking off the robot’s embrace makes little sense. An excerpt:

The second and much more important reason that robots are not our foes is that they make us richer overall. By increasing our capabilities and productivity, they create more bounty and abundance. We like to communicate, learn, entertain ourselves, travel and consume goods and services. Technological progress lets us do more of all of these things for a given amount of money (or, increasingly these days, for no money at all), and at higher levels of quality.

It is true that the way most of us gain access to much of this bounty is by getting paid for our labour. It is also true that this “labour bargain” is becoming a tougher one for more and more people as their skills become less valuable, because of both globalisation and technological progress. We need to figure out how to deal with this situation. This will be one of the most important policy arenas over the coming decades.

But we also need to keep in mind that this is a situation brought about by the fact that technology is letting us do and create much more with much less drudgery and toil. If we cannot figure out how to deal with this, and how to make sure that the fruits of robots’ labour are shared in a way that reflects our shared values and protects our most vulnerable, then shame on us. In that case, we will have met the real foe, and it will be us.•



Speaking of automata through the ages, the article embedded below from the July 31, 1887 Brooklyn Daily Eagle surveys some highlights from the field, with special attention paid to 18th-century French inventor Jacques de Vaucanson, who breathed “life” into the Digesting Duck (pictured above), among other locomotion machines.




When you love yourself, even mirrors aren’t enough.

Homo sapiens have always been fascinated by looking into the glass, so much so that attempts to create machine versions of ourselves snake back to ancient times. Machines surpassing us physically–and perhaps eventually emotionally–seem to have sneaked up on us, but it’s been a long time coming.

In “Frolicsome Engines: The Long Prehistory of Artificial Intelligence,” an excellent Public Domain Review article by Jessica Riskin, the Stanford historian writes the backstory of not only humanoid automata but efforts at all manner of simulacra. The opening:

How old are the fields of robotics and artificial intelligence? Many might trace their origins to the mid-twentieth century, and the work of people such as Alan Turing, who wrote about the possibility of machine intelligence in the ‘40s and ‘50s, or the MIT engineer Norbert Wiener, a founder of cybernetics. But these fields have prehistories — traditions of machines that imitate living and intelligent processes — stretching back centuries and, depending how you count, even millennia.

The word “robot” made its first appearance in a 1920 play by the Czech writer Karel Čapek entitled R.U.R., for Rossum’s Universal Robots. Deriving his neologism from the Czech word “robota,” meaning “drudgery” or “servitude,” Čapek used “robot” to refer to a race of artificial humans who replace human workers in a futurist dystopia. (In fact, the artificial humans in the play are more like clones than what we would consider robots, grown in vats rather than built from parts.)

There was, however, an earlier word for artificial humans and animals, “automaton”, stemming from Greek roots meaning “self-moving”. This etymology was in keeping with Aristotle’s definition of living beings as those things that could move themselves at will. Self-moving machines were inanimate objects that seemed to borrow the defining feature of living creatures: self-motion. The first-century-AD engineer Hero of Alexandria described lots of automata. Many involved elaborate networks of siphons that activated various actions as the water passed through them, especially figures of birds drinking, fluttering, and chirping.•

