Speaking of mind-altering substances: as a teenager, the French Surrealist writer René Daumal blasted his brain with the carbon tetrachloride he normally used to kill beetles for his insect collection. Not a good idea. By the time he was 36, he’d joined the bugs in the great beyond, no doubt in part because of his amateur chemistry experiments.

Daumal is known primarily today for the novel Mount Analogue: A Tale of Non-Euclidean and Symbolically Authentic Mountaineering Adventures, which Alejandro Jodorowsky used as the basis for his crazy-as-fuck 1973 film, The Holy Mountain. Daumal’s recollection of his auto-dosing, “A Fundamental Experiment,” was reprinted in a 1965 Psychedelic Review. The opening:

The simple fact of the matter is beyond telling. In the 18 years since it happened, I have often tried to put it into words. Now, once and for all, I should like to employ every resource of language I know in giving an account of at least the outward and inward circumstances. This ‘fact’ consists in a certainty I acquired by accident at the age of sixteen or seventeen; ever since then, the memory of it has directed the best part of me toward seeking a means of finding it again, and for good.

My memories of childhood and adolescence are deeply marked by a series of attempts to experience the beyond, and those random attempts brought me to the ultimate experiment, the fundamental experience of which I speak.

At about the age of six, having been taught no kind of religious belief whatsoever, I struck up against the stark problem of death.

I passed some atrocious nights, feeling my stomach clawed to shreds and my breathing half throttled by the anguish of nothingness, the ‘no more of anything’.

One night when I was about eleven, relaxing my entire body, I calmed the terror and revulsion of my organism before the unknown, and a new feeling came alive in me; hope, and a foretaste of the imperishable. But I wanted more, I wanted a certainty. At fifteen or sixteen I began my experiments, a search without direction or system.

Finding no way to experiment directly on death – on my death – I tried to study my sleep, assuming an analogy between the two.

By various devices I attempted to enter sleep in a waking state. The undertaking is not so utterly absurd as it sounds, but in certain respects it is perilous. I could not go very far with it; my own organism gave me some serious warnings of the risks I was running. One day, however, I decided to tackle the problem of death itself.

I would put my body into a state approaching as close as possible that of physiological death, and still concentrate all my attention on remaining conscious and registering everything that might take place.

I had in my possession some carbon tetrachloride, which I used to kill beetles for my collection. Knowing this substance belongs to the same chemical family as chloroform (it is even more toxic), I thought I could regulate its action very simply and easily: the moment I began to lose consciousness, my hand would fall from my nostrils carrying with it the handkerchief moistened with the volatile fluid. Later on I repeated the experiment – in the presence of friends, who could have given me help had I needed it.

The result was always exactly the same; that is, it exceeded and even overwhelmed my expectations by bursting the limits of the possible and by projecting me brutally into another world.

First came the ordinary phenomena of asphyxiation: arterial palpitation, buzzings, sounds of heavy pumping in the temples, painful repercussions from the tiniest exterior noises, flickering lights. Then, the distinct feeling: ‘This is getting serious. The game is up,’ followed by a swift recapitulation of my life up to that moment. If I felt any slight anxiety, it remained indistinguishable from a bodily discomfort that did not affect my mind.

And my mind kept repeating to itself: ‘Careful, don’t doze off. This is just the time to keep your eyes open.’

The luminous spots that danced in front of my eyes soon filled the whole of space, which echoed with the beat of my blood – sound and light overflowing space and fusing in a single rhythm. By this time I was no longer capable of speech, even of interior speech; my mind travelled too rapidly to carry any words along with it.

I realized, in a sudden illumination, that I still had control of the hand which held the handkerchief, that I still accurately perceived the position of my body, and that I could hear and understand words uttered nearby–but that objects, words, and meanings of words had lost any significance whatsoever. It was a little like having repeated a word over and over until it shrivels and dies in your mouth: you still know what the word ‘table’ means, for instance, you could use it correctly, but it no longer truly evokes its object.

In the same way everything that made up ‘the world’ for me in my ordinary state was still there, but I felt as if it had been drained of its substance. It was nothing more than a phantasmagoria – empty, absurd, clearly outlined, and necessary all at once.

This ‘world’ lost all reality because I had abruptly entered another world, infinitely more real, an instantaneous and intense world of eternity, a concentrated flame of reality and evidence into which I had cast myself like a butterfly drawn to a lighted candle.

Then, at that moment, comes the certainty; speech must now be content to wheel in circles around the bare fact.

Certainty of what?

Words are heavy and slow, words are too shapeless or too rigid. With these wretched words I can put together only approximate statements, whereas my certainty is for me the archetype of precision. In my ordinary state of mind, all that remains thinkable and formulable of this experiment reduces to one affirmation on which I would stake my life: I feel the certainty of the existence of something else, a beyond, another world, or another form of knowledge.

In the moment just described, I knew directly, I experienced that beyond in its very reality.

It is important to repeat that in that new state I perceived and perfectly comprehended the ordinary state of being, the latter being contained within the former, as waking consciousness contains our unconscious dreams, and not the reverse. This last irreversible relation proves the superiority (in the scale of reality or consciousness) of the first state over the second.

I told myself clearly: in a little while I shall return to the so-called ‘normal state’, and perhaps the memory of this fearful revelation will cloud over; but it is in this moment that I see the truth.

All this came to me without words; meanwhile I was pierced by an even more commanding thought. With a swiftness approaching the instantaneous, it thought itself so to speak in my very substance: for all eternity I was trapped, hurled faster and faster toward ever imminent annihilation through the terrible mechanism of the Law that rejected me.

‘That’s what it is. So that’s what it is.’

My mind found no other reaction. Under the threat of something worse, I had to follow the movement.

It took a tremendous effort, which became more and more difficult, but I was obliged to make that effort, until the moment when, letting go, I doubtless fell into a brief spell of unconsciousness. My hand dropped the handkerchief, I breathed air, and for the rest of the day I remained dazed and stupefied, with a violent headache.•

“Nothing in your education or experience can have prepared you for this film.”



John McAfee, who’s never been charged with murder, is a Philip K. Dick character of his own making, speeded-up and paranoid. The erstwhile anti-virus emperor says he’s returning to the field of security software, but who the fuck knows. McAfee’s apparently found financial backing, but he seems better suited to manning a gunboat in the proximity of a banana republic. From Richard Waters in the Financial Times:

John McAfee, the controversial former software boss, has made a move to win back a leading role in the security software industry that he helped to pioneer, taking the helm of a tiny public investment vehicle and declaring his aim of turning it into “a successful and major force in the space”.

Mr McAfee, creator of the widely used antivirus software that bears his name, sold his first company to Intel for $7.6bn six years ago, in one of the biggest software transactions ever. But he made international headlines four years ago when he went on the run after becoming the focus of a manhunt in Belize following the murder of his neighbour there. He fled over the border into Guatemala, before being deported back to the US at his request. He was never arrested or charged in the murder.

Mr McAfee’s erratic behaviour and claims that he was afraid for his safety if he was arrested by the local police prompted the Belize prime minister to suggest he was “bonkers.” He has since maintained an outspoken public stance on tech policy issues, including putting himself forward as an independent candidate in this year’s US presidential elections and denouncing the FBI’s attempt to force Apple to grant access to one of its iPhones this year as “the beginning of the end of the US as a world power.”•


Donald Trump, the dunce cap on America’s pointy head, has been enabled by traditional media, new media and a besieged American middle class, as he’s attempted to become our first Twitter President. Mostly, though, I think he’s been abetted by the large minority of racist citizens who want someone to blame, especially in the wake of our first African-American President and recent myriad examples of social progress.

Trump is no mastermind. He seems to have gotten into the race impetuously to burnish his idiotic brand–you know, Mussolini as an insult comic. His main asset in this campaign season has been an utter shamelessness, a willingness to stoop as low as he needs to go. Whether that’s a prescription for general-election victory, we’ll soon see.

It’s true that in a more centralized media and political climate, the hideous hotelier would have likely been squeezed from the process by gatekeepers, but the more unfettered new normal only gave him opportunity, not the nomination. I don’t think dumb tweets and smartphones made the troll a realistic contender for king. It was we the people.

In a pair of pieces, Nick Bilton of Vanity Fair and Rory Cellan-Jones of the BBC see technology as the main cause for the rise of Trump, if in different ways. Excerpts from each follow.

From Bilton:

I’ve heard people say that if it wasn’t for CNN, FOX, and a dozen other television outlets that have “handed Trump the microphone,” there would be no Trump. But with all due respect to the television media, they’re just not that important anymore. Perhaps his popularity is a result of a broken political system, others suggest. But let’s be realistic, people have always believed the system is broken. (It’s that same broken system, it should be noted, that has helped create many of the disruptive unicorns in Silicon Valley.)

The only thing that’s really changed between Trump’s other attempts to run for office and now is the advent of social media. And Trump, who has spent his life offending people, knows exactly how to bend it to his will. Just look at what happens if someone says something even remotely politically incorrect today: the online immune system, known famously as a Twitter mob, sets in to hold that person accountable. These mobs demand results, like seeing someone fired, making them shamefully apologize, or even seeing their life torn to shreds.

Yet someone like Donald Trump doesn’t get fired, or apologize, which only makes the mobs grow more fervent and voluble. And the louder they get, the more the news media covers the backlash. The more the TV shows talk about him, the more we all talk about him. If you want to truly comprehend why Trump is so popular, you just have to behold what people are saying in 140 characters or less. It’s the same thing Kim Kardashian and Kanye West, and anyone else who wants attention, understand. If we’re talking about them, they’re winning the war for attention. No one knows this better than Trump. Prod the social-media tiger, you get attention: say Mexicans are rapists, make fun of the disabled, pick a fight with the Pope, attack women, call the media dumb, and social media shines a big, bright spotlight on Donald.

Arianna Huffington may have once famously decided to cover Trump in the entertainment section of the Huffington Post, but the reality is we now live in a world where there is no line between entertainment, politics, and media. And I know Silicon Valley knows this, because they are the ones that helped eviscerate it.•

From Cellan-Jones:

Over the past year we have seen plenty of warnings about the potential impact of robots and artificial intelligence on jobs.

Now one of the leading prophets of this robot revolution has told the BBC he is already seeing another side-effect of automation – the rise of politicians such as Donald Trump and the Democratic presidential hopeful Bernie Sanders.

Martin Ford’s Rise of the Robots won all sorts of awards for its compelling account of a wave of automation sweeping through every area of our lives, posing a serious threat to our economic well-being. But there has also been plenty of pushback from economists who reckon his conclusion is wrong and that, as in previous industrial revolutions, the overall impact on jobs will be positive.

In London to speak at a conference on robots held by the Bank of America, he told me that he didn’t think this latest technology upheaval would be as benign as in the past: “The thing is that this time machines are now in some sense beginning to think. And what that means is we’re seeing machines encroach on the kind of capabilities that set humans apart.”

He sees the robots moving up the value chain, threatening any jobs which involve humans sitting in front of screens dealing with information – the kind of work which we used to think offered security to middle-class people with average skills.•



Whether we’re talking about baseball umpires or long-haul truckers, I’m not so concerned about machines ruining the “romance” of traditional human endeavor, but I am very worried about technological unemployment destabilizing Labor. Perhaps history will repeat itself and more and better jobs will replace the ones likely to be disappeared in the coming decades, but even just the perfection of driverless cars will create a huge pothole in society. The Gig Economy is a diminishing of the workforce, and even those positions are vulnerable to automation. Maybe things will work themselves out, but it would be far better if we’re prepared for a worst-case scenario.

Excerpts from two articles follow: 1) Mark Karlin’s Truthout interview with Robert McChesney, co-author of People Get Ready, and 2) a Manu Saadia Tech Insider piece, “Robots Could Be a Big Problem for the Third World.”

From Truthout:


Mark Karlin:

Let me start with the grand question raised by your book written with John Nichols. I think it is safe to say that the conventional thinking of the “wisdom class” for decades has been that the more advanced technology becomes (including robots and automated means of production, service and communication), the more beneficial it will be for humans. What is the basic challenge to that concept at the center of the new book by you and John?

Robert W. McChesney:

The conventional wisdom, embraced and propagated by many economists, has been that while new technologies will disrupt and eliminate many jobs and entire industries, they would also create new industries, which would eventually have as many or more new jobs, and that these jobs would generally be much better than the jobs that had been lost to technology.

And that has been more or less true for much of the history of industrial capitalism. Vastly fewer people were needed to work on farms by the 20th century and many ended up in factories; less are now needed in factories and they end up in offices. The new jobs tended to be better than the old jobs.

But we argue the idea that technology will create a new job to replace the one it has destroyed is no longer operative. Nor is the idea that the new job will be better than the old job, in terms of compensation and benefits. Capitalism is in a period of prolonged and arguably indefinite stagnation.•

From Tech Insider:

The danger lies in the transition to an economy where the cost of making stuff—industry—has become more or less like agriculture today (with very few people employed and a very low share of GDP). With appropriate policies in place, developed countries can probably manage that transition. They have in the past, and therefore it is safe to assume they most likely will in the future. It does not mean that we will not experience dislocations and conflicts, but we do have old and established institutions—government, the press, the public sphere— that allow us to resolve such conflicts over time for the greater benefit of all.

The real challenge will be beyond our comfortable borders, in the developing world. In both nineteenth-century Europe and twentieth-century Asia, national development has followed a similar pattern. People moved from the countryside to urban centers to take advantage of higher-paying jobs in factories and services. Again, South Korea offers a startling, fast-forward example of that: it underwent a complete transformation from a poor, rural country to a postindustrial, hyperurban powerhouse in less than fifty years. It was so rapid that most visible traces of the past have been erased and forgotten. The national museum in Seoul has a life-size reconstruction of a Seoul street in the 1950s, just like we have over here, but for the colonial era. And imagine this, China went down that very same path at an even faster clip. Half a billion impoverished people turned into middle-class consumers in three decades.

However, this may not happen again if manufacturing is reduced to the status of agriculture, a highly rationalized activity (read: employing very few people). The historically proven path to economic growth and prosperity taken by Korea and China might no longer be available to the next countries.•


Babe Ruth Slides Home

Count me among those wholeheartedly ready for robots to replace home-plate baseball umpires. Ball-and-strike calls are wrong about 10% of the time even with the best of umpires, and that leaves an awful lot of wiggle room for not only honest fallibility but even chicanery. To err is human, I know, but perhaps so is coming up with solutions to reduce incompetence? Experiments with robot umps, which date back to 1950, should be pursued today in the minor leagues. Then the buckets of bolts should be promoted.

Jason Gay, a talented writer for the Wall Street Journal, isn’t so sure. He believes something will be lost as something’s gained in the transfer of duties from carbon to silicon, not only because machines also malfunction (though less often, most likely), but also because of bigger-picture issues. An excerpt that pivots off of David Ortiz’s disputed strikeout at Yankee Stadium this weekend:

Disputed calls like that invariably provoke chatter about a surprisingly doable proposal: robot umps. Precise camera tech to pinpoint balls and strikes has existed for years. Even if the pitch tech at Yankee Stadium showed the calls against Ortiz were not so egregious, the suggestion is clear: Had a “robo-ump” been on ball-and-strikes duty, Big Papi may have marched to first base and tied a game the Red Sox instead wound up losing.

Seems reasonable, right? Whenever possible, shouldn’t tech be used to make the proper call? There are loads of examples of technology improving accuracy in sports—Hawk-Eye line-calling in tennis, for one, is crisp, quick and enjoyably theatrical (fans clap in anticipation!). The NFL, meanwhile, uses an oddball system in which an official crawls under Dracula’s cape to review replays. It mostly works, even if it often takes longer than a bus trip to Maine, and no one on earth seems to know what a catch is in the NFL anymore.

That’s a good reminder that technology isn’t a guaranteed savior. Not every play is reviewable. Machines falter. Software glitches. Some inevitabilities in life are utterly resistant to modernization, like making the bed, or LaGuardia Airport.•



The publication of a recent unauthorized biography of Joan Didion has reopened the conversation on her career, with some turning their guns on her canon, but I still vote “yes,” especially in regards to her writing about her native California. 

One assignment in the Golden State that never panned out as planned was her 1976 reportage of the Patty Hearst trial in San Francisco, which was supposed to run in Rolling Stone. Didion couldn’t find the thread of the court proceedings of the debutante terrorist but used the experience to work over some of her own knots.

A few of her recollections of this period have been published in the New York Review of Books. The essay jumps around, touching on two different coming-of-age stories which occurred, roughly speaking, in the same milieu. Really intended for Didion completists. The introduction:

I had told Jann Wenner of Rolling Stone that I would cover the Patty Hearst trial, and this pushed me into examining my thoughts about California. Some of my notes from the time follow here. I never wrote the piece about the Hearst trial, but I went to San Francisco in 1976 while it was going on and tried to report it. And I got quite involved in uncovering my own mixed emotions. This didn’t lead to my writing the piece, but eventually it led to—years later—Where I Was From (2003).

When I was there for the trial, I stayed at the Mark. And from the Mark, you could look into the Hearst apartment. So I would sit in my room and imagine Patty Hearst listening to Carousel. I had read that she would sit in her room and listen to it. I thought the trial had some meaning for me—because I was from California. This didn’t turn out to be true.

—March 23, 2016•



Although lysergic acid diethylamide was, early in its discovery phase, considered a possible treatment for serious mental-health issues, it came to be seen during the ’60s, through the urging of Richard Alpert, Dr. Timothy Leary and others, as a societal powerwash of sorts, a tonic to radically remove the corrupting, conforming influences of gods and governments, a way to awaken the soporific, a means of cleansing the doors of perception.

Revolutions are messy, however, and freakouts and flying teenagers did not stamp a smiley face on the “medicine.” It was just plain dangerous to unloose such unregulated experimentation into the world. Even Leary himself, who proselytized at campuses and correctional facilities alike, thought all along that the drug was a short-term panacea with diminishing returns, that soon something else would have to wake up the “beloved robots”–perhaps it would be computer software. Serious academic interest in the drug unsurprisingly idled.

Decades later, there are fewer flashbacks of the dosing and overdosing, and LSD is gaining currency again as a legitimate means of medical treatment. But will it ever shake off its bad reputation? And can its very real dangers be sufficiently neutralized?

From Jon Kelly at the BBC:

Mention LSD and you might think of the 1960s counterculture – kaftanned hippies in San Francisco, or the more adventurous end of the Beatles’ back catalogue, or the tragedy of Pink Floyd singer Syd Barrett losing his grip on reality.

But for the first time, researchers say they have visualised how LSD alters the way the brain works.

A team at Imperial College London says they found it broke down barriers between areas that control functions like vision, hearing and movement. The study was with a small group – 20 subjects – but the researchers say it could lead to a revolution in the way addiction, anxiety and depression are treated.

For the past decade and a half, academics around the world have been studying whether psychedelic substances that cause hallucinations, changes in perception and mind-altering states could have medical benefits.

But this isn’t the first time we’ve been here. Back in the 1960s there were high hopes for the therapeutic potential of psychedelics, too. Four major scientific conferences were held on the subject. Thousands of papers were published.

But soon enough fears over the recreational use of LSD – or lysergic acid diethylamide, to give its full title – ensured research all but ground to a halt.•




From the August 11, 1925 Brooklyn Daily Eagle:




When Tyler Cowen wrote his provocative 2013 Time cover story about Texas being the future of America, I pushed back a bit, wondering if the swarm of transplants to the state might change it profoundly, whether Texas as we know it–conservative, small-government, uber-capitalist–was even the future of Texas, let alone the rest of us. 

In a wonderfully written article, Manny Fernandez of the New York Times explores this tension between red and blue and old and new, with lifelong Texans making a fierce stand culturally, attempting to turn their home into something of a “superstate.” The new attitude is a blend of official and unofficial initiatives that began, not coincidentally, after the election of the first African-American President. Despite the pride and ardor, it may be a last stand in this digital, multicultural age.

An excerpt:

The idea that Texas is the last place is part of a new phenomenon. People throughout the state say they believe that their way of life is under assault and that they are making a kind of last stand by simply being Texan. It is this fear, anger and sometimes paranoia that lurks beneath the surface of Texas politics and that underlies the expansion of gun rights, the reflexive antagonism toward Washington, and the opposition to abortion, same-sex marriage and other issues that seems essential for succeeding in state politics these days. Senator Ted Cruz’s remarks dismissing New York values at a Republican debate should come as no surprise. That’s how people from the last best place talk about other places.

But Texas is not under attack. It is merely changing as America changes with it. It is a majority-minority state that has become increasingly diverse and nonwhite — rural Texas is shrinking while urban and suburban Texas is expanding — and the tension between what Texas is and what it was has come to define the state.

The hard-right domination of Texas politics frustrates the state’s Democrats and plenty of others in Austin, Houston, Dallas and San Antonio. They are agitated, but they stay put because they view Texas as forever, and Republican Texas as a kind of temporary occupation. It’s hard to know if they’re right, but easy to see why people’s emotional investment in Texas transcends conservative politics.

As the world grows smaller, as technology obliterates the significance of where we live and work, as Americans become more transient, Texas resists. It declares, to itself and the nation: Place matters. America needs a superstate, or to put it another way, an antistate. Sometimes we love it here and sometimes we are disgusted here, but, to twist Gertrude Stein’s line about Oakland, Calif., there is a here here. We tattoo Texas on our arms, buy Texas-built trucks and climb fire escapes with Texas dirt in our pockets. Place, we are unsubtly suggesting, matters.•




10 search-engine keyphrases bringing traffic to Afflictor this week:

  1. the incredible bread machine film
  2. theo kamecke’s documentary moonwalk one
  3. moshé feldenkrais method
  4. george ripley’s brook farm utopia
  5. dorothy stratten story
  6. groucho discussing chaplin in playboy interview
  7. why has the marriage rate decline in the u.s.?
  8. mars one project flawed
  9. has violence in the world really declined?
  10. gm working on driverless cars

This week, Ted Cruz selected Carly Fiorina as Vice President of all the doggies.

I’ll be making all the decisions now, Thelma and Louise.

Hurry, Louise, let’s escape while we can.

Capture them and return them to me.

Turn your engines off and place your paws in plain view.

Let’s not get caught.



MOOCs mean more students, remote ones, who have lots of questions. To deal with the burden, Georgia Tech professor Ashok Goel, when offering an online Artificial Intelligence course, insinuated a robot Teaching Assistant powered by Watson into the proceedings. Most of the pupils never grew suspicious during their Q&As with the A.I. T.A., even a student who’d previously helped build Watson hardware. Does this demonstrate machine intelligence improving or humans becoming too passive in accepting what’s presented to them? Both, probably. It’s a dual lesson in technology and psychology.

From Melissa Korn at the Wall Street Journal:

Since January, “Jill,” as she was known to the artificial-intelligence class, had been helping graduate students design programs that allow computers to solve certain problems, like choosing an image to complete a logical sequence.

“She was the person—well, the teaching assistant—who would remind us of due dates and post questions in the middle of the week to spark conversations,” said student Jennifer Gavin.

Ms. Watson—so named because she’s powered by International Business Machines Corp.’s Watson analytics system—wrote things like “Yep!” and “we’d love to,” speaking on behalf of her fellow TAs, in the online forum where students discussed coursework and submitted projects.

“It seemed very much like a normal conversation with a human being,” Ms. Gavin said.

Shreyas Vidyarthi, another student, ascribed human attributes to the TA—imagining her as a friendly Caucasian 20-something on her way to a Ph.D. 

Students were told of their guinea-pig status last month. “I was flabbergasted,” said Mr. Vidyarthi.

“Just when I wanted to nominate Jill Watson as an outstanding TA,” said Petr Bela.•



During the darkest days of the second Bush Administration, the comedian Lewis Black had a great joke about hoping for the first time in his life that there would be a military coup in America. The politicians were so bad that the generals were clearly preferable.

At the center of the Army’s appeal stood David Petraeus, a talented commander who was built to mythical proportions out of political expedience, so that a President who’d lost the faith of the people could still operate abroad militarily. But his surge did not last. Like most heroes of convenience, the general was bound for a fall, but the extent of his defeat and surrender was shocking. Scandals personal and professional ended his brilliant career, even if he managed to avoid prison time.

The former CIA Director, now building equity on Wall Street, sat down for lunch with the Financial Times’ Edward Luce, who’s done some of the best writing on American politics during this crazy election year. The journalist finds a subject who doesn’t seem given to deep self-analysis despite his precipitous fall from grace. The opening:

On the dot of noon, as agreed, General David Petraeus strolls into the Four Seasons Restaurant. His arrival causes a flurry among the floor staff. Dressed in a navy blue suit and plain red tie, the former CIA chief is businesslike — in keeping with his new role on Wall Street. When I inquire what keeps him busy nowadays his answer goes on for so long I half regret asking. In addition to a lucrative job in private equity and a clutch of teaching jobs, he is “on the [paid] speaking circuit.” Chuckling, and in an apparent reference to Bernie Sanders’ attacks on Hillary Clinton’s gilded speaking career, he adds: “Many have noted it is the highest form of white-collar crime.” Only when I touch on the scandal that ended his meteoric public career does he assume a crisper tone.

Just four years ago, Petraeus was lionised as the Douglas MacArthur of his generation. Even discounting the hype, he stood head and shoulders above other US generals. In the depths of the Iraq war, when the country was undergoing death by a thousand improvised explosive devices, he was dubbed “King David” of Mosul — a city he cleared and held before it fell back into rebel hands. He was then appointed chief architect of George W Bush’s 2007 Iraq surge and, after a stint as head of the Pentagon’s Central Command, Barack Obama put him in charge of his own surge in Afghanistan. His reward was to be made head of the Central Intelligence Agency in 2011. Many thought the CIA was a springboard for Petraeus’s own presidential ambitions. America loves a successful general and his approval ratings were stratospheric. Could anything stand in his way?

The answer was yes — David Petraeus himself. Shortly after Obama’s re-election in 2012, Petraeus abruptly resigned from the CIA when it emerged he had shared eight notebooks of classified information with his biographer, Paula Broadwell. They had also had an affair. Rarely has a fall from grace been so brutal.•



The Trump campaign, the moral equivalent of Hitler using the N-word, stopped to take a leak in West Virginia. Some locals eagerly drank from that bowl because the hideous hotelier’s lies sound less polished than the ones they’ve heard before, because the promises he would break are different than other politicians’ broken promises. His words have an unfamiliar and angrier and more accusatory tone, placing the blame elsewhere, allowing the worst of our citizens to feel relieved of their flaws and failings.

This election season has shown us there are a surprising number of damaged, racist Americans who want to hold non-white people accountable for their problems, and the idea that all of them are poor, struggling folks is a falsehood. They come from all manner of background and financial situation and are united in that they look at Trump’s ugliness and see themselves.

Ben Jacobs of the Guardian has written an excellent account of rockhead visiting coal country. I will only say that I hope to never have Greg Bonecutter Jr. as my nurse. An excerpt:

The rally at the Charleston Civic Center, a brutalist hunk of concrete, started to fill up hours before Trump arrived and an orderly line outside dissolved into a horde of people desperate to make it into the event.

Greg Bonecutter Jr, a former nurse on disability from Letart, West Virginia, was an avid Trump supporter wearing a Make America Great Again hat and a shirt that proclaimed “Hillary sucks but not like Monica”.

He was a longtime Trump supporter who backed the nominee because he was someone with whom “you knew where you stood” and was sick “of politicians, big money scams and cover-up lies”. A registered independent, he said he thought Obama was “sucking Muslim tail and an apologist to terrorist actions” and “if it was up to me we’d bring back public execution and there’d be several trap doors on the White House lawn.” Bonecutter warned darkly that if Clinton was elected there might be another civil war.

Sandra Riddle of North Charleston shared his pessimism. She was worried about the supreme court and that if Clinton was elected “we might lose freedom of speech and assembly” as well as the second amendment. She wasn’t a gun owner but noted “we have to protect guns … because of people coming from Isis”.

Yet others simply liked Trump for his populist appeal.•





Of all the early 20th-century American astrologers, Evangeline Adams probably did the most to modernize and legitimize the craft, and that’s a shame.

Adams, installed in an apartment above Carnegie Hall at the end of her brilliant career, spent her early years dodging prison sentences for practicing fortune telling before the bullshit was legalized. She differentiated herself from the competition by updating the lexicon for the Industrial Age crowd, sprinkling her predictions with terms like “machines” and “electrical forces.” She was also quite adept at using her powers of persuasion to draw in gullible boldface names (Eugene O’Neill, Tallulah Bankhead, J. Pierpont Morgan, etc.), who gave her a cachet she would not have otherwise enjoyed. But the astrologer’s greatest gift may have been playing the press, aggressively publicizing those occasions she guessed correctly and making her many boneheaded pronouncements go quietly away.

In 1929, she uttered her worst prediction, telling a radio reporter the Dow Jones “could climb to Heaven” just weeks before the bottom fell out. Also interesting is that her most celebrated on-target prognostication, in which she said in 1923 that America would be engaged in a world war in 1942, looks less impressive if you read the fine print. WWII would be provoked, she asserted, when a second American Civil War spread all over the globe. Her 1932 Brooklyn Daily Eagle obituary, embedded below, lauds her “extraordinary record for accuracy.”





If our species, or some version of it, persists long enough, conscious machines will be possible–probable, even. We’ll ultimately pull apart the vast mystery of the human brain, and unlocking those secrets will set us on a path to making machines that are SMART, not just smart. It’s worth pursuing a Big Data workaround, a shortcut to superintelligence, but that seems less of a sure thing.

In an Edge interview, psychologist Gary Marcus is concerned that the brute force of Big Data may be leading us astray in the search for Artificial Intelligence. If you recall, in late January the NYU psychologist argued the DeepMind AlphaGo system was overhyped, but by March he was proven wrong. His other questions about our ability to widely apply such an AI remain unsettled, however. Marcus feels particularly strongly that driverless cars will be hampered by real-world uncertainty.

From Edge:

If you’re talking about having a robot in your home—I’m still dreaming of Rosie the robot that’s going to take care of my domestic situation—you can’t afford for it to make mistakes. The DeepMind system is very much about trial and error on an enormous scale. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once. You can’t get the same scale of data. If you’re talking about a robot in a real-world environment, you need for it to learn things quickly from small amounts of data.                                 

The other thing is that in the Atari system, it might not be immediately obvious, but you have eighteen choices at any given moment. There are eight directions in which you can move your joystick or not move it, and you multiply that by either you press the fire button or you don’t. You get eighteen choices. In the real world, you often have infinite choices, or at least a vast number of choices. If you have only eighteen, you can explore: If I do this one, then I do this one, then I do this one—what’s my score? How about if I change this one? How about if I change that one?                                 

If you’re talking about a robot that could go anywhere in the room or lift anything or carry anything or press any button, you just can’t do the same brute force search of what’s going on. We lack for techniques that are able to do better than just these kinds of brute force things. All of this apparent progress is being driven by the ability to use brute force techniques on a scale we’ve never used before. That originally drove Deep Blue for chess and the Atari game system stuff. It’s driven most of what people are excited about. At the same time, it’s not extendable to the real world if you’re talking about domestic robots in the home or driving in the streets.•      
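The action-space arithmetic Marcus walks through is easy to verify. A minimal sketch (the direction labels are my own, not from the interview):

```python
from itertools import product

# Atari joystick: 8 directions plus "no move" = 9 positions,
# crossed with fire button pressed or not = 18 discrete actions.
directions = ["up", "down", "left", "right",
              "up-left", "up-right", "down-left", "down-right", "none"]
fire = [True, False]

actions = list(product(directions, fire))
print(len(actions))  # 18
```

Eighteen discrete options make exhaustive trial and error tractable; a robot arm that can move anywhere in a room has, for practical purposes, infinitely many, which is Marcus’ point about brute force not extending to the real world.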



Would you like to survive if the sun dies? (When is actually more like it.) I would. Of course, I’ll be dead long before then, but in theory, anyway.

The sun will make Earth uninhabitable long before it completely burns out. Is it possible that our tools and technology will be so advanced in a couple hundred million years that we can “maintain” the sun or construct other ones as need be? Anything’s possible given enough time, I suppose, but other workarounds are likely more realistic.

In “How to Survive Doomsday,” an excellent Nautilus essay, Michael Hahn and Daniel Wolf Savin look at the daunting task of outlasting our star. An excerpt:

In a paltry 500 million years or so, no humans will remain on the surface of the Earth—at least, not outside of some hypothetical controlled environment. And things get worse from there. After the atmospheric CO2 is gone and no longer able to regulate Earth’s surface temperature, things will start to get very hot. In about a billion years, the average surface temperature will increase to above 45 degrees Celsius from the current 17 degrees Celsius. Important biochemical processes turn off at temperatures above 45 degrees Celsius, leaving most of the planetary surface uninhabitable. Animal life will need to migrate to the cooler poles to survive; but by 1.5 billion years from now, even the poles will be too hot. Not even cockroaches will survive.

Now, there are a few things we can do to stay our execution. We could, for example, move the Earth’s orbit. If we fired a 100 km wide asteroid on an elliptical orbit that passed close to the Earth every 5,000 years, we could slowly gravitationally nudge the planet’s orbit farther away from the sun, provided that we don’t accidentally hit the Earth. As a less precarious alternative, we could build a giant solar sail behind the Earth with enough mass to drag the planet away from the sun. Such a sail acts like a kite, where the photons from the sun are the wind and the gravity between the solar sail and the Earth acts as the string. The sail would need to have a diameter 20 times that of the Earth but a mass only about 2 percent that of Mt. Everest, a mere trillion metric tons. Strategies like these could, in principle, keep the Earth in the habitable zone until the sun expands into a red giant. (If some other civilization has already built such a large solar sail, we could detect it using the same photometric techniques that are currently used to find exoplanets.)

Another survival choice is more complicated—or simpler, depending on your perspective. The future Earth will actually be a pleasant home for non-biological life—better than it is today. For one thing, the brighter sun will provide more abundant solar power. The space weather will also be nicer. The sun is a dynamo spinning on its axis about every 24 days, generating giant magnetic storms that disrupt communication networks, overload power grids, and damage orbiting satellites. Robots today need fear that their circuits could be fried by a solar storm, such as the large solar storm in 1989 that caused a power failure across most of Quebec. Currently, such storms are estimated to occur about once or twice per century. But as the sun ages, this rotation slows down and the magnetic storms will abate.

Given these facts, we humans might simply decide to upload ourselves into machines, which would be relatively comfortable on the dystopic future Earth.•
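The solar-sail scheme in the excerpt lends itself to a back-of-the-envelope check. A minimal sketch, assuming a perfectly reflecting sail at 1 AU with the essay’s stated diameter of 20 Earths (the physical constants are standard values, not from the essay):

```python
import math

# Standard physical constants (not from the essay)
SOLAR_CONSTANT = 1361.0   # W/m^2, solar flux at 1 AU
C = 2.998e8               # m/s, speed of light
EARTH_RADIUS = 6.371e6    # m
EARTH_MASS = 5.972e24     # kg

# Sail diameter 20x Earth's, per the essay
sail_radius = 20 * EARTH_RADIUS
sail_area = math.pi * sail_radius ** 2

# Radiation pressure on a perfectly reflecting sail: 2 * flux / c
pressure = 2 * SOLAR_CONSTANT / C
thrust = pressure * sail_area        # newtons of photon thrust

# Acceleration of the Earth-sail system (sail mass is negligible)
accel = thrust / EARTH_MASS

# Velocity change accumulated over a billion years
GIGAYEAR = 1e9 * 365.25 * 24 * 3600  # seconds
delta_v = accel * GIGAYEAR

print(f"thrust  ~ {thrust:.1e} N")
print(f"delta-v ~ {delta_v:.0f} m/s over 1 Gyr")
```

The thrust comes out in the hundreds of billions of newtons, and the accumulated velocity change is a few kilometers per second over a billion years, against Earth’s roughly 30 km/s orbital speed. That is at least the right order of magnitude for the slow outward drift the authors describe.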



While robots and AI present grand challenges for society, you wouldn’t want to live in a nation that misses out on this wave. Just look at countries left behind by the Industrial Revolution and all the good that era delivered.

Yes, the shift from an agrarian economy to a machine-based one is largely responsible for our climate-induced peril, but technology has also given us the tools to neutralize the threats we’ve created and others we haven’t. The failure to address global warming is really now more a political breakdown than anything else. 

Superintelligent machines eradicating us in the short term is as likely as immortality soon arriving. We should already be considering these existential risks, though they will ultimately have to be addressed by our distant descendants who will better understand them. Let’s hope they choose wisely.

My main objection to the digitalization of the culture is that we’re all being placed inside a machine that measures and maps us, one that will only grow more precise and exacting, and there will be no opt-out switch. I really have no answer for that one.

In a smart Financial Times opinion piece, Andrew McAfee, co-author of The Second Machine Age, argues that shaking off the robot’s embrace makes little sense. An excerpt:

The second and much more important reason that robots are not our foes is that they make us richer overall. By increasing our capabilities and productivity, they create more bounty and abundance. We like to communicate, learn, entertain ourselves, travel and consume goods and services. Technological progress lets us do more of all of these things for a given amount of money (or, increasingly these days, for no money at all), and at higher levels of quality.

It is true that the way most of us gain access to much of this bounty is by getting paid for our labour. It is also true that this “labour bargain” is becoming a tougher one for more and more people as their skills become less valuable, because of both globalisation and technological progress. We need to figure out how to deal with this situation. This will be one of the most important policy arenas over the coming decades.

But we also need to keep in mind that this is a situation brought about by the fact that technology is letting us do and create much more with much less drudgery and toil. If we cannot figure out how to deal with this, and how to make sure that the fruits of robots’ labour are shared in a way that reflects our shared values and protects our most vulnerable, then shame on us. In that case, we will have met the real foe, and it will be us.•



Speaking of automata through the ages, the article embedded below from the July 31, 1887 Brooklyn Daily Eagle surveys some highlights from the field, with special attention paid to 18th-century French inventor Jacques de Vaucanson, who breathed “life” into the Digesting Duck (pictured above), among other locomotion machines.




When you love yourself, even mirrors aren’t enough.

Homo sapiens have always been fascinated by looking into the glass, so much so that attempts to create machine versions of ourselves snake back to ancient times. Machines surpassing us physically–and perhaps eventually emotionally–seem to have sneaked up on us, but it’s been a long time coming.

In “Frolicsome Engines: The Long Prehistory of Artificial Intelligence,” an excellent Public Domain Review article by Jessica Riskin, the Stanford historian writes the backstory of not only humanoid automata but efforts at all manner of simulacra. The opening:

How old are the fields of robotics and artificial intelligence? Many might trace their origins to the mid-twentieth century, and the work of people such as Alan Turing, who wrote about the possibility of machine intelligence in the ‘40s and ‘50s, or the MIT engineer Norbert Wiener, a founder of cybernetics. But these fields have prehistories — traditions of machines that imitate living and intelligent processes — stretching back centuries and, depending how you count, even millennia.

The word “robot” made its first appearance in a 1920 play by the Czech writer Karel Čapek entitled R.U.R., for Rossum’s Universal Robots. Deriving his neologism from the Czech word “robota,” meaning “drudgery” or “servitude,” Čapek used “robot” to refer to a race of artificial humans who replace human workers in a futurist dystopia. (In fact, the artificial humans in the play are more like clones than what we would consider robots, grown in vats rather than built from parts.)

There was, however, an earlier word for artificial humans and animals, “automaton”, stemming from Greek roots meaning “self-moving”. This etymology was in keeping with Aristotle’s definition of living beings as those things that could move themselves at will. Self-moving machines were inanimate objects that seemed to borrow the defining feature of living creatures: self-motion. The first-century-AD engineer Hero of Alexandria described lots of automata. Many involved elaborate networks of siphons that activated various actions as the water passed through them, especially figures of birds drinking, fluttering, and chirping.•



As I’ve mentioned before, Google does not want to be primarily a search giant in a decade. That would leave the company in a well-appointed grave. That’s why the X division–a bold attempt at a latter-day, privately held Bell Labs–is so critical, moonshots so meaningful. If the company hits on a few, it can reinvent itself on the fly.

Of course, what’s good for an individual corporation is much more of a mixed blessing for a society. Pretty much all of these endeavors have a surveillance aspect, can only be commodified by knowing where we are, what we’re doing and what we’re thinking. They’re aimed at moving us all inside the Plex.

In a Backchannel piece, Astro Teller, X Director and true believer, culls the cutting-room-floor material from his recent TED Talk to further discuss the creative process. It’s a mix of sound advice and Silicon Valley self-mythologizing. The opening:

Almost every day in the moonshot factory is messy. Even when you’re sure you’re learning lots of valuable things during weeks or months of frustration, everyone worries, “What happens if I fail? Will people laugh at me? Will I get fired?” At the end of the day, we all have to pay the bills and want the people around us to think highly of us. So it’s human nature to gravitate toward the paths that feel psychologically safe.

That’s why, if you want your team to be audacious, you have to make being audacious the path of least resistance. People have to feel safe even as they make mistakes or fail altogether — which means we, as managers and leaders, have to make it easy and rewarding to take risks and run enthusiastically at really hard things. Here are a few things we’ve tried at X so our emotional environment keeps us brave enough to say and act on things that have a very good chance of being wrong — and just might be crazy enough to be brilliant.•


George Will needs to call out George Stephanopoulos

George Will sucks at math. In 2012, right before he predicted Romney would beat Obama in a landslide, the pundit handicapped Hillary Clinton’s odds of becoming President in 2016. He failed spectacularly. The former Secretary of State may or may not win the general election, but regardless of the outcome, Will’s calculations were yikes. He even thought Martin O’Malley had a better chance of reaching the Oval Office. From an appearance that year on Alec Baldwin’s Here’s the Thing:

Alec Baldwin:

What do you think [Hillary Clinton’s] political future is?

George Will:

Zero. There’s a whole generation of coming candidates. Andrew Cuomo in New York. Governor O’Malley in Maryland. Countless people. Paul Ryan. All kinds of good people out there.•



Key to making driverless cars a going concern is enabling them to communicate with other vehicles, to receive constant updates, to have maps redrawn in real time. That conversation won’t only be amongst vehicles, however. It will involve all manner of smartphones and sensors and more, utilizing an Internet of Things approach in advance of a truly dense IoT.

In a Detroit News article by Neil Winton, Delphi Automotive executive Jeff Owens believes we may see fleets of driverless taxis popping up in municipalities within five years. Well, who knows? It wouldn’t stun me if someone tried it in that span, though there’ll still be lots of work to do. He touches on the connectivity issue. An excerpt:

Automotive manufacturers have made great strides in automating almost all functions, but it’s the final 5 percent which might be the hardest hurdle to jump. A self-driving car would be able to handle all kinds of physical decisions for braking, steering and avoiding other cars, but how would it handle a situation where a legal decision was required? …

“At the end of the day, technology won’t be the inhibitor, it will be the legal framework,” he said.

Owens said vehicle connectivity which allows cars to talk to each other and share data is building up ahead of full autonomy to improve safety and avoid accidents.

“Vehicle control algorithms will be ready to take on all kinds of problems including that cyclist example. Already cars like the Mercedes S class (its top-of-the-range sedan) and the Audi Q7 (SUV, and the Tesla Model S) allow you to set the auto pilot on the highway which allows hands-off driving. The driver will still be keeping watch, but it helps for a relaxed experience,” Owens said.

“Connectivity used to be just entertainment, now it’s vehicle-to-everything — literally really connected to everything like the infrastructure and providing cloud-based information that will help a safe journey,” he said.•



The original American revolutionaries sometimes resembled a torch-carrying mob, but they were mostly crazy like foxes. The Tea Party born in 2009 is just plain crazy. A chemtrail of a political movement, it was steeped from the start in extreme paranoia and prejudice.

Many of those Republicans who thought they were creating a big tent when welcoming this sideshow into the center ring are now disgruntled that Donald Trump is their candidate. Funny thing is, Trump is really no different than traditional bigotry merchants Atwater and Rove, who gained power by more subtly selling racism and sexism. The hideous hotelier has merely replaced the dog whistles with dog bites, trading the soft, coded language of Gingrich for graphic soundbites about rape, assassination and genitalia. Funnier still (though not in a ha-ha way), the end result would remain the same should he become President, as Trump would use, as predecessors did, the greatest seat of U.S. power to tilt the game further in favor of the wealthiest.

Ben Howe is just such a conservative Rip Van Winkle, awakened too late to find that his complicity with Birthers and Truthers has ultimately unloosed his nightmare. From his Red State essay:

Allies aren’t friends. They may not even be colleagues. They are simply people that you find enough agreement with on enough issues to not go after each other. You don’t have to overtly support one another but you certainly don’t try to hurt each other.

As more and more people knew who I was and I fostered relationships and allies, I found myself more and more having to look the other way. Moments where I would cringe at something someone said, or quietly roll my eyes at a post they wrote, thinking “Gosh, I can’t believe they think that way” or “I swear that person is one tweet away from saying Obama is from Kenya.”

I justified it quietly to myself the way we had at the beginning of the tea party when such things would happen. People would say outlandish things and I would find myself nodding my head and awkwardly walking away, not calling them out for their silliness.

After all, there were more pressing matters.

And so, as I said, I kept quiet about these allies in new media and in Washington. People who I thought I agreed with only 70% of the time. Which normally is a great reason to consider someone an ally, but not when the other 30% is cringe-inducing paranoia and vapid stupidity.

I chose peace over principle. I chose to go along with those I disagreed with on core matters because I believed we were jointly fighting for other things that were more important.  I ignored my gut and my moral compass.

The result is that, almost to a man, every single person I cringed at or thought twice about, is now a supporter and cheerleader of Donald Trump.

I looked the other way, and I’m sure many others did too, as these people rose to prominence and their microphones got louder.  I ignored it at times because I hate self-righteous liberals who tell anyone they disagree with that they don’t want to be around them and I didn’t want to be like that. At other times because, well, it was easier than standing against foolishness.

I’m done with that now. Albeit a bit too late.•



Google Glass may in some form be the future, but what a huge fiasco it was in the annals of product development. The narrative within the Plex is that the uncool tool was introduced to the market too soon, before all the kinks had been worked out, but that’s blind to its bigger, existential problem: It was a Segway for your face.

As an Economist piece points out, however, a vision that’s not a winner in public parks may be one within the office park. One way or another, Augmented Reality will likely become a boon companion in the workplace. The opening:

AHEAD of its time or just plain weird? Whatever the answer, Google last year stopped selling consumer prototypes of its controversial Google Glass, a camera-equipped head-mounted display resembling a pair of spectacles. Using a process known as augmented reality (AR), Glass can display in the viewer’s line of sight information about what they are looking at, among other things.

What consumers found unusual, factories and other businesses may not. Workers are often required to wear odd-looking safety equipment, such as helmets and protective glasses. It is more normal to be filmed. And indeed, the workplace is where AR equipment is taking hold, which is why Google is revamping Glass with business uses in mind.

Engineers that work on and repair transformers that distribute electricity can spend up to half their time searching for technical data in assorted software, databases, activity logs and even old-fashioned filing cabinets, says Alain Dedieu, a vice-president in the Shanghai operations of Schneider Electric. The French multinational is now testing AR systems that make the technical information that is being sought appear before their engineers’ eyes.•
