Industrial robots are built to be great (perfect, hopefully) at limited, repetitive tasks. But in Deep Learning experiments, the machines aren’t programmed for chores but rather to teach themselves how to master them from experience. Since every situation in life can’t be anticipated and pre-coded, truly versatile AI needs to autonomously conquer obstacles as they arise. In these trials, the journey holds as much meaning as the destination–more, really.

Of course, not everyone would agree that humans are operating from such a blank slate, that we don’t already have templates for many behaviors woven into our neurons–a collective unconscious of some sort. Even if that’s so, I’d think there’ll soon be a way for robots to transfer such knowledge across generations.

One current Deep Learning project is Berkeley’s Brett robot, designed to learn like a small child, though it’s a growing boy. The name stands for “Berkeley Robot for the Elimination of Tedious Tasks,” and you might be tempted to ask how many of them it would take to screw in a light bulb, but the project is already far beyond the joke stage. As usual in this tricky field, such highly functional machines may take longer to emerge than we’d like, but perhaps not as long as we’d expect.

Jack Clark of Bloomberg visited the motherless “child” at Berkeley and writes of it and some of the other current bright young things. An excerpt from his report:

What makes Brett’s brain tick is a combination of two technologies that have each become fundamental to the AI field: deep learning and reinforcement learning. Deep learning helps the robot perceive the world and its mechanical limbs using a technology called a neural network. Reinforcement learning trains the robot to improve its approach to tasks through repeated attempts. Both techniques have been used for many years; the former powers Google and other companies’ image and speech recognition systems, and the latter is used in many factory robots. While combinations of the two have been tried in software before, the two areas have never been fused so tightly into a single robot, according to AI researchers familiar with the Berkeley project. “That’s been the holy grail of robotics,” says Carlos Guestrin, the chief executive officer at AI startup Dato and a professor of machine learning at the University of Washington.

After years of AI and robotics research, Berkeley aims to devise a system with the intelligence and flexibility of Rosie from The Jetsons. The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems—and a roomful of toys—to a robot. Since then, the team has published a series of papers that outline a software approach to let any robot learn new tasks faster than traditional industrial machines while being able to develop the sorts of broad knowhow for solving problems that we associate with people. These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.

 


John Lanchester, who wrote one of my favorite articles of the year with “The Robots Are Coming” in the London Review of Books, returns to that same publication to consider more tinkerers and their machines, namely the Wright brothers and Elon Musk.

The occasion is a dual review of David McCullough’s new work about the former and Ashlee Vance’s about the latter. As the piece notes, the aviation-pioneering Wrights were ignored, disbelieved and mocked during their first couple of successful flights, the press too skeptical to accept what was as clear as the sky if only they would open their eyes.

Puzzlingly, Lanchester is of the notion that SpaceX founder Elon Musk is less than a household name, a curious stance since the Iron Man avatar is one of the most famous people on Earth, receiving, before coming anywhere close to Mars, the type of wide acclaim that was denied the Wrights even after they successfully took flight at Kitty Hawk. Just strange.

Otherwise it’s a very well-written piece, and one that astutely points out that tinkerers today who want to do more than merely create apps often need a planeload of cash, something the Wrights didn’t require. Perhaps 3-D printers will change that?

A passage in which Lanchester compares the siblings to their spiritual descendant:

When David McCullough’s book came out, it went straight to the top of the US bestseller list, taking up a position right next to Ashlee Vance’s biography of Elon Musk. At which point you may well be asking, who he? The answer is that Musk is the South African-born entrepreneur who runs three of the most interesting companies in America, in the fields of clean energy and interplanetary exploration: SolarCity (solar batteries), Tesla (electric cars), and SpaceX (commercial spaceflight). It’s the third of these companies which is the maddest and most entertaining. Where most corporate mission statements are so numbing they’d be useful as a form of medical anaesthesia, SpaceX’s is ‘creating the technology needed to establish life on Mars’. ‘I would like to die thinking that humanity has a bright future,’ Musk explained to Vance. ‘“If we can solve sustainable energy and be well on our way to becoming a multiplanetary species with a self-sustaining civilisation on another planet – to cope with a worst-case scenario happening and extinguishing human consciousness – then,” and here he paused for a moment, “I think that would be really good.”’

There are a number of suggestive parallels between Musk and the Wrights, beyond the obvious ones to do with an interest in flight. The bishop had very high standards and set no limits on the intellectual curiosity he encouraged in his children; Musk’s father had the same standards and the same insistence on no limits, but was (is) a tortured and difficult presence, ‘good at making life miserable’, in Musk’s words: ‘He can take any situation no matter how good it is and make it bad.’ The Wrights were poorish, the Musks affluentish, but both grew up with an emphasis on learning things first-hand. ‘It is remarkable how many different things you can get to explode,’ Musk says about his childhood experiments. ‘I’m lucky I have all my fingers.’ One very odd thing is a parallel to do with bullies: Musk was set on and beaten half to death by a gang of thugs at his school in Johannesburg; Wilbur Wright was attacked so badly at the age of 18 – beaten with a hockey stick – that he took years to recover from his injuries and missed a college education as a result. His assailant, Oliver Crook Haugh, went on to become a notorious serial killer. Something about these very bright young men set off the bullies’ hatred for difference.

The Wrights took calculated risks. Musk does the same.•


From the June 2, 1854 Brooklyn Daily Eagle:

 

 


If Donald Trump grew a small, square mustache above his lip, would his poll numbers increase yet again? For a candidate running almost purely on attention, can any shock really be deleterious?

Howard Dean was the first Internet candidate, and Barack Obama the first to ride those new rules to success. But things have already markedly changed: That was a time of bulky machines on your lap, and the new political reality rests lightly in your pocket. A smartphone’s messages are brief and light on details, and its buzzing is more important than anything it delivers.

The diffusion of media was supposed to make it impossible for a likable incompetent like George W. Bush to rise. How could such a person survive the scrutiny of millions of “citizen journalists” like us? If anything, it’s made it easier, even for someone who’s unlikable and incompetent. For a celeb with a Reality TV willingness to be ALL CAPS all the time, facts get lost in the noise, at least for a while.

That doesn’t mean Donald Trump, an adult baby with an attention span that falls somewhere far south of 15 months, will be our next President, but it does indicate that someone ridiculously unqualified and hugely bigoted gets to be on the national stage and inform our political discourse. The same way Jenny McCarthy used her platform to play doctor and spearhead the anti-vaccination movement, Trump gets to be a make-believe Commander-in-Chief for a time.

Unsurprisingly, Nicholas Carr has written the best piece on the dubious democracy the new tools have delivered, a Politico Magazine article that analyzes election season in a time that favors a provocative troll, a “snapchat personality,” as he terms it. The opening:

Our political discourse is shrinking to fit our smartphone screens. The latest evidence came on Monday night, when Barack Obama turned himself into the country’s Instagrammer-in-Chief. While en route to Alaska to promote his climate agenda, the president took a photograph of a mountain range from a window on Air Force One and posted the shot on the popular picture-sharing network. “Hey everyone, it’s Barack,” the caption read. “I’ll be spending the next few days touring this beautiful state and meeting with Alaskans about what’s going on in their lives. Looking forward to sharing it with you.” The photo quickly racked up thousands of likes.

Ever since the so-called Facebook election of 2008, Obama has been a pacesetter in using social media to connect with the public. But he has nothing on this year’s field of candidates. Ted Cruz live-streams his appearances on Periscope. Marco Rubio broadcasts “Snapchat Stories” at stops along the trail. Hillary Clinton and Jeb Bush spar over student debt on Twitter. Rand Paul and Lindsey Graham produce goofy YouTube videos. Even grumpy old Bernie Sanders has attracted nearly two million likers on Facebook, leading the New York Times to dub him “a king of social media.”

And then there’s Donald Trump. If Sanders is a king, Trump is a god. A natural-born troll, adept at issuing inflammatory bulletins at opportune moments, he’s the first candidate optimized for the Google News algorithm.•


There’s good news about life on Earth after climate change, but first the bad news: Death, massive amounts of death.

As Lizzie Wade states in her smart Wired article, we’ll likely be around to see the disaster we’ve created, but we don’t have a great shot at waiting out the recovery. That will take eons. The positive side doesn’t involve us, but rather the creatures that may thrive and replenish the landscape after we’re gone. But first they’ll have to survive us. Godspeed to them. An excerpt:

The flip side of mass extinction, however, is rapid evolution. And if you’re willing to take the long view—like, the million-year long view—there’s a ray of hope to be found in today’s rare species. The Amazon, in particular, is packed with plant species that pop up few and far between and don’t even come close to playing a dominant role in the forest. But they might have treasure buried in their genes.

Rare species—especially those that are only distantly related to today’s common ones—“have all kind of traits that we don’t even know about,” says [evolutionary geneticist Christopher] Dick. Perhaps one will prove to thrive in drought, and another will effortlessly resist new pests that decimate other trees. “These are the species that have all the possibilities for becoming the next sets of dominant, important species after the climate has changed,” Dick says.

That’s why humans can’t cut them all down first, he argues. If rainforests are going to have a fighting chance of recovering their biodiversity and ecological complexity, those rare species and their priceless genes need to be ready and able to step into the spotlight. It might be too late to save the world humanity knows and loves. But it can still do its best to make sure the new one is just as good—someday.•


A digitized Automat with no visible workers roughly describes Eatsa, a San Francisco fast-casual eatery for tomorrow that exists today. Tamara Palmer of Vice visited the restaurant and found it “much more reminiscent of an Apple store than a fast food franchise.” Its design may be too cool to work everywhere in America, but I bet some variation of it will. Sooner or later, Labor in the sector will be noticeably dinged by technological unemployment. The opening:

People often muse on a future controlled by machines, but that is already well in motion here in the Bay Area, where hotels are employing robot butlers, Google and Tesla are putting driverless vehicles on the road, and apps that live every aspect of your life for you continue to proliferate. The rush to put an end to human contact is at a fever pitch around these parts, where a monied tech elite has the deep pockets to support increasingly absurd services.

Right on trend, this week marks the debut of Eatsa, a quick-service quinoa bowl “unit” (as one owner called it) billing itself as San Francisco’s premiere “automated cafe.”

I attended a media preview lunch at Eatsa last week to test out the concept before the doors officially opened. Pushing a button to summon an Uber ride to my door, I wondered how good automated food might be.

I realized it doesn’t really matter, because as California inches towards a $15 per hour minimum wage, that’s the direction we’re headed in, starting with a people-free fast food world.•


I’m mixing my 20th-century sci-fi authors, but like Billy Pilgrim naked in a Tralfamadore zoo, we may be kept as pets by intelligent machines. That’s what the Philip K. Dick android, which can learn new words in real time, promises its NOVA interlocutors.

Or perhaps they’ll eliminate us. Or maybe by the time they exist, we will be very different. We might become those conscious machines we so fear. We might be them. Nobody knows.

My first Virtual Reality experience was during the 1990s while working in a non-profit media place that had a clunky VR helmet for visitors to experience. One guest was rock icon Lou Reed, who sat in a chair and pulled the device over his head. He paused a moment, and then said to the woman who was assisting, “What happens now? Does someone pull on my cock?”

Perhaps because it didn’t come with free tug jobs or maybe because the technology was still lacking, Virtual Reality was a bomb two decades ago. Those who’ve tested the latest models are awed by what years of development and greater computing power have wrought. The tool certainly could be a tremendous boon to education, but you could say the same of gaming, and that’s never been leveraged correctly.

The opening of “Grand Illusions,” an Economist report:

YOUR correspondent stands, in a pleasingly impossible way, in orbit. The Earth is spread out beneath. A turn of the head reveals the blackness of deep space behind and above. In front is a table full of toys and brightly coloured building blocks, all of which are resolutely refusing to float away—for, despite his being in orbit, gravity’s pull does not seem to have vanished. A step towards the table brings that piece of furniture closer. A disembodied head appears, and a pair of hands offers a toy ray-gun. “Go on, shoot me with it,” says the head, encouragingly. Squeezing the trigger produces a flash of light, and the head is suddenly a fraction of its former size, speaking in a comic Mickey-Mouse voice (despite the lack of air in low-Earth orbit) as the planet rotates majestically below.

It is, of course, an illusion, generated by a virtual-reality (VR) company called Oculus. The non-virtual reality is a journalist wearing a goofy-looking headset and clutching a pair of controllers in a black, soundproofed room at a video-gaming trade fair in Germany. But from the inside, it is strikingly convincing. The virtual world surrounds the user. A turn of the head shifts the view exactly as it should. Move the controllers and, in the simulation, a pair of virtual arms and hands moves with them. The disembodied head belongs to an Oculus employee in another room, who is sharing the same computer-generated environment. The blocks on the table obey the laws of physics, and can be stacked up and knocked down just like their real-world counterparts. The effect, in the words of one VR enthusiast, is “like sticking your head into a wormhole that leads to some entirely different place.”

Matrix algebra

The idea of virtual reality—of building a convincing computer-generated world to replace the boring old real one—has fuelled science fiction’s novels and movies since the 1950s. In the 1990s, as computers became commonplace, several big firms tried to build headsets as a first attempt to realise the idea. They failed. The feeble computers of the time could not produce a convincing experience. Users suffered from nausea and headaches, and the kit was expensive and bulky. Although VR found applications in a few bits of engineering and science, the consumer version was little more than a passing fad in the world’s video-game arcades. But now a string of companies are betting that information technology, both hardware and software, has advanced enough to have another go. They are convinced that their new, improved virtual reality will shake up everything from video-gaming to social media, and from films to education.•

 

Vladimir Bekhterev had a great brain, but he lacked diplomacy.

Joseph Stalin probably was a “paranoiac with a short, dry hand,” but after the Russian neurologist reportedly spoke that diagnosis upon examining the Soviet leader, Bekhterev died mysteriously within days. Many thought he’d been poisoned to avenge the slight. Or maybe it was just a coincidence. A cloud of paranoia envelops all under an autocratic regime, whether we’re talking about Stalin in the 20th century or Vladimir Putin today: Some deaths are very suspect, so all of them become that way. At any rate, the scientist’s gray matter became an exhibit in his own collection of genius brains. An article in the December 27, 1927 Brooklyn Daily Eagle recorded the unusual series of events.


Terrible products that fail miserably delight us not only because of the time-tested humor of a spectacular pratfall, but because it’s satisfying to feel now and then that we’re not just a pack of Pavlovian dogs prepared to lap up whatever is fed us, especially if it’s a Colgate Ready Meal and a Crystal Pepsi.

In a really smart Financial Times column, Tim Harford takes a counterintuitive look at how companies can avoid launching surefire duds. The usual method has been to find out which products representative people want, but he writes of an alternative strategy: Discover what consumers of horrible taste embrace and then bury those products deep in a New Mexico desert alongside Atari’s E.T. video games. Of course, it does say something that companies can’t just identify what’s awful. Why do almost all businesses become echo chambers?

An excerpt:

If savvy influential consumers can help predict a product’s success, might it not be that there are consumers whose clammy embrace spells death for a product? It’s a counter-intuitive idea at first but, on further reflection, there’s a touch of genius about it.

Let’s say that some chap — let’s call him “Herb Inger” — simply adored Clairol’s Touch of Yogurt shampoo. He couldn’t get enough of Frito-Lay’s lemonade (nothing says “thirst-quenching” like salty potato chips, after all). He snapped up Bic’s range of disposable underpants. Knowing this, you get hold of Herb and you let him try out your new product, a zesty Cayenne Pepper eyewash. He loves it. Now you know all you need to know. The product is doomed, and you can quietly kill it while it is still small enough to drown in your bathtub.

A cute idea in theory — does it work in practice? Apparently so. Management professors Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker have studied people, such as Herb, whom they call “Harbingers of Failure.” (Their paper by that name is forthcoming in the Journal of Marketing Research.) They used a data set from a chain of more than 100 convenience stores. The data covered more than 100,000 customers with loyalty cards, more than 10 million transactions and nearly 10,000 new products. Forty per cent of those products were no longer stocked after three years, and were defined as “flops.”•


In a newly revised edition of Federico Fellini’s 1980 book, Making a Film, there’s a fresh translation of “A Spectator’s Autobiography,” the wonderful essay by Italo Calvino that begins the volume. It’s been adapted for publication by the NYRB.

In the piece, Calvino notes that the unpunctual habits of Italian moviegoers in the 1930s portended the ultimate widespread fracturing of the traditional narrative structure, an artifice intended to satisfy, if fleetingly, our deep craving for order, to deliver us a simple solution to the complex puzzle of life and its jagged pieces. 

An excerpt:

Italian spectators barbarously made entering after the film already started a widespread habit, and it still applies today. We can say that back then we already anticipated the most sophisticated of modern narrative techniques, interrupting the temporal thread of the story and transforming it into a puzzle to put back together piece by piece or to accept in the form of a fragmentary body. To console us further, I’ll say that attending the beginning of the film after knowing the ending provided additional satisfaction: discovering not the unraveling of mysteries and dramas, but their genesis; and a vague sense of foresight with respect to the characters. Vague: just like soothsayers’ visions must be, because the reconstruction of the broken plot wasn’t always easy, especially if it was a detective movie, where identifying the murderer first and the crime afterward left an even darker area of mystery in between. What’s more, sometimes a part was still missing between the beginning and the end, because suddenly while checking my watch I’d realize I was running late; if I wanted to avoid my family’s wrath I had to leave before the scene that was playing when I entered came back on. Therefore lots of films ended up with holes in the middle, and still today, more than thirty years later—what am I saying?—almost forty, when I happen to see one of those films from back then—on television, for example—I recognize the moment in which I entered the theater, the scenes that I’d watched without understanding them, and I recover the lost pieces, I put the puzzle back together as if I’d left it incomplete the day before.•

  • See also:

Fellini feuds with Oriana Fallaci. (1963)


Ah, to be a fly on the wall in the White House in the aftermath of 9/11, once President Bush finally set down his copy of The Pet Goat and returned to the business at hand. If Al-Qaeda’s destruction of the World Trade Center was merely Step 1 of its plan to damage America, it was a scheme ultimately realized on a grand level. Our decisions in response to the large-scale terrorism did more harm to us, for the better part of the decade, than even the initial attack. Of course, in retrospect, there were potential reactions with even more far-reaching implications that went unrealized.

In a Spiegel Q&A, René Pfister and Gordon Repinski ask longtime German diplomat Michael Steiner about an alternative history that might have unfolded in the wake of September 11. An excerpt:

Spiegel:

The attacks in the United States on Sept. 11, 2001 came during your stint as Chancellor Gerhard Schröder’s foreign policy advisor. Do you remember that day?

Michael Steiner:

Of course, just like everybody, probably. Schröder was actually supposed to hold a speech that day at the German Council on Foreign Relations in Berlin. My people had prepared a nice text for him, but when he was supposed to head out, he — like all of us — couldn’t wrest himself away from the TV images of the burning Twin Towers. Schröder said: “Michael, you go there and explain to the people that I can’t come today.”

Spiegel:

What was it like in the days following the attacks?

Michael Steiner:

Condoleezza Rice was George W. Bush’s security advisor at the time. I actually had quite a good relationship with her. But after Sept. 11, the entire administration positively dug in. We no longer had access to Rice, much less to the president. It wasn’t just our experience, but also that of the French and British as well. Of course that made us enormously worried.

Spiegel:

Why?

Michael Steiner:

Because we thought that the Americans would overreact in response to the initial shock. For the US, it was a shocking experience to be attacked on their own soil.

Spiegel:

What do you mean, overreact? Were you afraid that Bush would attack Afghanistan with nuclear weapons?

Michael Steiner:

The Americans said at the time that all options were on the table. When I visited Condoleezza Rice in the White House a few days later, I realized that it was more than just a figure of speech.

Spiegel:

The Americans had developed concrete plans for the use of nuclear weapons in Afghanistan?

Michael Steiner:

They really had thought through all scenarios. The papers had been written.•


The future seldom arrives in a hurry, which is usually a good thing from a practical standpoint. Today and tomorrow don’t always mix so well.

In an opinion piece at The Conversation, David Glance of the University of Western Australia argues that fears of near-term technological unemployment are overstated. He may be right in the big picture, but if just one significant area is realized in short order, defying business-as-usual stasis–driverless cars is the most obvious example–a large swath of Labor will be blown sideways. 

From Glance:

The trouble with predicting the future is that the more dramatic the prediction the more likely the media will pick it up and amplify it in the social media-fed echo chamber. What is far less likely to be reported are the predictions that emphasise that it is unlikely that things will change that radically because of the massive inertia that is built into industry, governments and the general workers’ appetite for change.

Economists at the OECD may have another explanation for why it is unwise to equate the fact that something “could” be done with the fact that it “will” be done. In a report on the future of productivity, the authors detail how only a small number of “frontier companies” have managed to implement changes to achieve high levels of productivity growth. The companies that haven’t achieved anywhere near the same productivity growth are the “non-frontier companies” or simply “laggards.” The reasons for this are probably many but lack of leadership, vision, skills or ability may factor into it.

The point is that since 2000 many companies didn’t adopt technology and change their business processes to see improvements in productivity even though they clearly “could” have done.•


From the July 8, 1889 Brooklyn Daily Eagle:


Some people don’t know how to accept a gift. America has many such people in its government, as apparently do numerous other developed nations.

One of the few upsides to the colossal downside of the 2008 economic collapse is the rock-bottom interest rates that offer countries the opportunity to rebuild their infrastructure at virtually no added cost. It’s a tremendous immediate stimulus that also pays long-term dividends. But deficit hawks have made it impossible for President Obama to take advantage of this rare and relatively short-term opportunity. While some of it is certainly partisanship, it does seem like a large number of elected officials have pretty much no grasp of basic economics.

From the Economist:

IT IS hard to exaggerate the decrepitude of infrastructure in much of the rich world. One in three railway bridges in Germany is over 100 years old, as are half of London’s water mains. In America the average bridge is 42 years old and the average dam 52. The American Society of Civil Engineers rates around 14,000 of the country’s dams as “high hazard” and 151,238 of its bridges as “deficient”. This crumbling infrastructure is both dangerous and expensive: traffic jams on urban highways cost America over $100 billion in wasted time and fuel each year; congestion at airports costs $22 billion and another $150 billion is lost to power outages.

The B20, the business arm of the G20, a club of big economies, estimates that the global backlog of spending needed to bring infrastructure up to scratch will reach $15 trillion-20 trillion by 2030. McKinsey, a consultancy, reckons that in 2007-12 investment in infrastructure in rich countries was about 2.5% of GDP a year when it should have been 3.5%. If anything, the problem is becoming more acute as some governments whose finances have been racked by the crisis cut back. In 2013 in the euro zone, general government investment—of which infrastructure constitutes a large part—was around 15% below its pre-crisis peak of €3 trillion ($4 trillion), according to the European Commission, with drops as high as 25% in Italy, 39% in Ireland and 64% in Greece. In the same year government spending on infrastructure in America, at 1.7% of GDP, was at a 20-year low.

This is a missed opportunity. Over the past six years, the cost of repairing old infrastructure or building new projects has been much cheaper than normal, thanks both to rock-bottom interest rates and ample spare capacity in the construction industry.•

Sad to hear of the passing of Dr. Oliver Sacks, the neurologist and writer, who made clear in his case studies that the human brain, a friend and a stranger, was as surprising as any terrain we could ever explore. It feels like we’ve not only lost a great person, but one who was uniquely so. He became hugely famous with the publication of his 1985 collection, The Man Who Mistook His Wife For A Hat, which built upon the template of A.R. Luria’s work with better writing and a wider array of investigations. Two years prior, he published an essay in the London Review of Books that became the title piece. An excerpt:

I stilled my disquiet, his perhaps too, in the soothing routine of a neurological exam – muscle strength, co-ordination, reflexes, tone. It was while examining his reflexes – a trifle abnormal on the left side – that the first bizarre experience occurred. I had taken off his left shoe and scratched the sole of his foot with a key – a frivolous-seeming but essential test of a reflex – and then, excusing myself to screw my ophthalmoscope together, left him to put on the shoe himself. To my surprise, a minute later, he had not done this.

‘Can I help?’ I asked.

‘Help what? Help whom?’

‘Help you put on your shoe.’

‘Ach,’ he said, ‘I had forgotten the shoe,’ adding, sotto voce: ‘The shoe! The shoe?’ He seemed baffled.

‘Your shoe,’ I repeated. ‘Perhaps you’d put it on.’

He continued to look downwards, though not at the shoe, with an intense but misplaced concentration. Finally his gaze settled on his foot: ‘That is my shoe, yes?’

Did I mishear? Did he mis-see? ‘My eyes,’ he explained, and put a hand to his foot. ‘This is my shoe, no?’

‘No, it is not. That is your foot. There is your shoe.’

‘Ah! I thought that was my foot.’

Was he joking? Was he mad? Was he blind? If this was one of his ‘strange mistakes’, it was the strangest mistake I had ever come across.

I helped him on with his shoe (his foot), to avoid further complication. Dr P. himself seemed untroubled, indifferent, maybe amused. I resumed my examination. His visual acuity was good: he had no difficulty seeing a pin on the floor, though sometimes he missed it if it was placed to his left.

He saw all right, but what did he see? I opened out a copy of the National Geographic Magazine, and asked him to describe some pictures in it. His eyes darted from one thing to another, picking up tiny features, as he had picked up the pin. A brightness, a colour, a shape would arrest his attention and elicit comment, but it was always details that he saw – never the whole. And these details he ‘spotted’, as one might spot blips on a radar-screen. He had no sense of a landscape or a scene.

I showed him the cover, an unbroken expanse of Sahara dunes.

‘What do you see here?’ I asked.

‘I see a river,’ he said. ‘And a little guesthouse with its terrace on the water. People are dining out on the terrace. I see coloured parasols here and there.’ He was looking, if it was ‘looking’, right off the cover, into mid-air, and confabulating non-existent features, as if the absence of features in the actual picture had driven him to imagine the river and the terrace and the coloured parasols.

I must have looked aghast, but he seemed to think he had done rather well. There was a hint of a smile on his face. He also appeared to have decided the examination was over, and started to look round for his hat. He reached out his hand, and took hold of his wife’s head, tried to lift it off, to put it on. He had apparently mistaken his wife for a hat!•


Long before Caitlyn Jenner, there was Christine Jorgensen, a Bronx military veteran who traveled to Denmark in the early 1950s to transition surgically into a woman. It was, as you might expect, a huge sensation at the time, but Jorgensen was always above the fray, whether guesting on ur-shock jock Joe Pyne’s gleefully tasteless talk show in 1966, or visiting with Tom Snyder in 1982, as she revived her cabaret act.

Life is full of inconvenient truths, and one of them is that Theodor Geisel, better known as Dr. Seuss, the wonderful storyteller who continues to teach children to read and think, was responsible for some shockingly racist drawings and ad campaigns early in his career. In 1958, he appeared on To Tell the Truth, just as The Cat in the Hat, his most popular work, was becoming a huge bestseller.

Col. Harland Sanders was 62 when, as the story goes, he used his first Social Security check to found his bird-slaughter enterprise, Kentucky Fried Chicken. He was 74 in 1964 when he sold the business for $2 million. Sanders appeared directly after the sale on I’ve Got a Secret.


Before Jerry Bruckheimer was one of the world’s most successful film and TV producers, he and his partner Don Simpson were 1980s Hollywood wunderkinds, matching high energy to pop music in a handful of brash blockbuster vehicles. The most successful of them was probably Top Gun, a muscular ode to Reagan Era militarization.

But even by the lax standards of Hollywood, Simpson was a huge mess, addicted to drugs, plastic surgery, prostitutes and S&M. Bruckheimer dragged his feet on dissolving the partnership, but he knew he needed to distance himself from his toxic collaborator. As a 1996 Wall Street Journal report by Thomas R. King and John Lippman put it in the wake of Simpson’s death due to 21 different drugs in his system, the end came like this: “For Jerry Bruckheimer, the last straw was the dead doctor in the pool house.”

Another excerpt from that same WSJ piece:

Surgery and Diets

The hits, however, seemed to dry up. The producers, close friends say, were still reeling from the disappointment of Days of Thunder and were struggling to figure out how their formula went wrong.

Friends noticed that Mr. Simpson, who had a weight problem and a penchant for yo-yo dieting, seemed increasingly determined to reinvent himself. He underwent a series of plastic-surgery operations; one friend says that among the procedures he had were a chin implant, several face lifts, and placenta injections. He began disappearing for months at a time, telling friends he was at Canyon Ranch, where most visitors stay only a few days. And he began talking about finding new projects in which he could appear as an actor.

At night, he led a life that a number of people close to him thought was growing increasingly dangerous. He had always been known for his appetite for prostitutes; he was close friends with Hollywood madam Heidi Fleiss.

But Mr. Simpson was going beyond sex, sinking deeper into increasingly sadomasochistic and destructive behavior, say people who know him. His reputation was such that he is the subject of an entire chapter — titled “Don Simpson: An Education in Pain” — in a salacious new book penned by four Hollywood prostitutes. The book, You’ll Never Make Love in This Town Again, says his “serious bondage games were like something out of Marquis de Sade.”

A Prostitute and Kierkegaard

James Toback, a screenwriter who may have been the last person to talk with Mr. Simpson before his death, says his friend would frequently regale him with stories of his exploits with women.

“I know that he was obsessed with women, but it was not just sexual — it was psychological,” says Mr. Toback, a screenwriter of movies such as Bugsy. “He was never just interested in [having sex] with a girl. Even if it was a call girl, it was to get into some kind of serious philosophical discussion with her. He wanted to know what she read, what her parents were like, why she did what she did.” Mr. Toback tells of one conversation with Mr. Simpson: “He said he had met this girl, that she was fascinating and that her favorite philosopher was Kierkegaard.”

Mr. Toback says that he never saw Mr. Simpson take drugs. “But I had the feeling in many of our conversations, the last one included, that he was hyper and speeded up at the beginning,” Mr. Toback says. “But in the last hour, he’d been drinking a lot of red wine and he would wind down.”•


Wernher von Braun wasn’t worried about helping to murder millions of people, but he was concerned about the solitude of astronauts during space travel. Odd priorities.

The philosophical spelunker Michel Siffre went so far as to embed himself in caves and icebergs for months at a time in the 1960s and 1970s to understand prolonged isolation. Time stopped having meaning for him. The pristine terrain he ultimately explored was inside his own head.

It’s perplexing in this age of robotics that extended space trips to Mars and such need to involve humans at all. Missions are far cheaper with robots alone, which can collect the same information. While colonization is the ultimate goal, it needn’t be the immediate one.

But we’re likely going up sooner rather than later, since peopled space flights are an easier sell. They flatter us, remind us of ourselves. The loneliness of the long-distance “runner” is therefore a complicated problem for NASA and private programs. The longest such experiment testing human endurance in seclusion has just begun.

From the BBC:

A team of NASA recruits has begun living in a dome near a barren volcano in Hawaii to simulate what life would be like on Mars.

The isolation experience, which will last a year starting on Friday, will be the longest of its type attempted.

Experts estimate that a human mission to the Red Planet could take between one and three years.

The six-strong team will live in close quarters under the dome, without fresh air, fresh food or privacy.

They closed themselves away at 15:00 local time on Friday (01:00 GMT Saturday).

A journey outside the dome – which measures only 36ft (11m) in diameter and is 20ft (6m) tall – will require a spacesuit.

A French astrobiologist, a German physicist and four Americans – a pilot, an architect, a journalist and a soil scientist – make up the NASA team.

The men and women will each have a small sleeping cot and a desk inside their rooms. Provisions include powdered cheese and canned tuna.•


Back when people were impressed by those who possessed lots of fairly useless facts, I was always good at trivia, and it never once made me feel smart or satisfied. Because it was just a parlor trick, really. Read a lot and in an irregular pattern and you too can be crammed with minutiae. Now that everyone can look up every last thing on their phones in just seconds, all of life has become an open-book test. Trivial knowledge is (thankfully) no longer valued.

From Douglas Coupland’s FT column about his participation in a Trivia Night contest:

The larger question for me during the trivia contest evening was, “Wait — we used to have all of this stuff stored in our heads but now, it would appear, we don’t. What happened?” The answer is that all of this crap is still inside our heads — in fact, there’s probably more crap than ever inside our heads — it’s just that we view it differently now. It’s been reclassified. It’s not trivia any more: it’s called the internet and it lives, at least for the foreseeable future, outside of us. The other thing that happened during the trivia contest is the realisation that we once had a thing called a-larger-attention-span-than-the-one-we-now-have. Combine these two factors together and we have a reasonably good reason to explain why a game of trivia in 2015 almost feels like torture. I sat there with four other reasonably bright people, not necessarily knowing the answers to all of the questions, but knowing that the answers, no matter how obtuse, could be had in a few seconds without judgment on my iPhone 6 Plus. But then I decided the evening was also a good reminder of how far things have come since the early 1980s heyday of the board game Trivial Pursuit.

Q: What country is north, east, south and west of Finland?

A: Norway.

Q: Clean, Jerk and Snatch are terms used in which sport?

A: Weightlifting.

Q: Why was trivia such a big thing in the late 20th century?

A: Because society was generating far more information than it was generating systems with which to access that information. People were left with constellations of disconnected, randomly stored facts that could leave one feeling overwhelmed. Trivia games flattered 20th-century trivia players by making them feel that there was both value to having billions of facts in one’s head, and that they were actually easily retrieved. But here in 2015 we know that facts are simply facts. We know where they’re stored and we know how to access them. If anything, we’re a bit ungrateful, given that we know the answer to just about everything.•


 

10 search-engine keyphrases bringing traffic to Afflictor this week:

  1. howard carter finding king tut
  2. edward o. thorp on gambling
  3. diarrhea in a spaghetti pot
  4. woman swallows lizard
  5. fran lebowitz recent comments los angeles
  6. larry flynt and terry southern
  7. america’s very first freak show
  8. hugh hefner paul snider
  9. frank gifford fred exley
  10. what would aleksandr solzhenitsyn have thought of putin?

This week, President George W. Bush, who watched indifferently as New Orleans sank, returned to finish the job with a rain dance.

 

  • Evan Osnos explores the meaning of Trump’s early support.
  • Joseph Stiglitz offers a straightforward prescription for wealth inequality.
  • Biomimetics has progressed remarkably in the last decade.
  • Forrester Reports offers a relatively sanguine take on automation.
  • Julian Baggini explains why ISIS attacks on antiquities are so troubling.
  • Steve Ross rose from the funeral biz to the head of Warner Communications.
  • A brief note from 1891 about show biz.

It’s logical if not desirable that war becomes more automated, since it only takes one nation pursuing the dream of a robot army to detonate a new arms race. I’ve thought more about weapons systems discrete from human beings than I have about enhanced soldiers, but the U.S. Army Research Laboratory has already given great consideration to the latter. The recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” imagines fewer of us going into battle, with those who do being “super humans” augmented by exoskeletons, implants and internal sensors. It certainly ranges into what currently would be considered sci-fi territory.

From Patrick Tucker at Defense One:

People, too, will be getting a technological upgrade. “The battlefield of the future will be populated by fewer humans, but these humans would be physically and mentally augmented with enhanced capabilities that improve their ability to sense their environment, make sense of their environment, and interact with one another, as well as with ‘unenhanced humans,’ automated processes, and machines of various kinds,” says the report.

What exactly constitutes an enhanced human is a matter of technical dispute. After all, night-vision goggles represent a type of enhancement, as does armor. The military has no problem discussing future plans in those areas, but what the workshop participants anticipate goes well beyond flak jackets and gear. …

The report envisions enhancement taking several robotic steps forward. “To enable humans to partner effectively with robots, human team members will be enhanced in a variety of ways. These super humans will feature exoskeletons, possess a variety of implants, and have seamless access to sensing and cognitive enhancements. They may also be the result of genetic engineering. The net result is that they will have enhanced physical capabilities, senses, and cognitive powers. The presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution,” says the report.•


Attempting to reverse aging–even defeat death–seems like science fiction to most, but it’s just science to big-picture gerontologist Aubrey de Grey, who considers himself a practical person. Given enough time it certainly makes sense that radical life-extension will be realized, but the researcher is betting the march toward a-mortality will begin much sooner than expected. It frustrates him to no end that governments and individuals alike usually don’t accept death as a sickness to be treated. Some of those feelings boiled over when he was interviewed by The Insight. An excerpt:

The Insight:

I’m interested in the psychology of people, I guess you can put them into two camps: one doesn’t have an inherent understanding of what you’re doing or saying, and the other camp willingly resign themselves to living a relatively short life.

You’ve talked to a whole wealth of people and come across many counter-opinions, have any of them had any merit to you, have any of them made you take a step back and question your approach?

Aubrey de Grey:

Really, no. It’s quite depressing. At first, really, I was my own only effective critic for the feasibility – certainly never a case or example of an opinion that amounted to a good argument against the desirability of any of this work; that was always 100% clear to me, that it would be crazy to consider this to be a bad idea. It was just a question of how to go about it. All of the stupid things that people say, like, “Where would we put all the people?” or, “How would we pay the pensions?” or, “Is it only for the rich?” or, “Won’t dictators live forever?” and so on, all of these things… it’s just painful. Especially since most of these things have been perfectly well answered by other people well before I even came along. So, it’s extraordinarily frustrating that people are so wedded to the process of putting this out of their minds, by however embarrassing their means; coming up with the most pathetic arguments, immediately switching their brains off before realising their arguments might indeed be pathetic.

The Insight:

It might be a very obvious question, but it just sprung to mind – maybe you’ve been asked this before, it’s extremely philosophical and speculative – what do you think happens when you die?

Aubrey de Grey:

Oh, fuck off. I don’t give a damn. I’m a practical kind of guy – I’m not intending to be that experiment.•


The main difference between rich people and poor people is that rich people have more money. 

That’s it, really. Those with wealth are just as likely to form addictions, get divorces and engage in behaviors we deem responsible for poverty. They simply have more resources to fall back on. People without that cushion often land violently, land on the streets. Perhaps they should be extra careful since they’re in a more precarious position, but human beings are human beings: flawed. 

In the same ridiculously simple sense, homeless people are in that condition because they don’t have homes. A lot of actions and circumstances may have contributed to that situation, but the home part is the piece of the equation we can actually change. The Housing First initiative has proven thus far that it’s good policy to simply provide homes to people who have none. It makes sense in both human and economic terms. But it’s unpopular in the U.S. because it falls under the “free lunch” rubric, despite having its roots in the second Bush Administration. Further complicating matters is the shortage of urban housing in general.

In a smart Aeon essay, Susie Cagle looks at the movement, which has notably taken root in the conservative bastion of Utah, a state which has reduced homelessness by more than 90% in just ten years. An excerpt:

A new optimistic ideology has taken hold in a few US cities – a philosophy that seeks not just to directly address homelessness, but to solve it. During the past quarter-century, the so-called Housing First doctrine has trickled up from social workers to academics and finally to government. And it is working. On the whole, homelessness is finally trending down.

The Housing First philosophy was first piloted in Los Angeles in 1988 by the social worker Tanya Tull, and later tested and codified by the psychiatrist Sam Tsemberis of New York University. It is predicated on a radical and deeply un-American notion that housing is a right. Instead of first demanding that they get jobs and enroll in treatment programmes, or that they live in a shelter before they can apply for their own apartments, government and aid groups simply give the homeless homes.

Homelessness has always been more a crisis of empathy and imagination than one of sheer economics. Governments spend millions each year on shelters, health care and other forms of triage for the homeless, but simply giving people homes turns out to be far cheaper, according to research from the University of Washington in 2009. Preventing a fire always requires less water than extinguishing it once it’s burning.

By all accounts, Housing First is an unusually good policy. It is economical and achievable.•

