In a Washington Post piece, Vivek Wadhwa reveals how bullish he is on the near-term future of robotics in the aftermath of the DARPA challenge. He believes Jetsons-level assistants are close, and although he acknowledges such progress would promote technological unemployment, he doesn’t really dwell on that thorny problem. An excerpt:

For voice recognition, we are already pretty close to C-3PO-like capabilities. Both Apple and Google use artificial intelligence to do a reasonably good job of translating speech to text, even in noisy environments. No bot has passed the Turing Test yet, but they are getting closer and closer. When it happens, your droid will be able to converse with you in complex, human-like interactions.

The computational power necessary to enable these robots to perform these difficult tasks is still lacking. Consider, however, that in about seven or eight years, your iPhone will have the computational ability of a human brain, and you can understand where we are headed.

Robots will be able to walk and talk like human beings.

What are presently halting steps moving up stairs will, in the next DARPA challenge, become sure-footed ascents. The ability to merely open a door will become that of opening a door and holding a bag of groceries and making sure the dog doesn’t get out.

And, yes, Rosie will replace lots of human jobs, and that is reason to worry — and cheer.•

Saudi Arabia is not customarily a place associated with green energy and women’s rights, but a rich country that wants to stay that way needs to adapt. Two excerpts from articles about the nation transforming in at least some ways: Jeffrey Ball’s Atlantic piece “Why the Saudis Are Going Solar” and Juliane von Mittelstaedt and Samiha Shafy’s Spiegel feature “Lifting the Veil.”

____________________________

From Ball:

The Saudis burn about a quarter of the oil they produce—and their domestic consumption has been rising at an alarming 7 percent a year, nearly three times the rate of population growth. According to a widely read December 2011 report by Chatham House, a British think tank, if this trend continues, domestic consumption could eat into Saudi oil exports by 2021 and render the kingdom a net oil importer by 2038.

That outcome would be cataclysmic for Saudi Arabia. The kingdom’s political stability has long rested on the “ruling bargain,” whereby the royal family provides citizens, who pay no personal income taxes, with extensive social services funded by oil exports. Left unchecked, domestic consumption could also limit the nation’s ability to moderate global oil prices through its swing reserve—the extra petroleum it can pump to meet spikes in global demand. If Saudi rulers want to maintain control at home and preserve their power on the world stage, they must find a way to use less oil.

Solar, they have decided, is an obvious alternative. In addition to having some of the world’s richest oil fields, Saudi Arabia also has some of the world’s most intense sunlight. (On a map showing levels of solar radiation, with the sunniest areas colored deep red, the kingdom is as blood-red as a raw steak.) Saudi Arabia also has vast expanses of open desert seemingly tailor-made for solar-panel arrays.•

____________________________

From von Mittelstaedt and Shafy:

In 2012, Saudi Arabia began enforcing a law that allows only females to work in lingerie stores. Gradually, women were also granted the right to sell abayas, make-up, handbags and shoes. Children’s toys. Clothes. Slowly but surely, men were banished from these realms.

Female participation in the workforce, however, brought with it a host of new problems. How could women get to work, when they’re not allowed to drive? Who was going to look after their children? What happens if they’re expecting? More laws have subsequently been passed, from a right to ten weeks of paid parental leave, to a right to work part-time and a right to childcare support. A revolution started by lingerie. Only in Saudi Arabia.

Society has undergone dramatic change in the last ten years, ever since the late King Abdullah succeeded to the throne in 2005. The change has been especially dramatic since 2011. The main reason for the transformation is that a growing number of women are now working, and not just as civil servants, teachers and doctors. They’re increasingly better-educated and financially independent and above all, they’re a far more visible presence. They’re leaving the isolation of their homes and are free to travel around inside the country, at least, to stay in hotels, and to set up companies. There are now even women’s shelters in Saudi Arabia and discussions of violence against women are no longer the taboo they used to be. The way women are perceived has changed – as has the way they perceive themselves.

“I used to be afraid all the time, I avoided speaking to strangers,” says Alamri. “But then I started to open up and meet people, and to enjoy life.” Her husband, however, began to stop by the store where she worked. He spied on her and told her she wasn’t allowed to speak to strange men. At home, he shouted at her. She began to ask herself why she needed him. She was earning money, after all. Not a lot, but enough to support herself. After two years, she filed for divorce.•

I was on the subway the other day and a disparate group of six people of different ages, races and genders began a spontaneous conversation about how they couldn’t afford to live anywhere nice anymore and how the middle class was gone in America, that the country wasn’t for them anymore. Small sample size to be sure, but one that’s backed up by more than four decades of research. Part of the problem could be remedied politically if finding solutions were in vogue in America, but the bigger picture would seem to be a grand sweep of history that announced itself in the aftermath of the Great Recession, as profits returned but not jobs.

I fear Derek Thompson’s excellent Atlantic feature “A World Without Work” may be accurate in its position that this time it’s different, that technological unemployment may take root in America (and elsewhere), and I think one of the writer’s biggest contributions is explaining how relatively quickly the new normal can take hold. (He visits Youngstown, a former industrial boomtown that went bust, to understand the ramifications of work going away.)

I don’t believe a tearing of the social fabric need attend an enduring absence of universal employment provided wealth isn’t aggregated at one end of the spectrum, but I don’t have much faith right now in government to step into the breach should such opportunities significantly deteriorate. Much of Thompson’s piece is dedicated to finding potential solutions to a radical decline of Labor–a post-workist world. He believes America can sustain itself if citizens are working fewer hours but perhaps not if most don’t need to punch the clock at all. I’m a little more sanguine than that if basic needs are covered. Then I think we’ll see people get creative.

An excerpt:

After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.

For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.

Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.

Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.

To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.

Excerpts from a pair of recent Harvard Business Review articles that analyze the increasing insinuation of robots into the workplace. The opening of Walter Frick’s “When Your Boss Wears Metal Pants” examines the emotional connection we quickly make with robots that can feign social cues. In “The Great Decoupling,” Amy Bernstein and Anand Raman discuss technological unemployment, among other topics, with Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age.

___________________________

From Frick:

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society.•

___________________________

From Bernstein and Raman:

Harvard Business Review:

As the Second Machine Age progresses, will there be any jobs for human beings?

Andrew McAfee:

Yes, because humans are still far superior in three skill areas. One is high-end creativity that generates things like great new business ideas, scientific breakthroughs, novels that grip you, and so on. Technology will only amplify the abilities of people who are good at these things.

The second category is emotion, interpersonal relations, caring, nurturing, coaching, motivating, leading, and so on. Through millions of years of evolution, we’ve gotten good at deciphering other people’s body language…

Erik Brynjolfsson:

…and signals, and finishing people’s sentences. Machines are way behind there.

The third is dexterity, mobility. It’s unbelievably hard to get a robot to walk across a crowded restaurant, bus a table, take the dishes back into the kitchen, put them in the sink without breaking them, and do it all without terrifying the restaurant’s patrons. Sensing and manipulation are hard for robots.

None of those is sacrosanct, though; machines are beginning to make inroads into each of them.

Andrew McAfee:

We’ll continue to see the middle class hollowed out and will see growth at the low and high ends. Really good executives, entrepreneurs, investors, and novelists—they will all reap rewards. Yo-Yo Ma won’t be replaced by a robot anytime soon, but financially, I wouldn’t want to be the world’s 100th-best cellist.•

Softbank’s Pepper looks like a child killed by a lightning strike who returned as a ghost to make you pay for handing him a watering can during an electrical storm.

He’s described as an “emotional robot,” which makes me take an immediate disliking to him. Manufactured to express feelings based on stimuli in his surroundings, Pepper is supposed to be shaped by his environment, but I wonder if his behavior will shape those who own him. We may get an answer since the robot sold out in Japan in under a minute and will soon be available for sale internationally.

From Marilyn Malara at UPI:

The humanoid robot is described as one that can feel emotion in a way humans do naturally through a system similar to a human’s hormonal response to stimuli. The robot can generate its own emotions by gathering information from its cameras and various sensors. Softbank says that Pepper is a “he” and can read human facial expressions, words and surroundings to make decisions. He can sigh or even raise his voice; he can get scared from dimming lights and happy when praised.

Along with the product’s launch, 200 applications are available to download into the robot including one that can record everyday life in the form of a robotic scrapbook.

Last year, Nestle Japan used Pepper to sell Nescafe coffee machines in appliance stores all over the country. “Pepper will be able to explain Nescafe products and services and engage in conversation with consumers,” Nestle Japan CEO Kohzoh Takaoka said in October before its roll-out.•

____________________________

“Can you lend me $100?”

In the New York Times, A.O. Scott, who is quietly one of the funniest writers working anywhere, offers a largely positive review of philosopher Susan Neiman’s new book about perpetual adolescence, something that’s become the norm in this era of fanboy (and -girl) ascendancy, its commodification seemingly having reached a saturation point until, yes, the next comic-book or YA franchise. The opening:

A great deal of modern popular culture — including just about everything pertaining to what French savants like to call le nouvel âge d’or de la comédie américaine — runs on the disavowal of maturity. The ideal consumer is a mirror image of a familiar comic archetype: a man-child sitting in his parents’ basement with his video games and his Star Wars figurines; a postgraduate girl and her pals treating the world as their playground. Baby boomers pursue perpetual youth into retirement. Gen-Xers hold fast to their skateboards, their Pixies T-shirts and their Beastie Boys CDs. Nobody wants to be an adult anymore, and every so often someone writes an article blaming Hollywood, attachment parenting, global capitalism or the welfare state for this catastrophe. I’ve written one or two of those myself. It’s not a bad racket, and since I’m intimately acquainted, on a professional basis, with the cinematic oeuvre of Adam Sandler, I qualify as something of an expert. 

In the annals of anti-infantile cultural complaint, Susan Neiman’s new book, Why Grow Up?, is both exemplary and unusual. An American-born philosopher who lives in Berlin, Neiman has a pundit’s fondness for the sweeping generalization and the carefully hedged argumentative claim. “I’m not suggesting that we do without the web entirely,” she writes in one of her periodic reflections on life in the digital age, “just that we refuse to let it rule.” Elsewhere she observes that “if you spend your time in cyberspace watching something besides porn and Korean rap videos, you can gain a great deal,” a hypothesis I for one am eager to test.•

Wow, this is wonderful: Nicholas Carr posted a great piece from a recent lecture in which he addressed Marshall McLuhan’s idea of automation as media. In this excerpt, he tells a history of how cartography, likely the first medium, went from passive to active player as we transitioned from paper to software:

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving.•

I believe Weak AI can remake a wide swath of our society in the coming decades, but the more sci-fi Strong AI moonshots don’t seem within reach to me. When Yuval Harari worries that technologists may play god, he’s saying nothing theoretically impossible. In fact, the innovations he’s discussing (genetic engineering, cyborgism, etc.) will almost definitely occur if we get lucky (and creative) and survive any near-term extinction.

But the thing about Silicon Valley remaking our world is that it’s really tough to do that, especially when dealing with such hard problems–even the hard problem (i.e., consciousness). Google is an AI company disguised as a search company, but it’s certainly possible that it never becomes great at anything beyond search and (perhaps) a few Weak AI triumphs. Time will tell. But it probably will take significant time.

In a Washington Post piece, Bhaskar Chakravorti wonders if Google X is more moonshot or crater, though I think it’s too early to be assessing such things. Creating 100% driverless autos wasn’t going to happen overnight, let alone radical life extension. An excerpt:

In its relentless hunt for innovation, Google is a voracious acquirer of innovative companies. In the two years prior to 2014, it outspent its five closest rivals combined on acquisitions. Here, too, it has failed in dramatic ways. A single acquisition, Motorola Mobility, cost $12 billion — almost half the amount that Google spent on all its acquisitions over a decade — which it sold for $3 billion two years later.

None of these factors deters Google’s leaders or its many admirers. Much of the public focus has shifted recently to its Google X unit, which not only has a chief most appropriately named, Astro Teller, it has a manager with an official job title of Head of Getting Moonshots Ready for Contact With the Real World. Now, the drumbeat has picked up as some of Google’s moonshots come closer to landing. The Google self-driven car is coming around the corner quite literally.  Google’s high-altitude balloons are being tested to offer Internet access to those without access. And the latest: Google intends to take on the myriad urban innovation challenges with its brand new Sidewalk Labs. Beyond the roads, sidewalks and the skies, Google wants to tinker with life itself, from glucose-monitoring contact lenses to longevity research.

Google’s revenue source is essentially unchanged and yet it spends disproportionately to move the needle. But these unprecedented moonshots could simply be money pits.•

Prior to 1975, the summer was a dead season for movies, but Jaws changed all that. Released in the warm months to capitalize on its beach theme, Steven Spielberg’s adaptation of Peter Benchley’s bestseller remade the film business, and not only for the better, as the chase for the next blockbuster, the trusty tent pole, began in earnest. (It also had a bad effect on sharks, which have much more to fear from us than we do from them.)

Four days before the film’s momentous release, Benchley, who wrote the screenplay, and star Roy Scheider guested on Good Night America, hosted by Geraldo Rivera, who describes the picture as “the chilling story of a prehistoric eating machine.” At the very last moment, his production team talked Rivera out of wearing only a Speedo and a mustache during the interview, though he really, really wanted to.

Geraldo begins the program with allegations about the Rockefeller Commission further clouding the Kennedy Assassination. There are also filmed interviews in Louisiana with Mick and Bianca Jagger and an exposé on psychic and faith healers, including Rev. Bernard Zovluck of Times Square. The guest announcer is Don Imus, who once killed a shark he suspected of stealing his cocaine. Watch here.•

The unanswered questions that we have about the Snowden Affair are probably a little different than the ones circulating in the head of cybersecurity expert and former fugitive John McAfee, who has written an unsurprisingly strange, paranoid and colorful piece on the topic for International Business Times. An excerpt:

The Russian interviewer also asked me about Snowden: “In your opinion, is Edward Snowden a real character or one invented by the intelligence services?”

And this was my answer:

“I doubt everything, even my own senses at times. Is the apparent US government the real US government? Could the real government be a committee of the largest corporate entities who mount this play of democracy to veil the real machinations?

Are the divisions of the world into apparent “countries” even real? Are the apparent divisions within my own country real? Do we really have a tripartite system of government, where the executive, legislative and judicial divisions are, in fact, real divisions? I could go on forever.

As to Edward Snowden, I find the following inconsistencies to be very troubling:

1. He is a man of soft character and limited experience in the difficult and dangerous world into which he so willingly and knowingly thrust himself. I have personally been a fugitive. I have experienced many dangers and difficult situations, and even I with my excellent survival skills would not willingly bring down such wrath upon myself. Why would a man of Snowden’s apparent character do so?

2. He was safe in Hong Kong prior to entering Russia. With no offense to your country, I believe that Snowden was smart enough to know that he could have faded into the back alleys and byways of Hong Kong and, with his talents, have led a thriving existence there. Chinese women are equally as attractive as Russian women and not quite so dangerous. It is cheaper to live in Hong Kong and the weather is better. It is, quite frankly, a colourful place full of opportunity for a clever person. Why did he leave for Russia?

3. I doubt the truth of it all because my only source of information on the subject I have obtained through the world’s press. What truth can there be in it?”•

In a New Statesman essay, Yuval Noah Harari, author of the great book Sapiens, argues that if we’re on the precipice of a grand human revolution–in which we commandeer evolutionary forces and create a post-scarcity world–it’s being driven by private-sector technocracy, not politics, that attenuated, polarized thing. The next Lenins, the new visionaries focused on large-scale societal reorganization, Harari argues, live in Silicon Valley, and even if they don’t succeed, their efforts may significantly impact our lives. An excerpt:

Whatever their disagreements about long-term visions, communists, fascists and liberals all combined forces to create a new state-run leviathan. Within a surprisingly short time, they engineered all-encompassing systems of mass education, mass health and mass welfare, which were supposed to realise the utopian aspirations of the ruling party. These mass systems became the main employers in the job market and the main regulators of human life. In this sense, at least, the grand political visions of the past century have succeeded in creating an entirely new world. The society of 1800 was completely destroyed and we are living in a new reality altogether.

In 1900 or 1950 politicians of all hues thought big, talked big and acted even bigger. Today it seems that politicians have a chance to pursue even grander visions than those of Lenin, Hitler or Mao. While the latter tried to create a new society and a new human being with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and supercomputers. In the coming decades, technological breakthroughs are likely to change human society, human bodies and human minds in far more drastic ways than ever before.

Whereas the Nazis sought to create superhumans through selective breeding, we now have an increasing arsenal of bioengineering tools at our disposal. These could be used to redesign the shapes, abilities and even desires of human beings, so as to fulfil this or that political ideal. Bioengineering starts with the understanding that we are far from realising the full potential of organic bodies. For four billion years natural selection has been tinkering and tweaking with these bodies, so that we have gone from amoebae to reptiles to mammals to Homo sapiens. Yet there is no reason to think that sapiens is the last station. Relatively small changes in the genome, the neural system and the skeleton were enough to upgrade Homo erectus – who could produce nothing more impressive than flint knives – to Homo sapiens, who produces spaceships and computers. Who knows what the outcome of a few more changes to our genome, neural system and skeleton might be? Bioengineering is not going to wait patiently for natural selection to work its magic. Instead, bioengineers will take the old sapiens body and intentionally rewrite its genetic code, rewire its brain circuits, alter its biochemical balance and grow entirely new body parts.

On top of that, we are also developing the ability to create cyborgs.•

From the March 10, 1876 New York Times:

Louisville, March 9--The Bath County (Ky.) News of this date says: ‘On last Friday a shower of meat fell near the house of Allen Crouch, who lives some two or three miles from the Olympian Springs in the southern portion of the county, covering a strip of ground about one hundred yards in length and fifty wide. Mrs. Crouch was out in the yard at the time, engaged in making soap, when meat which looked like beef began to fall around her. The sky was perfectly clear at the time, and she said it fell like large snow flakes, the pieces as a general thing not being much larger. One piece fell near her which was three or four inches square. Mr. Harrison Gill, whose veracity is unquestionable, and from whom we obtained the above facts, hearing of the occurrence visited the locality the next day, and says he saw particles of meat sticking to the fences and scattered over the ground. The meat when it first fell appeared to be perfectly fresh.’

The correspondent of the Louisville Commercial, writing from Mount Sterling, corroborates the above, and says the pieces of flesh were of various sizes and shapes, some of them being two inches square. Two gentlemen, who tasted the meat, express the opinion that it was either mutton or venison.•

The robotic store has been a long-held dream, and in and of itself it’s a good thing, but it’s certainly not a positive for Labor unless new work opportunities pop up to replace the ones that disappear or we come to some sort of political solution to a shrinking need for human hands. In Iowa, a completely automated nonprofit grocery will offer shoppers healthy food, which is wonderful, but not completely wonderful. From Christopher Snyder:

No more long lines at the grocery store – the future of food shopping is getting a high-tech upgrade.

Des Moines, Iowa is planning to build a first-of-a-kind robotic grocery store as an experiment to offer food and necessities to locals anytime at their convenience.

A partnership between the nonprofit Eat Greater Des Moines and the business equipment firm Oasis24seven will see an automated, vending machine-style unit come to the area.

“Throughout Des Moines, there are areas of town where access to quality food is limited,” said Aubrey Alvarez, the nonprofit’s executive director. “We would love for a full service grocery store to move into these areas, but until that time the robotic unit will address the gap in the community.”

She added this “project takes a simple and familiar idea, a vending machine, and turns it on its head. Robotic Retail will be accessible to everyone.”•

If Marshall McLuhan and Jerome Agel were still alive, they would likely not collaborate with Quentin Fiore (95 this year) on a physical book, not even on one as great as The Medium Is the Massage, a paperback that fit between its covers something akin to the breakneck genius of Godard’s early-’60s explosion. Would they create a Facebook page that comments on Facebook or a Twitter account of aphorisms or maybe an app? I don’t know, but it likely wouldn’t be a leafy thing you could put on a wooden shelf.

About 10 days ago, I bought a copy of The Age of Earthquakes, a book created by Douglas Coupland, Hans Ulrich Obrist and Shumon Basar, which seems a sort of updating of McLuhan’s most famous work, a Massage for the modern head and neck. It looks at our present and future but also, by virtue of being a tree-made thing, the past. As soon as I’m done with the title I’m reading now, I’ll spend a day with Earthquakes and post something about it.

In his latest Financial Times column, Coupland writes about the twin refiners of the modern mood: pharmacology and the Internet, the former of which I think has made us somewhat happier and the latter of which we’ve used, I think, largely to self-medicate, stretching egos to cover unhappiness rather than dealing with it, and as the misery, untreated, expands, so does its cover. We’re smarter because of the connectivity, but I don’t know that it’s put us in a better mood.

Coupland is much more sanguine than I am about it all. He’s in a better mood. An excerpt:

If someone time travelled from 1990 (let alone from 1900) to 2015 and was asked to describe the difference between then and now, they might report back: “Well, people don’t use light bulbs any more; they use these things called LED lights, which I guess save energy, but the light they cast is cold. What else? Teenagers seem to no longer have acne or cavities, cars are much quieter, but the weirdest thing is that everyone everywhere is looking at little pieces of glass they’re holding in their hands, and people everywhere have tiny earphones in their ears. And if you do find someone without a piece of glass or earphones, their faces have this pained expression as if to say, “Where is my little piece of glass? What could possibly be in or on that little piece of glass that could so completely dominate a species in one generation?”

 . . . 

To pull back a step or two; as a species we ought to congratulate ourselves. In just a quarter of a century we have completely rewritten the menu of possible human moods, and quite possibly for the better. Psychopharmacology, combined with the neural reconfiguration generated by extended internet usage, has turned human behaviour into something inexplicable to someone from the not too distant past. We forget this so easily. Until Prozac came out in 1987, the only mood-altering options were mid-century: booze, pot and whatever MGM fed Judy Garland to keep her vibrating for three decades. The Prozac ripple was enormous . . .•

  • What’s The Benefit Of Eating Your Own Placenta?
  • It’s Not A Rat, KFC Swears
  • The Truth About Living With A Micropenis
  • Here’s What Men Really Think About Women’s Pubic Hair
  • Man Shoots Himself In Foot To See How It Feels
  • Farmer’s Wife Accused Of Murder After Body Found In Pile Of Manure
  • DOODY BOUND: Your Toothbrush Is Probably Covered In Poop
  • Teacher’s Sex Toy Selfie Assignment Arouses Parental Concern
  • Family Calls Police In Horror As Apartment Wall Starts Dripping Blood
  • Charlie Brown Voice Actor Has Bizarre Courtroom Meltdown
Mr. Brown, what were you doing with a crack pipe in Jersey City?

I will pour blood all over this courtroom.

The court orders you to undergo psychiatric evaluation.

10 recent search-engine keyphrases bringing traffic to Afflictor this week:

  1. jay bakker with marc maron
  2. francis ford coppola the conversation
  3. collier’s man will conquer space soon
  4. mary todd lincoln adjudged insane
  5. dorothy stratten murder
  6. thomas mann visiting the white house
  7. david frost interviewing yippies
  8. pick your spots baby mlton berle richard pryor
  9. jascha heifetz electric car
  10. john delorean in central park
This week, America’s next President, Donald Trump, said wheelchair-bound pundit Charles Krauthammer is a “loser” who “just sits there.” He also took aim at that socialist jerk who’s always hanging around.

  • Martin Wolf thinks technology is overrated. Daniela Rus thinks it’s amazing.
  • Johann-Dietrich Wörner of the European Space Agency wants moon colonies.
  • Jenna Wortham explains what Uber has meant to African-Americans.
  • J.J. Abrams recalls Dick Smith, special make-up effects artist.

When I post this quote from 1981’s My Dinner with Andre, I don’t know if I should attribute it to Andre Gregory or “Andre Gregory.” Either way, the character’s fear seems more pressing now, though for some, it’s the dream. The passage:

I think it’s quite possible that the 1960s represented the last burst of the human being before he was extinguished, and that this is the beginning of the rest of the future now, that from now on there will simply be all these robots walking around, feeling nothing, thinking nothing, and there’ll be nobody left almost to remind them that there once was a species called a human being, with feelings and thoughts, and that history and memory are right now being erased and soon nobody will really remember that life existed on the planet.•

Marshall McLuhan was right, for the most part. 

The Canadian theorist saw Frankenstein awakening from the operating table before others did, so the messenger was often mistaken for the monster. But he was neither Dr. Victor nor his charged charge, just an observer with a keen eye, one who could recognize patterns and realized humans might not be alone forever in that talent. Excerpts follow from two 1960s pieces that explore his ideas. The first is from artist-writer Richard Kostelanetz’s 1967 New York Times article “Understanding McLuhan (In Part)” and the other from John Brooks’ 1968 New Yorker piece “Xerox Xerox Xerox Xerox.”

____________________________

Kostelanetz’s opening:

Marshall McLuhan, one of the most acclaimed, most controversial and certainly most talked-about of contemporary intellectuals, displays little of the stuff of which prophets are made. Tall, thin, middle-aged and graying, he has a face of such meager individual character that it is difficult to remember exactly what he looks like; different photographs of him rarely seem to capture the same man.

By trade, he is a professor of English at St. Michael’s College, the Roman Catholic unit of the University of Toronto. Except for a seminar called “Communication,” the courses he teaches are the standard fare of Mod. Lit. and Crit., and around the university he has hardly been a celebrity. One young woman now in Toronto publishing remembers that a decade ago, “McLuhan was a bit of a campus joke.” Even now, only a few of his graduate students seem familiar with his studies of the impact of communications media on civilization, those famous books that have excited so many outside Toronto.

McLuhan’s two major works, The Gutenberg Galaxy (1962) and Understanding Media (1964), have won an astonishing variety of admirers. General Electric, I.B.M. and Bell Telephone have all had him address their top executives; so have the publishers of America’s largest magazines. The composer John Cage made a pilgrimage to Toronto especially to pay homage to McLuhan and the critic Susan Sontag has praised his “grasp on the texture of contemporary reality.”

He has a number of eminent and vehement detractors, too. The critic Dwight Macdonald calls McLuhan’s books “impure nonsense, nonsense adulterated by sense.” Leslie Fiedler wrote in Partisan Review: “Marshall McLuhan . . . continually risks sounding like the body-fluids man in Doctor Strangelove.”

Still the McLuhan movement rolls on.•

____________________________

From Brooks:

In the opinion of some commentators, what has happened so far is only the first phase of a kind of revolution in graphics. “Xerography is bringing a reign of terror into the world of publishing, because it means that every reader can become both author and publisher,” the Canadian sage Marshall McLuhan wrote in the spring, 1966, issue of the American Scholar. “Authorship and readership alike can become production-oriented under xerography.… Xerography is electricity invading the world of typography, and it means a total revolution in this old sphere.” Even allowing for McLuhan’s erratic ebullience (“I change my opinions daily,” he once confessed), he seems to have got his teeth into something here. Various magazine articles have predicted nothing less than the disappearance of the book as it now exists, and pictured the library of the future as a sort of monster computer capable of storing and retrieving the contents of books electronically and xerographically. The “books” in such a library would be tiny chips of computer film — “editions of one.” Everyone agrees that such a library is still some time away. (But not so far away as to preclude a wary reaction from forehanded publishers. Beginning late in 1966, the long-familiar “all rights reserved” rigmarole on the copyright page of all books published by Harcourt, Brace & World was altered to read, a bit spookily, “All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system …” Other publishers quickly followed the example.) One of the nearest approaches to it in the late sixties was the Xerox subsidiary University Microfilms, which could, and did, enlarge its microfilms of out-of-print books and print them as attractive and highly legible paperback volumes, at a cost to the customer of four cents a page; in cases where the book was covered by copyright, the firm paid a royalty to the author on each copy produced. But the time when almost anyone can make his own copy of a published book at lower than the market price is not some years away; it is now. All that the amateur publisher needs is access to a Xerox machine and a small offset printing press. One of the lesser but still important attributes of xerography is its ability to make master copies for use on offset presses, and make them much more cheaply and quickly than was previously possible. According to Irwin Karp, counsel to the Authors League of America, an edition of fifty copies of any printed book could in 1967 be handsomely “published” (minus the binding) by this combination of technologies in a matter of minutes at a cost of about eight-tenths of a cent per page, and less than that if the edition was larger. A teacher wishing to distribute to a class of fifty students the contents of a sixty-four-page book of poetry selling for three dollars and seventy-five cents could do so, if he were disposed to ignore the copyright laws, at a cost of slightly over fifty cents per copy.

The danger in the new technology, authors and publishers have contended, is that in doing away with the book it may do away with them, and thus with writing itself. Herbert S. Bailey, Jr., director of Princeton University Press, wrote in the Saturday Review of a scholar friend of his who has cancelled all his subscriptions to scholarly journals; instead, he now scans their tables of contents at his public library and makes copies of the articles that interest him. Bailey commented, “If all scholars followed [this] practice, there would be no scholarly journals.” Beginning in the middle sixties, Congress has been considering a revision of the copyright laws — the first since 1909. At the hearings, a committee representing the National Education Association and a clutch of other education groups argued firmly and persuasively that if education is to keep up with our national growth, the present copyright law and the fair-use doctrine should be liberalized for scholastic purposes. The authors and publishers, not surprisingly, opposed such liberalization, insisting that any extension of existing rights would tend to deprive them of their livelihoods to some degree now, and to a far greater degree in the uncharted xerographic future. A bill that was approved in 1967 by the House Judiciary Committee seemed to represent a victory for them, since it explicitly set forth the fair-use doctrine and contained no educational-copying exemption. But the final outcome of the struggle was still uncertain late in 1968. McLuhan, for one, was convinced that all efforts to preserve the old forms of author protection represent backward thinking and are doomed to failure (or, anyway, he was convinced the day he wrote his American Scholar article). “There is no possible protection from technology except by technology,” he wrote. “When you create a new environment with one phase of technology, you have to create an anti-environment with the next.” But authors are seldom good at technology, and probably do not flourish in anti-environments.•

Grantland has many fine writers and reporters, but the twin revelations for me have been Molly Lambert and Alex Pappademas, whom I enjoy reading as much as anyone working at any American publication. The funny thing is, I’m not much into pop culture, which is ostensibly their beat. But as with the best of journalists, the subject they cover most directly is merely an entry into many other ones, long walks that end up in big worlds. 

Excerpts follow from a recent piece by each. In “Start-up Costs,” a look at Silicon Valley and Halt and Catch Fire, Pappademas circles back to Douglas Coupland’s 1995 novel, Microserfs, a meditation on the reimagined office space written just before Silicon Valley became fully a brand as well as a land. In Lambert’s “Life Finds a Way,” the release of Jurassic World occasions an exploration of the enduring beauty of decommissioned theme parks–dinosaurs in and of themselves–at the tail end of an entropic state. Both pieces are concerned with an imposition on the natural order of things by capitalism.

_______________________________

From Pappademas:

Microserfs hit stores in 1995, which turned out to be a pretty big year for Net-this and Net-that. Yahoo, Amazon, and Craigslist were founded; Javascript, the MP3 compression standard, cost-per-click and cost-per-impression advertising, the first “wiki” site, and the Internet Explorer browser were introduced. Netscape went public; Bill Gates wrote the infamous “Internet Tidal Wave” memo to Microsoft executives, proclaiming in the course of 5,000-plus words that the Internet was “the most important single development to come along since the IBM PC was introduced in 1981.” Meanwhile, at any time between May and September, you could walk into a multiplex not yet driven out of business by Netflix and watch a futuristic thriller like Hackers or Johnny Mnemonic or Virtuosity or The Net, movies that capitalized on the culture’s tech obsession as if it were a dance craze, spinning (mostly absurd) visions of the (invariably sinister) ways technology would soon pervade our lives. Microserfs isn’t as hysterical as those movies, and its vision of the coming world is much brighter, but in its own way it’s just as wrongheaded and nailed-to-its-context.

“What is the search for the next great compelling application,” Daniel asks at one point, “but a search for the human identity?” Microserfs argues that the entrepreneurial fantasy of ditching a big corporation to work at a cool start-up with your friends can actually be part of that search — that there’s a way to reinvent work in your own image and according to your own values, that you can find the same transcendence within the sphere of commerce that the slackers in Coupland’s own Generation X eschewed McJobs in order to chase. The notion that cutting the corporate cord to work for a start-up often just means busting out of a cubicle in order to shackle oneself to a laptop in a slightly funkier room goes unexamined; the possibility that work within a capitalist system, no matter how creative and freeform and unlike what your parents did, might be fundamentally incompatible with self-actualization and spiritual fulfillment is not on the table.•

_______________________________

Lambert’s opening:

I drove out to the abandoned amusement park originally called Jazzland during a trip to New Orleans earlier this year. Jazzland opened in 2000, was rebranded as Six Flags New Orleans in 2003, and was damaged beyond repair a decade ago by the flooding caused by Hurricane Katrina. But in the years since it’s been closed, it has undergone a rebirth as a filming location. It serves as the setting for the new Jurassic World. As I approached the former Jazzland by car, a large roller coaster arced into view. The park, just off Interstate 10, was built on muddy swampland. I have read accounts on urban exploring websites by people who’ve sneaked into the park that say it’s overrun with alligators and snakes.

After the natural disaster the area wasted no time in returning to its primeval state: a genuine Jurassic World. It was in the Jurassic era when crocodylia became aquatic animals, beginning to resemble the alligators currently populating Jazzland. I saw birds of prey circling over the theme park as I reached the front gates, only to be told in no uncertain terms that the site is closed to outsiders. I pleaded with the security guard that I am a journalist just looking for a location manager to talk to, but was forbidden from driving past the very first entrance into the parking lot. I could see the ticket stands and Ferris wheel, but accepted my fate and drove away, knowing I’d have to wait for Jurassic World to see Jazzland. As I drove off the premises, I could still glimpse the tops of the coasters and Ferris wheel, obscured by trees.

I am fascinated by theme parks that return to nature, since the idea of a theme park is such an imposition on nature to begin with — an obsessively ordered attempt to overrule reality by providing an alternate, superior dimension.•

Olaf Stampf, who always conducts smart interviews for Spiegel, has a Q&A with Johann-Dietrich Wörner, the new general director of the European Space Agency. Two quick excerpts follow, one about a moon colony and the other about the potential of a manned Mars voyage.

______________________________

Spiegel:

Which celestial body would you like to travel to most of all?

Johann-Dietrich Wörner:

My dream would be to fly to the moon and build permanent structures, using the raw materials available there. For instance, regolith, or moon dust, could be used to make a form of concrete. Using 3-D printers, we could build all kinds of things with that moon concrete — houses, streets and observatories, for example.

______________________________

Spiegel:

Wouldn’t it be a much more exciting challenge to hazard a joint, manned flight to Mars?

Johann-Dietrich Wörner:

Man will not give up the dream of walking on Mars, but it won’t happen until at least 2050. The challenges are too great, and we don’t have the technologies yet to complete this vast project. Most of all, a trip to Mars would take much too long today. It would be irresponsible, not just from a scientific standpoint, to send astronauts to the desert planet if they could only return after more than two years.•

Rachel Armstrong, a medical doctor who became an architect, wants to combine her twin passions, believing buildings can be created not only from plastics recovered from our waterways but also from biological materials. From Christopher Hume of the Toronto Star:

She also imagines using living organisms such as bacteria, algae and jellyfish as building materials. If that sounds far-fetched, consider the BIQ (Bio Intelligent Quotient) Building in Hamburg. Its windows are filled with water in which live algae that’s fed nutrients. When the sun comes out, the micro-organisms reproduce, raising the temperature of the water. BIQ residents say they love their new digs. It helps that they have no heating bills.

Armstrong then described how objects can be made of plastic dredged from the oceans. It could, she suggested, be a new source of material as well as a way to clean degraded waterways. Her basic desire is to make machinery more biological and unravel the machinery behind the biological. That means figuring out how bacteria talks to bacteria, how algae “communicate.” This isn’t new, of course, but this fusion draws closer all the time.

As that happens, she argues, “consumers can become producers.” In the meantime, the search for “evidence-based truth-seeking systems” continues.

Armstrong, who began her professional life as a doctor, credits her interest in architecture to the time she spent at a leper colony in India in the early ’90s. “What I saw was a different way of life,” she recalls. “I realized we need a more integrated way of being and living so we are at one with our surroundings.”•

The Confederate flag is an American swastika. In calling for its retirement in the aftermath of the horrific Charleston church massacre, Ta-Nehisi Coates of the Atlantic reminds us that the impulse to enslave was initially driven by plunder. Of course, the American flag itself has a similar history, it being the chief symbol of the other U.S. holocaust–the plight of the Native Americans–which was a land grab drenched in blood, first conducted under flags of colonialist nations and then our own.

Coates’ opening:

Last night, Dylann Roof walked into a Charleston church, sat for an hour, and then killed nine people. Roof’s crime cannot be divorced from the ideology of white supremacy which long animated his state nor from its potent symbol—the Confederate flag. Visitors to Charleston have long been treated to South Carolina’s attempt to clean its history and depict its secession as something other than a war to guarantee the enslavement of the majority of its residents. This notion is belied by any serious interrogation of the Civil War and the primary documents of its instigators. Yet the Confederate battle flag—the flag of Dylann Roof—still flies on the Capitol grounds in Columbia.

The Confederate flag’s defenders often claim it represents “heritage not hate.” I agree—the heritage of White Supremacy was not so much birthed by hate as by the impulse toward plunder. Dylann Roof plundered nine different bodies last night, plundered nine different families of an original member, plundered nine different communities of a singular member. An entire people are poorer for his action. The flag that Roof embraced, which many South Carolinians embrace, does not stand in opposition to this act—it endorses it.•

Have someone staying in your home and you want out? Well today’s the day. See I smell terrible all the time. Have me around for a bit and they are sure to want to leave. Really this will work. I want in return iPads, laptops, cash, gold, cash. Email for more info.

At the time of Watergate, the Presidency itself was seen as the problem, the belief being that no one person could handle running the most powerful country in the free world, but now I think gerrymandering and the way we apportion Senate seats without regard to population are more the trouble. The system still works, but certainly not optimally, sometimes barely.

Because of a promotional tie-in with a new Tom Hanks documentary about the ’70s, the Atlantic is presenting several of its key articles from that decade, including Arthur Schlesinger’s 1973 piece “The Runaway Presidency.” An excerpt:

The crisis of the presidency has led some critics to advocate a reconstruction of the institution itself. For a long time people have felt that the job was becoming too much for one man to handle. “Men of ordinary physique and discretion,” Woodrow Wilson wrote as long ago as 1908, “cannot be Presidents and live, if the strain be not somehow relieved. We shall be obliged always to be picking our chief magistrate from among wise and prudent athletes,—a small class.”

But what was seen until the late 1950s as too exhausting physically is now seen, after Vietnam and Watergate, as too dizzying psychologically. In 1968 Eugene McCarthy, the first liberal presidential aspirant in the century to run against the presidency, called for the depersonalization and decentralization of the office. The White House, he thought, should be turned into a museum. Instead of trying to lead the nation, the President should become “a kind of channel” for popular desires and aspirations. Watergate has made the point irresistible. “The office has become too complex and its reach too extended,” writes Barbara Tuchman, “to be trusted to the fallible judgment of any one individual.” “A man with poor judgment, an impetuous man, a sick man, a power-mad man,” adds Max Lerner, “each would be dangerous in the post. Even an able, sensitive man needs stronger safeguards around him than exist today.”

The result is a new wave of proposals to transform the presidency into a collegial institution. Mrs. Tuchman suggests a six-man directorate with a rotating chairman, each member to serve for a year, as in Switzerland. Lerner wants to give the President a Council of State, a body that he would be bound by law to consult and that, because half its members would be from Congress and some from the opposite party, would presumably give him independent advice. Both proposals were, in fact, considered and rejected at the Constitutional Convention.

Hamilton and Jefferson disagreed on many things, but they agreed that the convention had been right in deciding on a one-man presidency. A plural executive, Hamilton contended, if divided within itself, would lead the country into factionalism and anarchy and, if united, could lead it into tyranny.•
