Excerpts


Despite what some narratives say, Bill Gates was completely right about the Internet and mobile. That doesn’t mean he’ll be correct about every seismic shift, but I think his intuition about autonomous cars is almost definitely accurate: Driverless functions will be useful if partially completed and a societal game-changer if completely perfected. Just helpful or a total avalanche. In an interview conducted by Financial Times Deputy Editor John Thornhill, Gates discussed these matters, among many others. An excerpt from Shane Ferro’s article at Business Insider (which relies on Izabella Kaminska’s tweets from the event):

With regards to robots, the economy, and logistics, the takeaway seems to be that Gates thinks we’re in the fastest period of innovation ever, and it’s still unclear how that will affect the economy.

But there’s still quite a way to go. Robots “will be benign for quite some time,” Gates said. The future of work is not in immediate danger — although the outlook is not good for those who have a high school degree or less. 

Gates was also asked about Uber. He seems to think the real disruption to the driving and logistics industry is not going to come until we have fully driverless cars. That’s the “rubicon,” he says.

Kaminska relays that currently, Gates thinks that Uber “is just a reorganization of labour into a more dynamic form.” However, and this is big, Uber does have the biggest research and development budget out there on the driverless vehicle front. And that’s to its advantage.•


“We face a future in which robots will test the boundaries of our ethical and legal frameworks with increasing audacity,” writes Illah Reza Nourbakhsh in his Foreign Affairs article “The Coming Robot Dystopia,” and it’s difficult to envision a scenario in which the pace doesn’t just get faster, cheaper and at least somewhat out of control.

We live in a strange duality now: On one hand, citizens worry that government has too much access to their information–and that’s true–but government is likely tightening its grip just as it’s losing it. Technology easily outpaces legislation, and it’s possible that at some point in the near future even those who espoused hatred of government may be wistful for a stable center. 

From Nourbakhsh:

Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

This is a familiar warning when it comes to technological innovations of all kinds. But there is a crucial distinction between what’s happening now and the last great breakthrough in robotic technology, when manufacturing automatons began to appear on factory floors during the late twentieth century. Back then, clear boundaries separated industrial robots from humans: protective fences isolated robot workspaces, ensuring minimal contact between man and machine, and humans and robots performed wholly distinct tasks without interacting.

Such barriers have been breached, not only in the workplace but also in the wider society: robots now share the formerly human-only commons, and humans will increasingly interact socially with a diverse ecosystem of robots.•


When I put up a post three days ago about the automated grocery store in Iowa, it brought to mind the first attempt at such a store, the Keedoozle, one of Clarence Saunders’ attempts at a resurgence in the aftermath of the Wall Street bath the Memphis-based Piggly Wiggly founder took while attempting, and failing spectacularly, to corner his own company’s stock. In his 1959 New Yorker piece about the Saunders affair, John Brooks described the Keedoozle:

His hopes were pinned on the Keedoozle, an electrically operated grocery store, and he spent the better part of the last twenty years of his life trying to perfect it. In a Keedoozle store, the merchandise was displayed behind glass panels, each with a slot beside it, like the food in an Automat. There the similarity ended, for, instead of inserting coins in the slot to open a panel and lift out a purchase, Keedoozle customers inserted a key that they were given on entering the store. Moreover, Saunders’ thinking had advanced far beyond the elementary stage of having the key open the panel; each time a Keedoozle key was inserted inside a slot, the identity of the item selected was inscribed in code on a segment of recording tape embedded in the key itself, and simultaneously the item was automatically transferred to a conveyor belt that carried it to an exit gate at the front of the store. When a customer had finished his shopping, he would present his key to an attendant at the gate, who would decipher the tape and add up the bill. As soon as this was paid, the purchases would be catapulted into the customer’s arms, all bagged and wrapped by a device at the end of a conveyor belt.

A couple of pilot Keedoozle stores were tried out–one in Memphis and the other in Chicago–but it was found that the machinery was too complex and expensive to compete with the supermarket pushcarts. Undeterred, Saunders set to work on an even more intricate mechanism–the Foodlectric, which would do everything the Keedoozle would do and add up the bill as well.•

______________________

From the February 19, 1937 Brooklyn Daily Eagle:

______________________

The Keedoozle inspired a Memphis competitor in 1947:


Sometime in the 21st century, you and me and Peter Thiel are going to die, and that’s horrible because even when the world is trying, it’s spectacular.

The PayPal cofounder is spending a portion of his great wealth on anti-aging research, hoping to radically extend life if not defeat death, which is a wonderful thing for people of the distant future, though it likely won’t save any of us. I will say that I wholly agree with Thiel that those who oppose radical life extension because it’s “unnatural” are just wrong.

From a Washington Post Q&A Ariana Eunjung Cha conducted with Thiel:

Question:

Leon Kass — the physician who was head of the President’s Council on Bioethics from 2001 to 2005 — as well as a number of other prominent historians, philosophers and ethicists have spoken out against radical life extension. Kass, for instance, has argued that it’s just not natural, that we’ll end up losing some of our humanity in the process. What do you think of their concerns?

Peter Thiel:

I believe that evolution is a true account of nature, but I think we should try to escape it or transcend it in our society. What’s true of evolution, I would argue, is true of all of nature. Even basic dental hygiene. If it’s natural for your teeth to start falling out, then you shouldn’t get cavities replaced? In the 19th century, people made the argument that it was natural for childbirth to be painful for women and therefore you shouldn’t have pain medication. I think the nature argument tends to go very wrong. . . . I think it is against human nature not to fight death.

Question:

What about the possibility of innovation stagnation? Some argue that if you live forever, you won’t be as motivated to invent new ways of doing things.

Peter Thiel:

That’s the Steve-Jobs-commencement-speech-in-2005 argument — that he was working so hard because he knew he was going to die. I don’t believe that’s true. There are many people who stop trying because they think they don’t have enough time. Because they are 85. But that 85-year-old could have gotten four PhDs from 65 to 85, but he didn’t do it because he didn’t think he had enough time. I think these arguments can go both ways. I think some people could be less motivated. I think a lot of people would be more motivated because they would have more time to accomplish something meaningful and significant.•

 


In a 2012 Playboy Interview, Richard Dawkins addressed whether a fuller understanding of genetics would allow us to create something akin to extinct life forms, even prehistoric ones. The passage:

Playboy:

Do we know which came first—bigger brains or bipedalism?

Richard Dawkins:

Bipedalism came first.

Playboy:

How do we know that?

Richard Dawkins:

Fossils. That’s one place the fossils are extremely clear. Three million years ago Australopithecus afarensis were bipedal, but their brains were no bigger than a chimpanzee’s. The best example we have is Lucy [a partial skeleton found in 1974 in Ethiopia]. In a way, she was an upright-walking chimpanzee.

Playboy:

You like Lucy.

Richard Dawkins:

Yes. [smiles]

Playboy:

You’ve said you expect mankind will have a genetic book of the dead by 2050. How would that be helpful?

Richard Dawkins:

Because we contain within us the genes that have survived through generations, you could theoretically read off a creature’s evolutionary history. “Ah, yes, this animal lived in the sea. This is the time when it lived in deserts. This bit shows it must have lived up mountains. And this shows it used to burrow.”

Playboy:

Could that help us bring back a dinosaur? You have suggested crossing a bird and a crocodile and maybe putting it in an ostrich egg.

Richard Dawkins:

It would have to be more sophisticated than a cross. It’d have to be a merging.

Playboy:

Could we recreate Lucy?

Richard Dawkins:

We already know the human genome and the chimpanzee genome, so you could make a sophisticated guess as to what the genome of the common ancestor might have been like. From that you might be able to grow an animal that was close to the common ancestor. And from that you might split the difference between that ancestral animal you re-created and a modern human and get Lucy.•


It doesn’t matter when our families moved to America, because all of us here own the past. 

Slave owners weren’t complete monsters–though they certainly were monstrous selectively–and those of us who got here much later might have acted just as abominably if we had been born to landed parents in the antebellum South–in fact, we probably would have. As the Confederate flag is hopefully lowered for good, the best thing we can do is realize that it’s possible for any people, any nation, to live within a delusion that’s unspeakably cruel to some. And we should ask ourselves whether the American flag looks to native peoples the way the Stars and Bars looks to most of the rest of us.

From Campbell Robertson in the New York Times:

COLUMBIA, S.C. — It has been quite a few years since the lost cause has appeared quite as lost as it did Tuesday. As the afternoon drew on and their retreat turned into a rout, the lingering upholders of the Confederacy watched as license plates, statues and prominently placed Confederate battle flags slipped from their reach.

“This is the beginning of communism,” said Robert Lampley, who was standing in the blazing sun in front of the South Carolina State House shortly after the legislature voted overwhelmingly to debate the current placement of the Confederate battle flag. “The South is the last bastion of liberty and independence. I know we’re going to lose eventually.”

“Our people are dying off,” he went on, before encouraging a white reporter to “keep reproducing.” …

“You’re asking me to agree that my great-grandparent and great-great-grandparents were monsters,” said Greg Stewart, a member of the Sons of Confederate Veterans and the executive director of Beauvoir, the last home of Jefferson Davis.

Mr. Stewart was livid at the “reckless and unnecessary” statement by Philip Gunn, the Republican speaker of the Mississippi House of Representatives, that the Confederate battle saltire needed to be removed from the Mississippi state flag. Mr. Stewart pointed out that the state had voted by huge margins to keep the flag as it was in 2001, and that should have been that.•


Excellent job by Daniel Oberhaus of Vice Motherboard with his smart interview of Noam Chomsky and theoretical physicist Lawrence Krauss about contemporary scientific research and space exploration. Chomsky is disturbed by the insinuation of private enterprise into Space Race 2.0, a quest for trillions, while Krauss thinks the expense of such an endeavor makes it permanently moot. I’m not so sure about the “permanently” part. Both men endorse unmanned space missions as a way to speed up science while scaling back costs. The opening:

Vice:

The cost of entry is so high for space, and arguably for science as well, that the general public seems to be excluded from partaking right from the start. In that light, what can really be done to reclaim the commons of space?

Noam Chomsky:

If you look at the whole history of the space program, a lot of things of interest were discovered, but it was done in a way that sort of ranges from misleading to deceitful. So what was the point of putting a man on the moon? A person is the worst possible instrument to put in space: you have to keep them alive, which is very complex, there are safety procedures, and so on. The right way to explore space is with robots, which is now done. So why did it start with a man in space? Just for political reasons.

Lawrence Krauss:

Of course we should [pressure the government to divert more funds to space programs]. But again, if you ask me if we should appropriate funds for the human exploration of space, then my answer is probably not. Unmanned space exploration, from a scientific perspective, is far more important and useful. If we’re doing space exploration for adventure, then it’s a totally different thing. But from a scientific perspective, we should spend the money on unmanned space exploration.

Noam Chomsky:

John F. Kennedy made it a way of overcoming the failure of the Bay of Pigs and the fact that the Russians in some minor ways had gotten ahead of us, even though the American scientists understood that that wasn’t true. So you had to have a dramatic event, like a man walking on the moon. There’s not very much point to have a man walking on the moon except to impress people.

As soon as the public got bored with watching some guy stumble around on the moon, those projects were ended. Then space exploration began as a scientific endeavor. Things continue to develop like this to a large extent. Take, again, the development of computers. That was presented under the rubric of defense. The Pentagon doesn’t say, ‘We’re taking your tax money so that maybe your grandson can have an iPad.’ What they say is, ‘We’re defending ourselves from the Russians.’ What we’re actually doing is seeing if we can create the cutting edge of the economy.•


In a Washington Post piece, Vivek Wadhwa reveals how bullish he is on the near-term future of robotics in the aftermath of the DARPA challenge. He believes Jetsons-level assistants are close, and although he acknowledges such progress would promote technological unemployment, he doesn’t really dwell on that thorny problem. An excerpt:

For voice recognition, we are already pretty close to C-3PO-like capabilities. Both Apple and Google use artificial intelligence to do a reasonably good job of translating speech to text, even in noisy environments. No bot has passed the Turing Test yet, but they are getting closer and closer. When it happens, your droid will be able to converse with you in complex, human-like interactions.

The computational power necessary to enable these robots to perform these difficult tasks is still lacking. Consider, however, that in about seven or eight years, your iPhone will have the computational ability of a human brain, and you can understand where we are headed.

Robots will be able to walk and talk like human beings.

What are presently halting steps moving up stairs will, in the next DARPA challenge, become sure-footed ascents. The ability to merely open a door will become that of opening a door and holding a bag of groceries and making sure the dog doesn’t get out.

And, yes, Rosie will replace lots of human jobs, and that is reason to worry — and cheer.•


Saudi Arabia is not customarily a place associated with green energy and women’s rights, but a rich country that wants to stay that way needs to adapt. Two excerpts from articles about the nation transforming in at least some ways: Jeffrey Ball’s Atlantic piece “Why the Saudis Are Going Solar” and Juliane von Mittelstaedt and Samiha Shafy’s Spiegel feature “Lifting the Veil.”

____________________________

From Ball:

The Saudis burn about a quarter of the oil they produce—and their domestic consumption has been rising at an alarming 7 percent a year, nearly three times the rate of population growth. According to a widely read December 2011 report by Chatham House, a British think tank, if this trend continues, domestic consumption could eat into Saudi oil exports by 2021 and render the kingdom a net oil importer by 2038.

That outcome would be cataclysmic for Saudi Arabia. The kingdom’s political stability has long rested on the “ruling bargain,” whereby the royal family provides citizens, who pay no personal income taxes, with extensive social services funded by oil exports. Left unchecked, domestic consumption could also limit the nation’s ability to moderate global oil prices through its swing reserve—the extra petroleum it can pump to meet spikes in global demand. If Saudi rulers want to maintain control at home and preserve their power on the world stage, they must find a way to use less oil.

Solar, they have decided, is an obvious alternative. In addition to having some of the world’s richest oil fields, Saudi Arabia also has some of the world’s most intense sunlight. (On a map showing levels of solar radiation, with the sunniest areas colored deep red, the kingdom is as blood-red as a raw steak.) Saudi Arabia also has vast expanses of open desert seemingly tailor-made for solar-panel arrays.•
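
A quick, admittedly crude check on those Chatham House dates, assuming flat production (an assumption the excerpt doesn’t spell out): consumption that starts at a quarter of output and grows 7 percent a year overtakes output in roughly two decades, since

$$0.25 \times 1.07^{t} = 1 \quad\Longrightarrow\quad t = \frac{\ln 4}{\ln 1.07} \approx 20.5 \text{ years}$$

Counting from the 2011 report, that lands in the early 2030s, the same ballpark as the kingdom-as-net-importer scenario.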

____________________________

From von Mittelstaedt and Shafy:

In 2012, Saudi Arabia began enforcing a law that allows only females to work in lingerie stores. Gradually, women were also granted the right to sell abayas, make-up, handbags and shoes. Children’s toys. Clothes. Slowly but surely, men were banished from these realms.

Female participation in the workforce, however, brought with it a host of new problems. How could women get to work, when they’re not allowed to drive? Who was going to look after their children? What happens if they’re expecting? More laws have subsequently been passed, from a right to ten-weeks of paid parental leave, to a right to work part-time and a right to childcare support. A revolution started by lingerie. Only in Saudi Arabia.

Society has undergone dramatic change in the last ten years, ever since the late King Abdullah succeeded to the throne in 2005. The change has been especially dramatic since 2011. The main reason for the transformation is that a growing number of women are now working, and not just as civil servants, teachers and doctors. They’re increasingly better-educated and financially independent and above all, they’re a far more visible presence. They’re leaving the isolation of their homes and are free to travel around inside the country, at least, to stay in hotels, and to set up companies. There are now even women’s shelters in Saudi Arabia and discussions of violence against women are no longer the taboo they used to be. The way women are perceived has changed – as has the way they perceive themselves.

“I used to be afraid all the time, I avoided speaking to strangers,” says Alamri. “But then I started to open up and meet people, and to enjoy life.” Her husband, however, began to stop by the store where she worked. He spied on her and told her she wasn’t allowed to speak to strange men. At home, he shouted at her. She began to ask herself why she needed him. She was earning money, after all. Not a lot, but enough to support herself. After two years, she filed for divorce.•


I was on the subway the other day and a disparate group of six people of different ages, races and genders began a spontaneous conversation about how they couldn’t afford to live anywhere nice anymore and how the middle class was gone in America, that the country wasn’t for them anymore. Small sample size to be sure, but one that’s backed up by more than four decades of research. Part of the problem could be remedied politically if finding solutions were in vogue in America, but the bigger picture would seem to be a grand sweep of history that announced itself in the aftermath of the Great Recession, as profits returned but not jobs.

I fear Derek Thompson’s excellent Atlantic feature “A World Without Work” may be accurate in its position that this time it’s different, that technological unemployment may take root in America (and elsewhere), and I think one of the writer’s biggest contributions is explaining how relatively quickly the new normal can take hold. (He visits Youngstown, a former industrial boomtown that went bust, to understand the ramifications of work going away.)

I don’t believe a tearing of the social fabric need attend an enduring absence of universal employment provided wealth isn’t aggregated at one end of the spectrum, but I don’t have much faith right now in government to step into the breach should such opportunities significantly deteriorate. Much of Thompson’s piece is dedicated to finding potential solutions to a radical decline of Labor–a post-workist world. He believes America can sustain itself if citizens are working fewer hours but perhaps not if most don’t need to punch the clock at all. I’m a little more sanguine than that if basic needs are covered. Then I think we’ll see people get creative.

An excerpt:

After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.

For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.

Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.

Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.

To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.•


Excerpts from a pair of recent Harvard Business Review articles which analyze the increasing insinuation of robots into the workplace. The opening of Walter Frick’s “When Your Boss Wears Metal Pants” examines the emotional connection we quickly make with robots who can feign social cues. In “The Great Decoupling,” Amy Bernstein and Anand Raman discuss technological unemployment, among other topics, with Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age.

___________________________

From Frick:

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society.•

___________________________

From Bernstein and Raman:

Harvard Business Review:

As the Second Machine Age progresses, will there be any jobs for human beings?

Andrew McAfee:

Yes, because humans are still far superior in three skill areas. One is high-end creativity that generates things like great new business ideas, scientific breakthroughs, novels that grip you, and so on. Technology will only amplify the abilities of people who are good at these things.

The second category is emotion, interpersonal relations, caring, nurturing, coaching, motivating, leading, and so on. Through millions of years of evolution, we’ve gotten good at deciphering other people’s body language…

Erik Brynjolfsson:

…and signals, and finishing people’s sentences. Machines are way behind there.

The third is dexterity, mobility. It’s unbelievably hard to get a robot to walk across a crowded restaurant, bus a table, take the dishes back into the kitchen, put them in the sink without breaking them, and do it all without terrifying the restaurant’s patrons. Sensing and manipulation are hard for robots.

None of those is sacrosanct, though; machines are beginning to make inroads into each of them.

Andrew McAfee:

We’ll continue to see the middle class hollowed out and will see growth at the low and high ends. Really good executives, entrepreneurs, investors, and novelists—they will all reap rewards. Yo-Yo Ma won’t be replaced by a robot anytime soon, but financially, I wouldn’t want to be the world’s 100th-best cellist.•



Softbank’s Pepper looks like a child killed by a lightning strike who returned as a ghost to make you pay for handing him a watering can during an electrical storm.

He’s described as an “emotional robot,” which makes me take an immediate disliking to him. Manufactured to express feelings based on stimuli in his surroundings, Pepper is supposed to be shaped by his environment, but I wonder if his behavior will shape those who own him. We may get an answer since the robot sold out in Japan in under a minute and will soon be available for sale internationally.

From Marilyn Malara at UPI:

The humanoid robot is described as one that can feel emotion in a way humans do naturally through a system similar to a human’s hormonal response to stimuli. The robot can generate its own emotions by gathering information from its cameras and various sensors. Softbank says that Pepper is a “he” and can read human facial expressions, words and surroundings to make decisions. He can sigh or even raise his voice; he can get scared from dimming lights and happy when praised.

Along with the product’s launch, 200 applications are available to download into the robot including one that can record everyday life in the form of a robotic scrapbook.

Last year, Nestle Japan used Pepper to sell Nescafe coffee machines in appliance stores all over the country. “Pepper will be able to explain Nescafe products and services and engage in conversation with consumers,” Nestle Japan CEO Kohzoh Takaoka said in October before its roll-out.•

____________________________

“Can you lend me $100?”


In the New York Times, A.O. Scott, who is quietly one of the funniest writers working anywhere, offers a largely positive review of philosopher Susan Neiman’s new book about perpetual adolescence, something that’s become the norm in this era of fanboy (and -girl) ascendancy, its commodification seemingly having reached a saturation point until, yes, the next comic-book or YA franchise. The opening:

A great deal of modern popular culture — including just about everything pertaining to what French savants like to call le nouvel âge d’or de la comédie américaine — runs on the disavowal of maturity. The ideal consumer is a mirror image of a familiar comic archetype: a man-child sitting in his parents’ basement with his video games and his Star Wars figurines; a postgraduate girl and her pals treating the world as their playground. Baby boomers pursue perpetual youth into retirement. Gen-Xers hold fast to their skateboards, their Pixies T-shirts and their Beastie Boys CDs. Nobody wants to be an adult anymore, and every so often someone writes an article blaming Hollywood, attachment parenting, global capitalism or the welfare state for this catastrophe. I’ve written one or two of those myself. It’s not a bad racket, and since I’m intimately acquainted, on a professional basis, with the cinematic oeuvre of Adam Sandler, I qualify as something of an expert. 

In the annals of anti-infantile cultural complaint, Susan Neiman’s new book, Why Grow Up?, is both exemplary and unusual. An American-born philosopher who lives in Berlin, Neiman has a pundit’s fondness for the sweeping generalization and the carefully hedged argumentative claim. “I’m not suggesting that we do without the web entirely,” she writes in one of her periodic reflections on life in the digital age, “just that we refuse to let it rule.” Elsewhere she observes that “if you spend your time in cyberspace watching something besides porn and Korean rap videos, you can gain a great deal,” a hypothesis I for one am eager to test.•


Wow, this is wonderful: Nicholas Carr posted a great piece from a recent lecture in which he addressed Marshall McLuhan’s idea of automation as media. In this excerpt, he traces how cartography, likely the first medium, went from passive tool to active player as we transitioned from paper to software:

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving.•


I believe Weak AI can remake a wide swath of our society in the coming decades, but the more sci-fi Strong AI moonshots don’t seem within reach to me. When Yuval Harari worries that technologists may play god, he’s saying nothing theoretically impossible. In fact, the innovations he’s discussing (genetic engineering, cyborgism, etc.) will almost definitely occur if we get lucky (and creative) and survive any near-term extinction.

But the thing about Silicon Valley remaking our world is that it’s really tough to do that, especially when dealing with such hard problems–even the hard problem (i.e., consciousness). Google is an AI company disguised as a search company, but it’s certainly possible that it never becomes great at anything beyond search and (perhaps) a few Weak AI triumphs. Time will tell. But it probably will take significant time.

In a Washington Post piece, Bhaskar Chakravorti wonders if Google X is more moonshot or crater, though I think it’s too early to be assessing such things. Creating 100% driverless autos wasn’t going to happen overnight, let alone radical life extension. An excerpt:

In its relentless hunt for innovation, Google is a voracious acquirer of innovative companies. In the two years prior to 2014, it outspent its five closest rivals combined on acquisitions. Here, too, it has failed in dramatic ways. A single acquisition, Motorola Mobility, cost $12 billion — almost half the amount that Google spent on all its acquisitions over a decade — which it sold for $3 billion two years later.

None of these factors deters Google’s leaders or its many admirers. Much of the public focus has shifted recently to its Google X unit, which not only has a chief most appropriately named, Astro Teller, it has a manager with an official job title of Head of Getting Moonshots Ready for Contact With the Real World. Now, the drumbeat has picked up as some of Google’s moonshots come closer to landing. The Google self-driven car is coming around the corner quite literally.  Google’s high-altitude balloons are being tested to offer Internet access to those without access. And the latest: Google intends to take on the myriad urban innovation challenges with its brand new Sidewalk Labs. Beyond the roads, sidewalks and the skies, Google wants to tinker with life itself, from glucose-monitoring contact lenses to longevity research.

Google’s revenue source is essentially unchanged and yet it spends disproportionately to move the needle. But these unprecedented moonshots could simply be money pits.•


The unanswered questions that we have about the Snowden Affair are probably a little different than the ones circulating in the head of cybersecurity expert and former fugitive John McAfee, who has written an unsurprisingly strange, paranoid and colorful piece on the topic for International Business Times. An excerpt:

The Russian interviewer also asked me about Snowden: “In your opinion, is Edward Snowden a real character or one invented by the intelligence services?”

And this was my answer:

“I doubt everything, even my own senses at times. Is the apparent US government the real US government? Could the real government be a committee of the largest corporate entities who mount this play of democracy to veil the real machinations?

Are the divisions of the world into apparent “countries” even real? Are the apparent divisions within my own country real? Do we really have a tripartite system of government, where the executive, legislative and judicial divisions are, in fact, real divisions? I could go on forever.

As to Edward Snowden, I find the following inconsistencies to be very troubling:

1. He is a man of soft character and limited experience in the difficult and dangerous world into which he so willingly and knowingly thrust himself. I have personally been a fugitive. I have experienced many dangers and difficult situations, and even I with my excellent survival skills would not willingly bring down such wrath upon myself. Why would a man of Snowden’s apparent character do so?

2. He was safe in Hong Kong prior to entering Russia. With no offense to your country, I believe that Snowden was smart enough to know that he could have faded into the back alleys and byways of Hong Kong and, with his talents, have led a thriving existence there. Chinese women are equally as attractive as Russian women and not quite so dangerous. It is cheaper to live in Hong Kong and the weather is better. It is, quite frankly, a colourful place full of opportunity for a clever person. Why did he leave for Russia?

3. I doubt the truth of it all because my only source of information on the subject I have obtained through the world’s press. What truth can there be in it?”•

 


In a New Statesman essay, Yuval Noah Harari, author of the great book Sapiens, argues that if we’re on the precipice of a grand human revolution–in which we commandeer evolutionary forces and create a post-scarcity world–it’s being driven by private-sector technocracy, not politics, that attenuated, polarized thing. The next Lenins, the new visionaries focused on large-scale societal reorganization, Harari argues, live in Silicon Valley, and even if they don’t succeed, their efforts may significantly impact our lives. An excerpt:

Whatever their disagreements about long-term visions, communists, fascists and liberals all combined forces to create a new state-run leviathan. Within a surprisingly short time, they engineered all-encompassing systems of mass education, mass health and mass welfare, which were supposed to realise the utopian aspirations of the ruling party. These mass systems became the main employers in the job market and the main regulators of human life. In this sense, at least, the grand political visions of the past century have succeeded in creating an entirely new world. The society of 1800 was completely destroyed and we are living in a new reality altogether.

In 1900 or 1950 politicians of all hues thought big, talked big and acted even bigger. Today it seems that politicians have a chance to pursue even grander visions than those of Lenin, Hitler or Mao. While the latter tried to create a new society and a new human being with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and supercomputers. In the coming decades, technological breakthroughs are likely to change human society, human bodies and human minds in far more drastic ways than ever before.

Whereas the Nazis sought to create superhumans through selective breeding, we now have an increasing arsenal of bioengineering tools at our disposal. These could be used to redesign the shapes, abilities and even desires of human beings, so as to fulfil this or that political ideal. Bioengineering starts with the understanding that we are far from realising the full potential of organic bodies. For four billion years natural selection has been tinkering and tweaking with these bodies, so that we have gone from amoebae to reptiles to mammals to Homo sapiens. Yet there is no reason to think that sapiens is the last station. Relatively small changes in the genome, the neural system and the skeleton were enough to upgrade Homo erectus – who could produce nothing more impressive than flint knives – to Homo sapiens, who produces spaceships and computers. Who knows what the outcome of a few more changes to our genome, neural system and skeleton might be? Bioengineering is not going to wait patiently for natural selection to work its magic. Instead, bioengineers will take the old sapiens body and intentionally rewrite its genetic code, rewire its brain circuits, alter its biochemical balance and grow entirely new body parts.

On top of that, we are also developing the ability to create cyborgs.•


The robotic store has been a long-held dream, and in and of itself it’s a good thing, but it’s certainly not a positive for Labor unless new work opportunities pop up to replace those that disappear or we come to some sort of political solution to a shrinking need for human hands. In Iowa, a completely automated nonprofit grocery will offer shoppers healthy food, which is wonderful, but not completely wonderful. From Christopher Snyder:

No more long lines at the grocery store – the future of food shopping is getting a high-tech upgrade.

Des Moines, Iowa is planning to build a first-of-a-kind robotic grocery store as an experiment to offer food and necessities to locals anytime at their convenience.

A partnership between the nonprofit Eat Greater Des Moines and the business equipment firm Oasis24seven will see an automated, vending machine-style unit come to the area.

“Throughout Des Moines, there are areas of town where access to quality food is limited,” said Aubrey Alvarez, the nonprofit’s executive director. “We would love for a full service grocery store to move into these areas, but until that time the robotic unit will address the gap in the community.”

She added this “project takes a simple and familiar idea, a vending machine, and turns it on its head. Robotic Retail will be accessible to everyone.”•


If Marshall McLuhan and Jerome Agel were still alive, they would likely not collaborate with Quentin Fiore (95 this year) on a physical book, not even on one as great as The Medium Is the Massage, a paperback that fit between its covers something akin to the breakneck genius of Godard’s early-’60s explosion. Would they create a Facebook page that comments on Facebook or a Twitter account of aphorisms or maybe an app? I don’t know, but it likely wouldn’t be a leafy thing you could put on a wooden shelf.

About 10 days ago, I bought a copy of The Age of Earthquakes, a book created by Douglas Coupland, Hans Ulrich Obrist and Shumon Basar, which seems a sort of updating of McLuhan’s most-famous work, a Massage for the modern head and neck. It looks at our present and future but also, by virtue of being a tree-made thing, the past. As soon as I’m done with the title I’m reading now, I’ll spend a day with Earthquakes and post something about it.

In his latest Financial Times column, Coupland writes about the twin refiners of the modern mood: pharmacology and the Internet, the former of which I think has made us somewhat happier and the latter of which we’ve used, I think, largely to self-medicate, stretching egos to cover unhappiness rather than dealing with it, and as the misery, untreated, expands, so does its cover. We’re smarter because of the connectivity, but I don’t know that it’s put us in a better mood.

Coupland is much more sanguine than I am about it all. He’s in a better mood. An excerpt:

If someone time travelled from 1990 (let alone from 1900) to 2015 and was asked to describe the difference between then and now, they might report back: “Well, people don’t use light bulbs any more; they use these things called LED lights, which I guess save energy, but the light they cast is cold. What else? Teenagers seem to no longer have acne or cavities, cars are much quieter, but the weirdest thing is that everyone everywhere is looking at little pieces of glass they’re holding in their hands, and people everywhere have tiny earphones in their ears. And if you do find someone without a piece of glass or earphones, their faces have this pained expression as if to say, ‘Where is my little piece of glass?’ What could possibly be in or on that little piece of glass that could so completely dominate a species in one generation?”

 . . . 

To pull back a step or two; as a species we ought to congratulate ourselves. In just a quarter of a century we have completely rewritten the menu of possible human moods, and quite possibly for the better. Psychopharmacology, combined with the neural reconfiguration generated by extended internet usage, has turned human behaviour into something inexplicable to someone from the not too distant past. We forget this so easily. Until Prozac came out in 1987, the only mood-altering options were mid-century: booze, pot and whatever MGM fed Judy Garland to keep her vibrating for three decades. The Prozac ripple was enormous . . .•


When I post this quote from 1981’s My Dinner with Andre, I don’t know if I should attribute it to Andre Gregory or “Andre Gregory.” Either way, the character’s fear seems more pressing now, though for some, it’s the dream. The passage:

I think it’s quite possible that the 1960s represented the last burst of the human being before he was extinguished, and that this is the beginning of the rest of the future now, that from now on there will simply be all these robots walking around, feeling nothing, thinking nothing, and there’ll be nobody left almost to remind them that there once was a species called a human being, with feelings and thoughts, and that history and memory are right now being erased and soon nobody will really remember that life existed on the planet.•


Marshall McLuhan was right, for the most part. 

The Canadian theorist saw Frankenstein awakening from the operating table before others did, so the messenger was often mistaken for the monster. But he was neither Dr. Victor nor his charged charge, just an observer with a keen eye, one who could recognize patterns and who realized humans might not be alone forever in that talent. Excerpts follow from two 1960s pieces that explore his ideas. The first is from artist-writer Richard Kostelanetz‘s 1967 New York Times article “Understanding McLuhan (In Part)” and the other from John Brooks’ 1968 New Yorker piece “Xerox Xerox Xerox Xerox.”

____________________________

Kostelanetz’s opening:

Marshall McLuhan, one of the most acclaimed, most controversial and certainly most talked-about of contemporary intellectuals, displays little of the stuff of which prophets are made. Tall, thin, middle-aged and graying, he has a face of such meager individual character that it is difficult to remember exactly what he looks like; different photographs of him rarely seem to capture the same man.

By trade, he is a professor of English at St. Michael’s College, the Roman Catholic unit of the University of Toronto. Except for a seminar called “Communication,” the courses he teaches are the standard fare of Mod. Lit. and Crit., and around the university he has hardly been a celebrity. One young woman now in Toronto publishing remembers that a decade ago, “McLuhan was a bit of a campus joke.” Even now, only a few of his graduate students seem familiar with his studies of the impact of communications media on civilization–those famous books that have excited so many outside Toronto.

McLuhan’s two major works, The Gutenberg Galaxy (1962) and Understanding Media (1964), have won an astonishing variety of admirers. General Electric, I.B.M. and Bell Telephone have all had him address their top executives; so have the publishers of America’s largest magazines. The composer John Cage made a pilgrimage to Toronto especially to pay homage to McLuhan, and the critic Susan Sontag has praised his “grasp on the texture of contemporary reality.”

He has a number of eminent and vehement detractors, too. The critic Dwight Macdonald calls McLuhan’s books “impure nonsense, nonsense adulterated by sense.” Leslie Fiedler wrote in Partisan Review: “Marshall McLuhan . . . continually risks sounding like the body-fluids man in Doctor Strangelove.”

Still the McLuhan movement rolls on.•

____________________________

From Brooks:

In the opinion of some commentators, what has happened so far is only the first phase of a kind of revolution in graphics. “Xerography is bringing a reign of terror into the world of publishing, because it
means that every reader can become both author and publisher,” the Canadian sage Marshall McLuhan wrote in the spring, 1966, issue of the American Scholar. “Authorship and readership alike can become production-oriented under xerography.… Xerography is electricity invading the world of typography, and it means a total revolution in this old sphere.” Even allowing for McLuhan’s erratic ebullience (“I change my opinions daily,” he once confessed), he seems to have got his teeth into something here. Various magazine articles have predicted nothing less than the disappearance of the book as it now exists, and pictured the library of the future as a sort of monster computer capable of storing and retrieving the contents of books electronically and xerographically. The “books” in such a library would be tiny chips of computer film — “editions of one.” Everyone agrees that such a library is still some time away. (But not so far away as to preclude a wary reaction from forehanded publishers. Beginning late in 1966, the long-familiar “all rights reserved” rigmarole on the copyright page of all books published by Harcourt, Brace & World was altered to read, a bit spookily, “All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information   storage and retrieval system …” Other publishers quickly followed the example.) One of the nearest approaches to it in the late sixties was the Xerox subsidiary University Microfilms, which could, and did, enlarge its microfilms of out-of-print books and print them as attractive and highly legible paperback volumes, at a cost to the customer of four cents a page; in cases where the book was covered by copyright, the firm paid a royalty to the author on each copy produced. But the time when almost anyone can make his own copy of a published book at lower than the market price is not some years away; it is now. All that the amateur publisher needs is access to a Xerox machine and a small offset printing press. One of the lesser but still important attributes of xerography is its ability to make master copies for use on offset presses, and make them much more cheaply and quickly than was previously possible. According to Irwin Karp, counsel to the Authors League of America, an edition of fifty copies of any printed book could in 1967 be handsomely “published” (minus the binding) by this combination of technologies in a matter of minutes at a cost of about eight-tenths of a cent per page, and less than that if the edition was larger. A teacher wishing to distribute to a class of fifty students the contents of a sixty-four-page book of poetry selling for three dollars and seventy-five cents could do so, if he were disposed to ignore the copyright laws, at a cost of slightly over fifty cents per copy.

The danger in the new technology, authors and publishers have contended, is that in doing away with the book it may do away with them, and thus with writing itself. Herbert S. Bailey, Jr., director of Princeton University Press, wrote in the Saturday Review of a scholar friend of his who has cancelled all his subscriptions to scholarly journals; instead, he now scans their tables of contents at his public library and makes copies of the articles that interest him. Bailey commented, “If all scholars followed [this] practice, there would be no scholarly journals.” Beginning in the middle sixties, Congress has been considering a revision of the copyright laws — the first since 1909. At the hearings, a committee representing the National Education Association and a clutch of other education groups argued firmly and persuasively that if education is to keep up with our national growth, the present copyright law and the fair-use doctrine should be liberalized for scholastic purposes. The authors and publishers, not surprisingly, opposed such liberalization, insisting that any extension of existing rights would tend to deprive them of their livelihoods to some degree now, and to a far greater degree in the uncharted xerographic future. A bill that was approved in 1967 by the House Judiciary Committee seemed to represent a victory for them, since it explicitly set forth the fair-use doctrine and contained no educational-copying exemption. But the final outcome of the struggle was still uncertain late in 1968. McLuhan, for one, was convinced that all efforts to preserve the old forms of author protection represent backward thinking and are doomed to failure (or, anyway, he was convinced the day he wrote his American Scholar article). “There is no possible protection from technology except by technology,” he wrote. “When you create a new environment with one phase of technology, you have to create an anti-environment with the next.” But authors are seldom good at technology, and probably do not flourish in anti-environments.•
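
Brooks’ per-copy arithmetic, incidentally, checks out, using only the figures in the passage:

$$64 \text{ pages} \times \$0.008 \text{ per page} = \$0.512$$

Slightly over fifty cents per copy, against a cover price of three dollars and seventy-five cents.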


Grantland has many fine writers and reporters, but the twin revelations for me have been Molly Lambert and Alex Pappademas, whom I enjoy reading as much as anyone working at any American publication. The funny thing is, I’m not much into pop culture, which is ostensibly their beat. But as with the best of journalists, the subject they cover most directly is merely an entry into many other ones, long walks that end up in big worlds. 

Excerpts follow from a recent piece by each. In “Start-up Costs,” a look at Silicon Valley and Halt and Catch Fire, Pappademas circles back to Douglas Coupland’s 1995 novel, Microserfs, a meditation on the reimagined office space written just before Silicon Valley became fully a brand as well as a land. In Lambert’s “Life Finds a Way,” the release of Jurassic World occasions an exploration of the enduring beauty of decommissioned theme parks–dinosaurs in and of themselves–at the tail end of an entropic state. Both pieces are concerned with an imposition on the natural order of things by capitalism.

_______________________________

From Pappademas:

Microserfs hit stores in 1995, which turned out to be a pretty big year for Net-this and Net-that. Yahoo, Amazon, and Craigslist were founded; JavaScript, the MP3 compression standard, cost-per-click and cost-per-impression advertising, the first “wiki” site, and the Internet Explorer browser were introduced. Netscape went public; Bill Gates wrote the infamous “Internet Tidal Wave” memo to Microsoft executives, proclaiming in the course of 5,000-plus words that the Internet was “the most important single development to come along since the IBM PC was introduced in 1981.” Meanwhile, at any time between May and September, you could walk into a multiplex not yet driven out of business by Netflix and watch a futuristic thriller like Hackers or Johnny Mnemonic or Virtuosity or The Net, movies that capitalized on the culture’s tech obsession as if it were a dance craze, spinning (mostly absurd) visions of the (invariably sinister) ways technology would soon pervade our lives. Microserfs isn’t as hysterical as those movies, and its vision of the coming world is much brighter, but in its own way it’s just as wrongheaded and nailed-to-its-context.

“What is the search for the next great compelling application,” Daniel asks at one point, “but a search for the human identity?” Microserfs argues that the entrepreneurial fantasy of ditching a big corporation to work at a cool start-up with your friends can actually be part of that search — that there’s a way to reinvent work in your own image and according to your own values, that you can find the same transcendence within the sphere of commerce that the slackers in Coupland’s own Generation X eschewed McJobs in order to chase. The notion that cutting the corporate cord to work for a start-up often just means busting out of a cubicle in order to shackle oneself to a laptop in a slightly funkier room goes unexamined; the possibility that work within a capitalist system, no matter how creative and freeform and unlike what your parents did, might be fundamentally incompatible with self-actualization and spiritual fulfillment is not on the table.•

_______________________________

Lambert’s opening:

I drove out to the abandoned amusement park originally called Jazzland during a trip to New Orleans earlier this year. Jazzland opened in 2000, was rebranded as Six Flags New Orleans in 2003, and was damaged beyond repair a decade ago by the flooding caused by Hurricane Katrina. But in the years since it’s been closed, it has undergone a rebirth as a filming location. It serves as the setting for the new Jurassic World. As I approached the former Jazzland by car, a large roller coaster arced into view. The park, just off Interstate 10, was built on muddy swampland. On urban-exploring websites, I have read accounts by people who’ve sneaked into the park saying it’s overrun with alligators and snakes.

After the natural disaster the area wasted no time in returning to its primeval state: a genuine Jurassic World. It was in the Jurassic era that crocodylians became aquatic animals, beginning to resemble the alligators currently populating Jazzland. I saw birds of prey circling over the theme park as I reached the front gates, only to be told in no uncertain terms that the site is closed to outsiders. I pleaded with the security guard that I was a journalist just looking for a location manager to talk to, but was forbidden from driving past the very first entrance into the parking lot. I could see the ticket stands and Ferris wheel, but accepted my fate and drove away, knowing I’d have to wait for Jurassic World to see Jazzland. As I drove off the premises, I could still glimpse the tops of the coasters and Ferris wheel, obscured by trees.

I am fascinated by theme parks that return to nature, since the idea of a theme park is such an imposition on nature to begin with — an obsessively ordered attempt to overrule reality by providing an alternate, superior dimension.•

Tags: ,

Olaf Stampf, who always conducts smart interviews for Spiegel, has a Q&A with Johann-Dietrich Wörner, the new director general of the European Space Agency. Two quick excerpts follow, one about a moon colony and the other about the potential of a manned Mars voyage.

______________________________

Spiegel:

Which celestial body would you like to travel to most of all?

Johann-Dietrich Wörner:

My dream would be to fly to the moon and build permanent structures, using the raw materials available there. For instance, regolith, or moon dust, could be used to make a form of concrete. Using 3-D printers, we could build all kinds of things with that moon concrete — houses, streets and observatories, for example.

______________________________

Spiegel:

Wouldn’t it be a much more exciting challenge to hazard a joint, manned flight to Mars?

Johann-Dietrich Wörner:

Man will not give up the dream of walking on Mars, but it won’t happen until at least 2050. The challenges are too great, and we don’t have the technologies yet to complete this vast project. Most of all, a trip to Mars would take much too long today. It would be irresponsible, not just from a scientific standpoint, to send astronauts to the desert planet if they could only return after more than two years.•

Tags: ,

Rachel Armstrong, a medical doctor who became an architect, wants to combine her twin passions, believing buildings can be created not only from plastics recovered from our waterways but also from biological materials. From Christopher Hume of the Toronto Star:

She also imagines using living organisms such as bacteria, algae and jellyfish as building materials. If that sounds far-fetched, consider the BIQ (Bio Intelligent Quotient) Building in Hamburg. Its windows are filled with water containing live algae that are fed nutrients. When the sun comes out, the micro-organisms reproduce, raising the temperature of the water. BIQ residents say they love their new digs. It helps that they have no heating bills.

Armstrong then described how objects can be made of plastic dredged from the oceans. It could, she suggested, be a new source of material as well as a way to clean degraded waterways. Her basic desire is to make machinery more biological and to unravel the machinery behind the biological. That means figuring out how bacteria talk to bacteria, how algae “communicate.” This isn’t new, of course, but the fusion draws closer all the time.

As that happens, she argues, “consumers can become producers.” In the meantime, the search for “evidence-based truth-seeking systems” continues.

Armstrong, who began her professional life as a doctor, credits her interest in architecture to the time she spent at a leper colony in India in the early ’90s. “What I saw was a different way of life,” she recalls. “I realized we need a more integrated way of being and living so we are at one with our surroundings.”•

Tags: ,

The Confederate flag is an American swastika. In calling for its retirement in the aftermath of the horrific Charleston church massacre, Ta-Nehisi Coates of the Atlantic reminds us that the impulse to enslave was initially driven by plunder. Of course, the American flag itself has a similar history, it being the chief symbol of the other U.S. holocaust–the plight of the Native Americans–which was a land grab drenched in blood, first conducted under flags of colonialist nations and then our own.

Coates’ opening:

Last night, Dylann Roof walked into a Charleston church, sat for an hour, and then killed nine people. Roof’s crime cannot be divorced from the ideology of white supremacy which long animated his state nor from its potent symbol—the Confederate flag. Visitors to Charleston have long been treated to South Carolina’s attempt to clean its history and depict its secession as something other than a war to guarantee the enslavement of the majority of its residents. This notion is belied by any serious interrogation of the Civil War and the primary documents of its instigators. Yet the Confederate battle flag—the flag of Dylann Roof—still flies on the Capitol grounds in Columbia.

The Confederate flag’s defenders often claim it represents “heritage not hate.” I agree—the heritage of White Supremacy was not so much birthed by hate as by the impulse toward plunder. Dylann Roof plundered nine different bodies last night, plundered nine different families of an original member, plundered nine different communities of a singular member. An entire people are poorer for his action. The flag that Roof embraced, which many South Carolinians embrace, does not stand in opposition to this act—it endorses it.•

Tags: ,
