
Drones aren’t a unilateral technology, and eventually those pilotless planes are going to be aimed at the U.S. They’re a pretty attractive weapon for terrorists, capable of carrying a payload of explosives without a need for human recruits or faked passports. That development, of course, will lead to an American military industry protecting us from terror drones, tracking and destroying them. We’re just at the beginning, and people-less deliveries of books and pizzas will have that dark counterpart. The opening of “The Next Drone Wars,” a Foreign Policy essay by Sarah Kreps and Micah Zenko:

“During World War II, a top commander in what was then the U.S. Army Air Forces, General Henry ‘Hap’ Arnold, developed a new way to attack U-boat stations and other heavily fortified German positions: he turned old B-17 and B-24 bombers into remotely piloted aircraft and loaded them with explosives or chemical weapons. ‘If you can get mechanical machines to do this,’ Arnold wrote in a memo to his staff, ‘you are saving lives at the outset.’ The missions had a poor track record, but that did not deter Arnold from declaring in 1945 that ‘the next war may be fought by airplanes with no men in them at all.’

Nearly seven decades later, Arnold’s prophecy is slowly being realized: armed drones are starting to rule the skies. So far, the United States has had a relative monopoly over the use of such drones, but it cannot count on maintaining that for much longer. Other states are quickly catching up. And although these new weapons will not transform the international system as fundamentally as did the proliferation of nuclear weapons and ballistic missiles, they could still be used in ways that are highly destabilizing and deadly.

Countries will not be deterred from launching drone attacks simply because an adversary has drones in its arsenal, too. If anything, the inherent advantages of drones — most of all, not placing pilots or ground forces at risk of being killed or captured — have lowered the threshold for the use of force. Spurred by the United States’ example, other countries are likely to threaten or conduct drone strikes in ways that are harmful to U.S. interests, whether by provoking regional adversaries or targeting domestic enemies.”


At some point, our descendants will look back in shock at the sort of diet most Americans had during this era. GM foods probably shouldn’t be feared any more than what we’re eating now. From a look at the future of genetically engineered foods at Kurzweil AI by Daniel Berleant, author of The Human Race to the Future:

“Beans don’t taste as good as meat to many people. Yet there is no reason they can’t be engineered to taste like small chicken nuggets. Processed fungus protein called mycoprotein, sold in grocery stores, tastes like chicken already. But why stop there? Potatoes with small hamburgers in the middle sounds good — let’s call them ‘hamburgatoes.’

There is no reason hamburgatoes can’t be grown once genetic engineering gets further along. Carrots are crunchy, as are potato chips. So why not grow carrots that taste like potato chips, but retain the nutritional advantages of traditional carrots? Kids would want to eat more veggies.

Sunflower seeds come in packages at many supermarkets, but the ones with the seeds still in their shells seem less popular as snacks because they are harder to eat. You have to bite off the shells to get to the rather small seed inside.

Yet the sunflower seed market would almost certainly grow dramatically if the seeds were ten times larger or more. Imagine eating an enormous sunflower seed the size of a small egg … hefting its weight in the palm of your hand … cracking off its shell to reveal the rich, tasty meat within … and finally sinking your teeth in to savor its nutritious and distinctive flavor. A future sunflower could produce just a few seeds like that, instead of dozens and dozens of smaller seeds like the sunflowers they used to grow back around 2020.”


In the big picture, tearing down a system where power to disseminate information was in the hands of the relative few is a good thing, but revolutions are rarely bloodless. The death of print was in the works for decades, but no one has yet figured out the new landscape. Two quotes:

From Stewart Brand in 1972:

“One popular new feature on the Net is AI’s Associated Press service. From anywhere on the Net you can log in and get the news that’s coming live over the wire or ask for all the items on a particular subject that have come in during the last 24 hours. Project that to household terminals, and so much for newspapers (in present form). Since huge quantities of information can be computer-digitalized and transmitted, music researchers could, for example, swap records over the Net with ‘essentially perfect fidelity.’ So much for record stores (in present form).”

From Michael Wolff in 2014:

“[Politico] did usurp The Washington Post, so they took what was essentially a 2 billion dollar business and replaced it with a business that does 25, 30 million dollars in revenue. So that’s kind of the paradigm. You take these businesses that were real businesses, incredibly valuable businesses, and you create that same function with businesses that are essentially trivial.”


Quantification means that third parties are granted access behind company firewalls. Example: a vending machine business can tell remotely when more Sprite is required, but that remote link is also a backdoor for hackers. When things are “smart,” when they have information, they pose a threat. Just like people. From Nicole Perlroth at the New York Times:

“Companies have always needed to be diligent in keeping ahead of hackers — email and leaky employee devices are an old problem — but the situation has grown increasingly complex and urgent as countless third parties are granted remote access to corporate systems. This access comes through software controlling all kinds of services a company needs: heating, ventilation and air-conditioning; billing, expense and human-resources management systems; graphics and data analytics functions; health insurance providers; and even vending machines.

Break into one system, and you have a chance to break into them all.

‘We constantly run into situations where outside service providers connected remotely have the keys to the castle,’ said Vincent Berk, chief executive of FlowTraq, a network security firm.”


In video-game parlance, members of the Heaven’s Gate cult who committed suicide 17 years ago, hoping in their collective delusion that their well-calibrated deaths would enable them to hitch a ride on Hale-Bopp’s tail, weren’t choosing “Game Over” but trying to get to the “Next Level.” The gaming lingo is particularly apt because those shrouded, Nike-wearing true believers earned a living (until their dying) in the nascent field of website design. From Claire Evans at Vice:

“On March 26th, 1997, 39 people in matching black sweatsuits and Nike sneakers were found dead in a rented mansion in a San Diego suburb. They were members of a religious group called Heaven’s Gate, and they had committed suicide, cleanly and methodically, by ingesting large doses of phenobarbital and vodka. In each of their pockets, authorities found a five-dollar bill and three quarters—interplanetary toll fare.

Their motive was to hitch a ride to the ‘Next Level’ on a heavenly spacecraft hidden behind the rapidly-approaching Hale-Bopp Comet. They didn’t believe they were committing suicide. Instead, they were abandoning fallible physical ‘vehicles’ in order to progress to the ‘Next Level’ above human, a commitment they’d honed while living in isolated compounds in Salt Lake City, Denver, and the Dallas-Fort Worth area, before moving to their final resting place in Southern California.

Beyond the spectacle of their exit from this world, what’s most interesting about Heaven’s Gate, looking back, is their complicated relationship to technology. While we remember the Nike sneakers, the purple shrouds, and the bunk-beds meticulously lined with bodies, what most people don’t know about these 38 devotees and their leader, Marshall Applewhite (known to them as ‘Bo’ or ‘Do’), is that they paid for their lifestyle by building websites.

Yes, Heaven’s Gate were web designers. The group ran a firm called Higher Source, and counted the San Diego Polo Club, a local topiary company, and a Christian music store among their clients. In the heady early days of the World Wide Web, this crew of androgynous roommates in matching close-cropped haircuts and baggy, modest clothes practiced what they called ‘Higher Source-computer programming’ in Java, Visual Basic, SQL, and C++.”


“Your only chance to evacuate is to leave with us”:


From film blogger Justin Bozung’s interview with mime Dan Richter, a passage about how he came to be cast as “Moon-Watcher” in 2001: A Space Odyssey and how he prepared for the role:

Justin Bozung:

So for those that haven’t read the book, could you tell me how you came to work on the film with Stanley Kubrick?

Dan Richter: 

I had a friend at the time, a book publisher named Mike Wilson, and he was working with Arthur C. Clarke on a series of books about diving. Arthur and Stanley had been discussing the ‘Dawn Of Man’ sequence because they had almost finished the live-action shooting on 2001, but they still didn’t have an opening. They had tried a few different things but nothing seemed to work right. They decided that maybe they should talk to a mime about some of their ideas. Arthur mentioned this to Mike Wilson, and because Mike and I had been friends, he said ‘I know a mime. His name is Dan Richter, and he’s great.’

So consequently, I was asked to go and meet with Stanley at MGM’s Borehamwood Studios outside of London. I figured he’d pick my brain, and I’d offer some suggestions. So I drove up to see him and we started to talk. Stanley started to explain to me some of the ideas they had had for the sequence that didn’t work. Thinking about it, I didn’t see his problems as having anything to do with acting, but rather as something to do with movement.

The ‘Dawn Of Man’ was for the opening of the film. The problem with the opening of a film or a play or a book is that you have to go and get your audience. You have a very short amount of time to get the audience involved, literally seconds or minutes. So it was important that we made the man-apes come to life.

Justin Bozung:

So you didn’t really go into the meeting thinking you were going in for a job interview with Kubrick?

Dan Richter: 

I truthfully thought I was just going in to talk to him. I thought Stanley was just going to pick my brain, and I thought I’d just offer up suggestions to him in regards to how a mime could be of assistance to him in terms of solving his problems for the sequence. I didn’t know I was auditioning for him. I went in there acting cocky. I wasn’t worried about saying anything wrong to Stanley, because I wasn’t looking for a job. I was busy with other work in London at the time. I just thought I was there to give Stanley some pointers or whatever. I thought I was meeting with Stanley to explain mime movement to him.

Stanley and I hit it off right away, and I think he liked my approach. After I was done talking, Stanley asked me to show him what I was talking about. He wanted to see how to move as I had explained it to him. Then he offered me the job. So I told Stanley I’d have to do all of the choreography. I told him I’d help develop all of the man-apes’ costumes. The costumes initially were completely unworkable. You couldn’t move in them. Then I told Stanley that I’d cast and then train the people myself. I didn’t think he’d actually agree to my terms, but he said ‘yes’ to everything. So suddenly, I found myself with an immense job, and it was a job creating something that had never been done before. I was given an office, a rehearsal studio, assistants, and my name on a door, and I was just this cocky kid. I had to deliver.

I mean, I had no ideas or plans to play ‘Moon-Watcher’ in the film. I thought I was there to just help with the research and the choreography of the actor’s movements. I never had any notions that Stanley would want me to play ‘Moon-Watcher’ in 2001.

Justin Bozung:

Then there was the enormous amount of research you did on apes.

Dan Richter:

I spent a great deal of time researching at the London Museum of Natural History. I was granted access to their back stacks, and got to examine and study various skeletons and bones in their collection and all the early journals and research work the museum had acquired to that point. I spent a lot of time talking to scientists who specialized in the Australopithecus era, which, of course, was the era we were planning to set the opening of the film in.

I also went to various zoos around England. With the zoo research I was studying the apes to develop a choreography, to see how they interacted with each other in a tribe: how they moved, how their bodies reacted. So I began to study gorillas, chimpanzees, and gibbons. Before my first trip Stanley had handed me an 8mm Bolex camera and told me to film everything. So I just went and filmed everything I observed. I was looking for the truth of it; I needed to know how they interacted with each other.

As we got closer to shooting, we were having a difficult time figuring out exactly how the man-apes should move. When I was at the zoo I filmed this gibbon in slow motion coming down a tree, and once he got down he began to just walk around. When I went back and watched the film I discovered the specific way in which the gibbon walked: with its legs slightly bent and its knees pointing outward. Then, with the chimps, we decided it would be best to move our hands and arms in the same way that they moved theirs, which was at a particular angle as well.”


Ethics, let alone laws, can’t keep up with the accelerating pace of science and technology. Growth is exponential and often unexpected, and different nations have varying rules of engagement. It’s difficult to come up with any universal policy. Biotech, in particular, will be messy and dangerous. From a post about the implications of synthetic yeast by Julian Savulescu at Practical Ethics:

“Back in 2010, I blogged about Craig Venter’s creation of the first synthetic organism, Synthia, a bacterium.

Now, in 2014, the next step has been made by a team at Johns Hopkins University: the use of synthetic biology in yeast, which, whilst still a simple organism, has a similar cell structure to humans (and other more complex organisms): a nucleus, chromosomes, and organelles. The engineered yeast has been reproduced through more than 100 generations, passing on its new DNA.

The pace is breathtaking. Moore’s law describes a phenomenon in computing, where computer capacity (so far) doubles every two years. Kurzweil uses Moore’s law to predict the Singularity: a state where humans no longer control, or even comprehend, the progress that technology continues to make.
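The doubling claim quoted above compounds quickly. A minimal, purely illustrative sketch (the two-year doubling period is the rule of thumb from the text, not a measured constant):

```python
def capacity_growth(years, doubling_period=2.0):
    """Relative capacity after `years`, assuming capacity doubles
    every `doubling_period` years (the Moore's-law rule of thumb)."""
    return 2 ** (years / doubling_period)

# A decade of two-year doublings is a 32x increase.
print(capacity_growth(10))  # 32.0
```

Ten years of that curve turns one unit of capacity into thirty-two, which is the scale of change the essay argues ethics and law keep failing to anticipate.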

It’s difficult to measure scientific progress in the same way as computer power, but it’s clear that leaps in progress are now measured in years, not decades. Yet still we wait until technology is upon us before we act.

Consider a parallel technology: cloning. The earliest intimations of cloning were perhaps in 1885, when Hans Driesch successfully divided sea urchin embryos. Yet it was not until Dolly the sheep was cloned in 1996 that we began to become concerned and to think deeply about human cloning. A moratorium on human cloning research was put in place in the US, and a ban in Europe. In industry, cloned animals are used in farming already, yet the EC and UK governments are apparently at loggerheads about whether to allow this to continue.

Synthetic biology, I believe, has far greater potential than straightforward cloning. But this potential includes great harms as well as great benefits.”


Ray Bradbury reportedly wrote Fahrenheit 451 on coin-operated typewriters in the early 1950s. As the San Francisco Chronicle pointed out, coin-operated computers became a thing three decades later in the Bay Area. No dystopian masterpieces seem to have emerged, but it was an interesting experiment nonetheless. The opening of the Chronicle article:

“Patrons of the San Francisco Civic Center library may now buy time on a coin-operated computer–a $1 token pays for 20 minutes–to help figure their household budget, manage a small business or learn to type.

The computer comes with an instruction book written on a third-grade level.

The library’s first Franklin Ace 1000 computer was wheeled into the main library by Kim Cohan, its 18-year-old marketing entrepreneur, who said he has ‘taken an expensive piece of equipment and brought it to a level where it’s affordable for a large number of people.’

Cohan has taken a $4500 computer and wired it to a coin box and a printer. Librarians will sell the $1 tokens–which are restamped slot machine tokens–and take reservations from the public for up to an hour on the computer.”
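The article’s own numbers imply how long the machine had to run before it paid for itself. A rough sketch using only the figures quoted above, and ignoring the cost of the coin box, printer, tokens, and library staffing (all omissions are assumptions):

```python
TOKEN_PRICE = 1.00        # dollars per token, per the article
MINUTES_PER_TOKEN = 20
COMPUTER_COST = 4500.00   # dollars, per the article

tokens_to_break_even = COMPUTER_COST / TOKEN_PRICE
hours_to_break_even = tokens_to_break_even * MINUTES_PER_TOKEN / 60

print(tokens_to_break_even)  # 4500.0 tokens
print(hours_to_break_even)   # 1500.0 hours of paid use
```

Fifteen hundred hours of fully booked, dollar-a-session use just to recover the hardware cost suggests why the experiment stayed an experiment.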

Maybe we could colonize Mars with a relatively small community of pioneers. Not so another solar system. In that case, the mass would be critical. From Sarah Fecht at Popular Mechanics:

“Back in 2002, John Moore, an anthropologist at the University of Florida, calculated that a starship could leave Earth with 150 passengers on a 2000-year pilgrimage to another solar system, and upon arrival, the descendants of the original crew could colonize a new world there—as long as everyone was careful not to inbreed along the way.

It was a valiant attempt to solve a thorny question about the future of humans in space. The nearest star systems—such as our nearest neighbor, Proxima Centauri, which is 4.2 light-years from home—are so far away that reaching them would require a generational starship. Entire generations of people would be born, live, and die before the ship reached its destination. This brings up the question of how many people you need to send on a hypothetical interstellar mission to sustain sufficient genetic diversity. And a new study sets the bar much higher than Moore’s 150 people.

According to Portland State University anthropologist Cameron Smith, any such starship would have to carry a minimum of 10,000 people to secure the success of the endeavor. And a starting population of 40,000 would be even better, in case a large percentage of the population died during the journey.” (Thanks Browser.)
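The scale of the problem falls out of the figures quoted above. A back-of-the-envelope sketch, where the 25-year generation length is a hypothetical assumption of mine, not a number from the article:

```python
DISTANCE_LY = 4.2      # Proxima Centauri, light-years (from the article)
TRIP_YEARS = 2000      # Moore's assumed voyage duration (from the article)
GENERATION_YEARS = 25  # hypothetical average generation length

# Required cruise speed as a fraction of light speed.
speed_fraction_c = DISTANCE_LY / TRIP_YEARS

# How many generations would live and die aboard.
generations = TRIP_YEARS / GENERATION_YEARS

print(f"{speed_fraction_c:.4f} c")  # 0.0021 c
print(generations)                  # 80.0
```

Even a 2000-year crossing demands sustained travel at roughly a fifth of a percent of light speed, and some eighty generations would be born and buried en route, which is why genetic diversity, not propulsion alone, sets the passenger count.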


While invertebrate brains may not be as complicated as human ones, they’re very complex and likely resemble our processes more than we commonly believe. From Oliver Sacks in the New York Review of Books:

“Where Aplysia has only 20,000 neurons distributed in ganglia throughout its body, an insect may have up to a million nerve cells, all concentrated in one brain, and despite its tiny size may be capable of extraordinary cognitive feats. Thus bees are expert in recognizing different colors, smells, and geometric shapes presented in a laboratory setting, as well as systematic transformations of these. And of course, they show superb expertise in the wild or in our gardens, where they recognize not only the patterns and smells and colors of flowers, but can remember their locations and communicate these to their fellow bees.

It has even been shown, in a highly social species of paper wasp, that individuals can learn and recognize the faces of other wasps. Such face learning has hitherto been described only in mammals; it is fascinating that a cognitive power so specific can be present in insects as well.

We often think of insects as tiny automata—robots with everything built-in and programmed. But it is increasingly evident that insects can remember, learn, think, and communicate in quite rich and unexpected ways. Much of this, doubtless, is built-in—but much, too, seems to depend on individual experience.

Whatever the case with insects, there is an altogether different situation with those geniuses among invertebrates, the cephalopods, consisting of octopuses, cuttlefish, and squid. Here, as a start, the nervous system is much larger—an octopus may have half a billion nerve cells distributed between its brain and its ‘arms’ (a mouse, by comparison, has only 75 to 100 million). There is a remarkable degree of organization in the octopus brain, with dozens of functionally distinct lobes in the brain and similarities to the learning and memory systems of mammals.”


Hell is other people, but if they own a pied-à-terre in Manhattan, you might upgrade them to purgatory.

The virtual advantages of the online world are being leveraged more and more offline, and not just in high-tech ways like 3-D printing. Case in point: Airbnb, which has increased the inventory of lodgings without building a thing. It’s a knowledge share that becomes a physical one. It allows the non-professional to quantify the landscape and take advantage of otherwise hidden opportunities, though there may be some drawbacks. From Jeremy Rifkin’s Los Angeles Times op-ed about the company and the broader sharing economy:

“It’s not difficult to see why the service has soared in value. For a traditional hotel chain to add another room to its inventory, the room must be built or acquired, at a significant cost. Airbnb can add another room to its inventory at almost no cost, since its website is already up and running.

Private enterprises have every incentive to reduce their marginal costs. Doing so means they can increase profits, offer goods and services at a lower price, or both. But now the Internet and other innovations have reduced marginal costs to near zero for some commodities and services, which has left many traditional companies reeling.

The zero marginal cost phenomenon has sowed a path of destruction across the recording and information industries over the last decade, as millions of consumers began to produce and share music, video, news and knowledge with one another on the Internet at near zero marginal cost. This phenomenon has weakened revenues in the music industry, newspaper and publishing fields, and the book publishing industry.

Now, as we are seeing with Airbnb, the phenomenon is crossing over from soft goods in virtual space to physical goods in the brick-and-mortar world.”


It’s not that we’re entering a post-jobs world but one where automation, along with other economic factors, may make for permanently higher unemployment levels. Many types of work will vanish and not everyone will be suited for the new normal. Not all clerks can become nurses. From Tyler Cowen in the New York Times:

“How afraid should workers be of these new technologies? There is reason to be skeptical of the assumption that machines will leave humanity without jobs. After all, history has seen many waves of innovation and automation, and yet as recently as 2000, the rate of unemployment was a mere 4 percent. There are unlimited human wants, so there is always more work to be done. The economic theory of comparative advantage suggests that even unskilled workers can gain from selling their services, thereby liberating the more skilled workers for more productive tasks.

Nonetheless, technologically related unemployment — or, even worse, the phenomenon of people falling out of the labor force altogether because of technology — may prove a tougher problem this time around.

Labor markets just aren’t as flexible these days for workers, especially for men at the bottom end of the skills distribution.”


From John Naughton’s Guardian article about Michael Lewis’ new book which reveals computerized Wall Street chicanery, a passage about how technology, that supposed equalizer, can in fact tip the balance of the digital scales:

“This is a good illustration of one of the central problems that society will have to address in the coming decades: the collision between analogue mindsets and digital realities.

Software is pure ‘thought-stuff.’ The only resource needed to produce it is human intelligence and expertise. This has two implications. The first is that attempting to regulate the things that it creates is like trying to catch quicksilver using a butterfly net.

The Edward Snowden disclosures about the US National Security Agency have revealed how difficult it is to bring this stuff under effective democratic control. Lewis’s account of how high-frequency trader geeks have run rings around the regulators suggests that much the same holds true in civilian life. This technology can easily run out of control.

The second implication is that what one might call the politics of expertise will become much more important. Mastery of these technologies confers enormous power on those who have it. Sed quis custodiet ipsos custodes and all that. So in addition to wondering who will guard the guardians, we may have to start thinking about who is going to guard the geeks.”


In a Guardian piece, Stephen King recalls how two disparate thoughts crashed together in his head, allowing him to create his first novel, Carrie, 40 years ago. The opening:

“While he was going to college my brother Dave worked summers as a janitor at Brunswick High. For part of one summer I worked there, too. One day I was supposed to scrub the rust-stains off the walls in the girls’ shower. I noticed that the showers, unlike those in the boys’ locker room, had chrome U-rings with pink plastic curtains attached.

This memory came back to me one day while I was working in the laundry, and I started seeing the opening scene of a story: girls showering in a locker room where there were no U-rings, pink plastic curtains or privacy. And this one girl starts to have her period. Only she doesn’t know what it is, and the other girls – grossed out, horrified, amused – start pelting her with sanitary napkins … The girl begins to scream. All that blood!

I’d read an article in Life magazine some years before, suggesting that at least some reported poltergeist activity might actually be telekinetic phenomena – telekinesis being the ability to move objects just by thinking about them. There was some evidence to suggest that young people might have such powers, the article said, especially girls in early adolescence, right around the time of their first —

POW! Two unrelated ideas, adolescent cruelty and telekinesis, came together, and I had an idea …”


Ken Jennings, the former Jeopardy! champ who says Trebek smells like “knowledge and Old Spice,” explains in a Reddit AMA why some people are really good at the game. Amusingly (and appropriately), his username for the Q&A session was “watsonsbitch.” His answer:


Question:

How do you memorize the trivia and facts you’ve learned over the years?

Ken Jennings:

“Almost without exception, the know-it-alls you see on Jeopardy are not Rain Man-style savants. They don’t sit at home memorizing the almanac. They are just interested in things. Crucially, and bizarrely, THEY ARE INTERESTED IN EVERYTHING.

Think about how easy it is to remember a fact when you’re interested in the subject. You don’t have to actively study lyrics of songs you like or names of players on your favorite team or characters on your TV show (unless it’s Game of Thrones, I guess). That stuff just sticks. We are wired to remember stuff effortlessly…if we want to.

So for better or for worse, it’s mostly a question of motivation. If you can convince yourself that a subject is interesting, facts will start to stay. That’s my theory anyway.”


From a really fun TED interview by Brooke Borel with former NFL punter Chris Kluwe, journalist David Epstein (The Sports Gene), and scientist Cynthia Bir about the intersection of science and sports, thoughts from the punter about augmented and virtual reality:


Question:

With Google Glass, how do you see that technology changing the landscape of football—and other sports—in the future?

Chris Kluwe: 

“I think it will initially shift the viewing perspective. People will now have another way to watch the game—from the athlete’s perspective. It’ll no longer be just the overhead cameras and the sweeping Skycam— you’ll actually be able to see what your favorite player did on the play from his or her perspective. That’s something that we’ve never really had up to this point.

From there, it leads to people becoming more comfortable with the idea of things like augmented reality and virtual reality, which leads into that being adopted more and more into everyday life. In the sporting world, that means augmented reality being adopted into the actual sports themselves. For football, you could have a projector that displays your next series of plays on your helmet as you’re running back to the huddle. Or something that highlights the receiver, or warns you if a guy is coming off your blind side to tackle the quarterback, for instance.

You see this a lot in the military, on displays in fighter jets, and I think they’re working on actual ground-based troop systems as well. There’s this filter of information between you and the world, an additional layer of information that you can use to enhance your own senses. I think we’re at that point right now where not a lot of people realize that, just like not a lot of people realized that the Internet was going to be something that spread and covered the entire world, or that cell phones would be as ubiquitous as they are. No one even thinks of not having a cell phone, but there was a point when cell phones were big, clunky, briefcase-sized things that only executives on Wall Street had.”


The United Nations Intergovernmental Panel on Climate Change report released this week was, wow, dreadful. The New Yorker blog posted an excerpt from Elizabeth Kolbert’s 2013 reaction to an early, leaked version of the findings. An excerpt about the death of diversity, which may include you and me, or the next generations of us:

“As bad as things look for humans, the prognosis for non-humans is, in many ways, worse. Under all the scenarios that the I.P.C.C. panel considered, including an implausible one in which the world imposes drastic limits on carbon emissions right now, a ‘large fraction’ of terrestrial and freshwater species face elevated extinction risks. Under the most likely scenarios, many species ‘will not be able to move fast enough during the 21st century to track suitable climates’, and there is a chance that some ecosystems, including the Arctic tundra and the Amazon rainforest, will undergo ‘abrupt and irreversible change.’ Forests are already dying back in some parts of the world because of warming-related stress, and more forests are likely to follow suit as temperatures continue to rise. As Grist put it in a summary of the findings, ‘Animal Planet will get really boring.’

As it happens, the very same day the I.P.C.C. report was leaked, President Obama issued an executive order titled ‘Preparing the United States for the Impacts of Climate Change.’ Among other things, it established a new Council on Climate Preparedness and Resilience, to be co-chaired by the head of the White House Council on Environmental Quality, the head of the Office of Science and Technology Policy, and—suggestively enough—the Assistant to the President for Homeland Security and Counterterrorism.

Promoting ‘preparedness’ is doubtless a good idea. As the executive order notes, climate impacts—which include, but are not limited to, heat waves, heavier downpours, and an increase in the number and intensity of wildfires—are ‘already affecting communities, natural resources, ecosystems, economies, and public health across the Nation.’ However, one of the dangers of this enterprise is that it tends to presuppose, in a Boy Scout-ish sort of way, that ‘preparedness’ is possible.”


In an Aeon essay “Russia’s Sacred Land,” Peter Turchin, father of Cliodynamics, looks at psychology on a national scale, examining its irrational yet evolutionary underpinnings. In doing so, he downplays the role of Putin in the annexation of Crimea. An excerpt:

“States often behave in an opportunistic manner, grabbing real estate when they can and giving it up when the cost of holding it becomes too great. In 1732, Russia returned a large chunk of Persian territory that Peter the Great had conquered in the previous decade. In return, the Persians entered an alliance with the Russians against the Ottoman Empire. This kind of behaviour is well-described by realism. However, most states, historical and modern, also put some territory into a special category, one that is not subject to rational geopolitical calculation. Such land is ‘sacred’. It must be held at all costs.

Here we find an obvious manifestation of the bourgeois strategy in the hawk-dove game. States and populations that are willing to escalate conflict as far as necessary in defence of their sacred lands are more likely to persist in the international arena. Those that treat their core territory in a rational manner – forfeiting it in accordance with strategic imperatives, as, for example, several Germanic tribes did repeatedly during the Migration Period – get wiped out. As a result, we observe the coevolution of geopolitics and what the anthropologist Scott Atran has identified as ‘sacred values’. Geopolitical assets acquire an aura of sanctity.”
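The hawk-dove logic Turchin invokes can be made concrete. Below is a minimal sketch in Python of the classic hawk-dove game extended with the ‘bourgeois’ strategy — fight as the owner, yield as the intruder. The payoff values V (value of the territory) and C (cost of escalated conflict) are illustrative choices of mine, not Turchin’s; the point is that when C exceeds V, bourgeois earns more against itself than either pure strategy earns against it, which is the evolutionary stability the essay alludes to.

```python
# Hawk-dove-bourgeois game (after Maynard Smith). Payoffs are to the row
# player. V = value of the contested territory, C = cost of an escalated
# fight. "Bourgeois" plays hawk when it owns the territory and dove when
# it is the intruder; each role occurs with probability 1/2.

V, C = 2.0, 4.0  # fighting costs more than the prize, so pure hawk is not stable

def payoff(row, col):
    """Expected payoff of strategy `row` against strategy `col`."""
    base = {
        ("hawk", "hawk"): (V - C) / 2,   # escalated fight; win or lose evenly
        ("hawk", "dove"): V,             # hawk takes the territory
        ("dove", "hawk"): 0.0,           # dove retreats
        ("dove", "dove"): V / 2,         # share (or random winner)
    }
    def resolve(r, c, row_owns):
        # A bourgeois player's move depends on whether it holds the territory.
        r = ("hawk" if row_owns else "dove") if r == "bourgeois" else r
        c = ("dove" if row_owns else "hawk") if c == "bourgeois" else c
        return base[(r, c)]
    # Average over the two equally likely ownership assignments.
    return 0.5 * (resolve(row, col, True) + resolve(row, col, False))

for s in ("hawk", "dove", "bourgeois"):
    print(s, [payoff(s, t) for t in ("hawk", "dove", "bourgeois")])
```

With these numbers, bourgeois scores 1.0 against itself, while hawk and dove each score only 0.5 against it: a population of ‘defend the sacred land at all costs, but only the land you hold’ players cannot be invaded by either pure strategy.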


From bombs to trash cans, the initial Apple icons were created by artist Susan Kare, who stumbled into a brand-new career path. Zachary Crockett of Priceonomics has an interview with the computer graphics pioneer. An excerpt:

“Kare was subsequently offered a fixed-length, part-time job designing fonts and icons for the Apple Macintosh; her business card read ‘HI Macintosh artist.’ She’d never worked on computer graphics before Apple, but quickly made strides to adjust to her new medium. ‘I remember I didn’t really know anything about digital typography, but I got as many books on it as I could,’ she recalls.

Kare found that pixels really weren’t that far removed from other forms of art — some of which dated back thousands of years:

‘I still joke that there’s nothing new under the sun, and bitmap graphics are like mosaics and needlepoint and other pseudo-digital art forms, all of which I had practiced before going to Apple. I didn’t have any computer experience, but I had experience in graphic design.’

When she began, Macintosh had no icon editor — just a way to ‘turn pixels on and off.’ But programmer Andy Hertzfeld soon sat down and worked out an icon editor that automatically generated the hex under the icons so that Kare could focus on the less technical aspects of her design work.

Usually, says Kare, the team would tell her what concepts they needed, and she would try to come up with a selection of things that might work; she’d try them out, and the final design would evolve from there. Her early icons drew inspiration from a wide range of sources — art history, wacky gadgets, and forgotten hieroglyphics.”


When we’re truly wired, when the Internet of Things is the thing, when the revolution is not televised but quantified, and it’s all so seamless, will we even know to smile? From a Foreign Affairs piece by Neil Gershenfeld and JP Vasseur:

“The Internet of Things is not just science fiction; it has already arrived. Some of the things currently networked together send data over the public Internet, and some communicate over secure private networks, but all share common protocols that allow them to interoperate to help solve profound problems.

Take energy inefficiency. Buildings account for three-quarters of all electricity use in the United States, and of that, about one-third is wasted. Lights stay on when there is natural light available, and air is cooled even when the weather outside is more comfortable or a room is unoccupied. Sometimes fans move air in the wrong direction or heating and cooling systems are operated simultaneously. This enormous amount of waste persists because the behavior of thermostats and light bulbs is set when buildings are constructed; the wiring is fixed and the controllers are inaccessible. Only when the infrastructure itself becomes intelligent, with networked sensors and actuators, can the efficiency of a building be improved over the course of its lifetime.

Health care is another area of huge promise. The mismanagement of medication, for example, costs the health-care system billions of dollars per year. Shelves and pill bottles connected to the Internet can alert a forgetful patient when to take a pill, a pharmacist to make a refill, and a doctor when a dose is missed. Floors can call for help if a senior citizen has fallen, helping the elderly live independently. Wearable sensors could monitor one’s activity throughout the day and serve as personal coaches, improving health and saving costs.

Countless futuristic ‘smart houses’ have yet to generate much interest in living in them. But the Internet of Things succeeds to the extent that it is invisible. A refrigerator could communicate with a grocery store to reorder food, with a bathroom scale to monitor a diet, with a power utility to lower electricity consumption during peak demand, and with its manufacturer when maintenance is needed. Switches and lights in a house could adapt to how spaces are used and to the time of day. Thermostats with access to calendars, beds, and cars could plan heating and cooling based on the location of the house’s occupants. Utilities today provide power and plumbing; these new services would provide safety, comfort, and convenience.

In cities, the Internet of Things will collect a wealth of new data. Understanding the flow of vehicles, utilities, and people is essential to maximizing the productivity of each, but traditionally, this has been measured poorly, if at all. If every street lamp, fire hydrant, bus, and crosswalk were connected to the Internet, then a city could generate real-time readouts of what’s working and what’s not. Rather than keeping this information internally, city hall could share open-source data sets with developers, as some cities are already doing.”


  • Why do reporters keep asking Republican leaders for their alternative to the Affordable Care Act, as if it were an opinion and not a law? After two national elections, passage in Washington (albeit partisan passage) and being upheld by a right-leaning Supreme Court, the legislation is still treated as something less than the law of the land. I suppose it’s liberal journalists trying to put the GOP on the spot because the party doesn’t have an alternative and doesn’t want one, really–it just wants the ACA to go away the way it wants legalized abortion to go away. But what’s the point? Even if Congress swings to the GOP in our gerrymandered country in the 2014 midterms, it won’t signal any chance at repeal with President Obama in the White House until the beginning of 2017. With tens of millions of people being insured by then through Obamacare, repeal is likely a moot point permanently.


In a New York Review of Books interview, George Soros elaborates on the desperation that is Vladimir Putin’s aggression in the Ukraine and his sabre-rattling on the world stage:

George Soros:

“The important thing to remember is that Putin is leading from a position of weakness. He was quite popular in Russia because he restored some order out of the chaos. The new order is not all that different from the old one, but the fact that it is open to the outside world is a definite improvement, an important element in its stability. But then the prearranged switch with Dmitry Medvedev from prime minister to president deeply upset the people. Putin felt existentially threatened by the protest movement. He became repressive at home and aggressive abroad.

That is when Russia started shipping armaments to the Assad regime in Syria on a massive scale and helped turn the tide against the rebels. The gamble paid off because of the preoccupation of the Western powers—the United States and the EU—with their internal problems. Barack Obama wanted to retaliate against Syria’s use of chemical weapons. He asked for congressional approval and was about to be rebuffed when Putin came to the rescue and persuaded Assad to voluntarily surrender his chemical weapons.

That was a resounding diplomatic victory for him. Yet the spontaneous uprising of the Ukrainian people must have taught Putin that his dream of reconstituting what is left of the Russian Empire is unattainable. He is now facing a choice between persevering or changing course and becoming more cooperative abroad and less repressive at home. His current course has already proved to be self-defeating, but he appears to be persevering.”


I’ve put up posts before about Immanuel Velikovsky, the Russian-born psychiatrist turned catastrophist crank who presented a radical alternative to accepted planetary history. He was friends with Albert Einstein and Freeman Dyson, and was always perturbed that they and others in the scientific community didn’t take his science fiction “Worlds in Collision” theory seriously. From “Visionary to the Fringe,” by Paula Findlen in the Nation:

“Early in the project, Velikovsky’s research took an unexpected turn. Seeking to confirm the historical reality of Exodus, he read the modern translation of the Ipuwer Papyrus and began to consider the potential correlation between ancient Egyptian catastrophes and biblical plagues: What had caused them, and were they indicative of a common pattern across cultures? After consulting Columbia anthropologist Franz Boas, he explored the records of ancient Mesoamerican civilizations. Velikovsky’s quest led him from the textual and archaeological challenges of deep history to the empirical findings and theoretical underpinnings of astrophysics, geology and paleontology. There, too, he found his greatest inspiration in historical sources, namely the scientific literature of the late seventeenth through early twentieth centuries, which lay neglected and largely forgotten in the stacks of the Columbia University library. Science’s past inspired his new vision of the present.

Velikovsky later observed that he rarely met professors in the library, lamenting the narrowly defined limits of their erudition in comparison with the breadth of his own. He read musty tomes that experts considered hopelessly out of date, attempting to absorb something from every possible domain of knowledge. In defense of his methodology, Velikovsky declared himself a historian and not a scientist, while nevertheless proclaiming the revolutionary importance of his findings for science. Historical data became his tool for rethinking science, though since Velikovsky failed to meet the empirical standards of either subject or to demonstrate his competence in basic research skills to expert satisfaction, neither discipline embraced him. However, scholarly disapproval has never been a serious impediment to public acclaim (consider the case of Trofim Lysenko or Malcolm Gladwell). Indeed, it became the cornerstone of his reputation as an anti-establishment figure, a latter-day Giordano Bruno or Galileo willing to be condemned as an intellectual heretic for defying authorities in pursuit of truth.”


Velikovsky appearing on a 1964 episode of Camera Three:


Monkeys hitting random keys on typewriters (or tablets) would take eons to write Hamlet but not quite as long to write something better than Hamlet. It’s logical, even if it’s almost completely useless information. From Alex Mayyasi at Priceonomics:

“Mathematicians have spent time calculating how long it would take a monkey to write a copy of Hamlet (even if they perform better than the macaques in England, the answer is a really long time — orders of magnitude longer than the universe has existed). But Borges’s Total Library idea suggests an important corollary to the Infinite Monkey Theorem: a monkey hitting random keys on a typewriter would most likely write something superior to Shakespeare long before it produced a copy of Hamlet.

The logic is simple. The odds of a monkey writing an intelligible sentence are low, but the odds of one writing a sentence from Hamlet are astronomical because there are many possible intelligible sentences but a limited number of sentences in Hamlet. In the same way, there are a limited number of works by Shakespeare, but there are an almost infinite number of plays and books that are better than Shakespeare, ranging from a copy of Hamlet with one small, superior tweak to yet-to-be-written sci-fi novels to George R.R. Martin’s Game of Thrones series.”
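Just how long “a really long time” is can be sketched in a few lines. The assumptions here are mine, not Mayyasi’s: a toy typewriter with 27 keys (26 lowercase letters plus a space bar), each struck uniformly at random. The chance of producing a specific n-character string is (1/27)^n, so the expected number of attempts before a match is 27^n.

```python
# Back-of-the-envelope arithmetic for the Infinite Monkey Theorem.
# Assumes a 27-key typewriter (a-z plus space) struck uniformly at random.
import math

def log10_expected_attempts(text, keys=27):
    """log10 of the expected number of random n-character strings typed
    before an exact match of `text` appears (i.e., log10 of keys**len(text))."""
    return len(text) * math.log10(keys)

# A single 39-character line of Hamlet, not the whole play:
phrase = "to be or not to be that is the question"
print(round(log10_expected_attempts(phrase)))  # ≈ 56, i.e. about 10^56 attempts
```

Even at a billion keystrokes per second for the entire age of the universe (roughly 4 × 10^17 seconds), a monkey samples fewer than 10^27 sequences — nowhere near the 10^56 needed for one famous line, let alone the full play. That gap is why “orders of magnitude longer than the universe has existed” is, if anything, an understatement.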


I was an early adopter of Gmail in 2004, back when you still couldn’t create accounts at will but had to get a code sent to your cellphone to complete the sign-up process. I still remember the disquiet I felt the first time ads seemed to be targeting me based on keywords in my messages. A decade later, I’m pretty much sick of the service, “free” though it is. It’s such clutter now, with so many of my emails filtered into the wrong folders and numerous “great offers” sent to me that I don’t want. I need to switch to an email service that’s stripped down and simplified.

From Harry McCracken’s new Time article, “How Gmail Happened”:

“The first true landmark service to emerge from Google since its search engine debuted in 1998, Gmail didn’t just blow away Hotmail and Yahoo Mail, the dominant free webmail services of the day. With its vast storage, zippy interface, instant search and other advanced features, it may have been the first major cloud-based app that was capable of replacing conventional PC software, not just complementing it.

Even the things about Gmail that ticked off some people presaged the web to come: Its scanning of messages to find keywords that could be used for advertising purposes kicked off a conversation about online privacy that continues on to this day.

Within Google, Gmail was also regarded as a huge, improbable deal. It was in the works for nearly three years before it reached consumers; during that time, skeptical Googlers ripped into the concept on multiple grounds, from the technical to the philosophical. It’s not hard to envision an alternate universe in which the effort fell apart along the way, or at least resulted in something a whole lot less interesting.

‘It was a pretty big moment for the Internet,’ says Georges Harik, who was responsible for most of Google’s new products when Gmail was hatched. (The company called such efforts ‘Googlettes’ at the time.) ‘Taking something that hadn’t been worked on for years but was central, and fixing it.’”

