Excerpts


Private zoos have always been a strange beast, and some Animal Planet enthusiasts now oddly invite the elephant into the living room (scroll down to second item), but one menagerie faced a far crazier time during Japan's Great Zoo Massacre of WWII. The opening of “Bambi and Tong Tong,” Julia Adeney Thomas' Times Literary Supplement review of a new book about the psychology and politics of the horror: 

“Behind the curtain of empire, horrors lurk. At the Tokyo Imperial Zoo on September 4, 1943, two starving elephants remained silent, obedient to their trainers, while a religious service on the other side of a red-and-white awning prematurely memorialized their sacrifice for Japan’s imperial cause. Buddhist monks, government officials and schoolchildren made offerings of food to the elephants’ spirits and to the spirits of other captive animals killed by order of the government. This unprecedented ceremony known as the ‘Memorial Service for Martyred Animals’ was held on the zoo’s grounds where nearly a third of the cages stood empty. Lions from Abyssinia, tigers representative of Japan’s troops, bears from Manchuria, Malaya and Korea, an American bison, and many others had been clubbed, speared, poisoned and hacked to death in secret. Although the zoo’s director had found a way to save some of the condemned creatures by moving them to zoos outside Tokyo, Mayor Ōdaichi Shigeo insisted on their slaughter. Ōdaichi himself, along with Imperial Prince Takatsukasa Nobusuke and the chief abbot of Asakusa’s Sensōji Temple, presided over the carefully choreographed and highly publicized ‘Memorial Service,’ thanking the animals for sacrificing themselves for Japan’s war effort.

But the elephants were not dead.”


The path to wearables has been a tortured one, with failures keeping pace with promises. You would assume that if we could shrink the pacemaker from something the size of a car into a tiny implantable device, embedding tech into clothes and accoutrements would be a snap–but that hasn't been the case. Perhaps things will change. From “The Future of Wearables,” by Shara Tibken at CNet:

Look for completely different products to emerge. Health care is an area that could see a surge in wearables. We’ll also see more wearables for pets, such as new activity and biometrics trackers, as well as toys.

There will also be other types of devices that extend the capabilities of the smartphone or allow for social interaction, like a ring that lights up when a loved one taps the other half of the matching pair.

Another big area is clothing. For instance, manufacturers are working on smart buttons that could change the color of a fabric when pushed or buttons and fabric that could measure UV exposure in sports equipment.

‘This year we’re hoping to see the beginning of the wearables market showing its diversity,’ said Robert Thompson, business development leader for Freescale’s i.MX application processor line.”


Whether we’re talking about governments and corporations spying on individuals or citizens leaking classified documents, I think the main problem isn’t that legislation hasn’t yet caught up to technology, but that it can’t and won’t. When information is so easy to intercept, when you can download Deep Throat, when everyone can be proven guilty, what will the new morality be?

A few differences between Ellsberg’s Pentagon Papers leak and Assange, Manning and Snowden, from “The Three Leakers and What to Do About Them,” by David Cole at the New York Review of Books:

“First, unlike Nixon, Obama did not attempt to prohibit the publication of any of Snowden’s or Manning’s leaks. The Pentagon Papers case, thanks in part to Goodale’s own arguments before the courts, established an extraordinarily high legal bar for enjoining publication, and that bar holds today. For many of the justices in the Pentagon Papers case, however, that bar applied only to ‘prior restraints’—requests to prohibit publication altogether—and would not apply to after-the-fact criminal prosecutions of leakers. While the Times was not prosecuted, Ellsberg was, and his case was dismissed not on First Amendment grounds, but on the basis of prosecutorial misconduct.

Second, the digital age has profoundly altered the dynamics and stakes of leaks. Computers make stealing documents much more efficient. Ellsberg had to spend months manually photocopying the Pentagon Papers. Manning used his computer to download over 700,000 documents, and Snowden apparently stole even more. The Internet makes disclosures across national borders much easier. Manning uploaded his documents directly to WikiLeaks’ website, hosted in Sweden, far beyond US reach. Snowden gave access to his documents to journalists in Germany, Brazil, and the US, and they have in turn published them in newspapers throughout the world.

Third, computers and the Internet have at the same time made it easier to identify and prosecute leakers. When someone leaked the fact that the US had placed an agent inside an active al-Qaeda cell in May 2012, an entirely unjustifiable disclosure, the Justice Department spent eight months investigating the old-fashioned way, interviewing over 550 people without success. But when the prosecutors subpoenaed phone records of the Associated Press offices and reporters involved in publishing the story, they promptly identified the leaker, an FBI agent, and obtained a guilty plea.”


I think of the era in America between the one wallpapered with newsprint (pre-1960) and the one given to smartphone updates (today), that time when TV news was predominant, as an age of delusion. That was when Newt Gingrich's word games could work, when a mugshot of Willie Horton could win. It was an age of bullshit and manipulation. Why, an actor playing a part could become President, aided by Hallmark Card-level writers.

You're free to feel less than sanguine about the transition, about the financial metrics of newsgathering and the threat it poses to less-profitable but vital journalism (as I sometimes am), but I will take the deluge of information we get now over centralized media, when far fewer had far greater control of the flow. People seem to get bamboozled much less now. Let it rain, I say. Let it pour. Let us swim together in the flood.

Anyhow, we always romanticized the wrong part of the newspaper. It wasn’t great because of the print. I mean, what’s so important about a lousy, crummy newspaper?

From “The Golden Age of Journalism?” a wonderful TomDispatch essay by Tom Engelhardt about the downfall of one type of news and the thing that has supplanted it:

In so many ways, it's been, and continues to be, a sad, even horrific, tale of loss. (A similar tale of woe involves the printed book. Its only advantage: there were no ads to flee the premises, but it suffered nonetheless — already largely crowded out of the newspaper as a non-revenue producer and out of consciousness by a blitz of new ways of reading and being entertained. And I say that as someone who has spent most of his life as an editor of print books.) The keening and mourning about the fall of print journalism has gone on for years. It's a development that represents — depending on who's telling the story — the end of an age, the fall of all standards, or the loss of civic spirit and the sort of investigative coverage that might keep a few more politicians and corporate heads honest, and so forth and so on.

Let’s admit that the sins of the Internet are legion and well-known: the massive programs of government surveillance it enables; the corporate surveillance it ensures; the loss of privacy it encourages; the flamers and trolls it births; the conspiracy theorists, angry men, and strange characters to whom it gives a seemingly endless moment in the sun; and the way, among other things, it tends to sort like and like together in a self-reinforcing loop of opinion. Yes, yes, it’s all true, all unnerving, all terrible.

As the editor of TomDispatch.com, I’ve spent the last decade-plus plunged into just that world, often with people half my age or younger. I don’t tweet. I don’t have a Kindle or the equivalent. I don’t even have a smart phone or a tablet of any sort. When something — anything — goes wrong with my computer I feel like a doomed figure in an alien universe, wish for the last machine I understood (a typewriter), and then throw myself on the mercy of my daughter.

I’ve been overwhelmed, especially at the height of the Bush years, by cookie-cutter hate email — sometimes scores or hundreds of them at a time — of a sort that would make your skin crawl. I’ve been threatened. I’ve repeatedly received “critical” (and abusive) emails, blasts of red hot anger that would startle anyone, because the Internet, so my experience tells me, loosens inhibitions, wipes out taboos, and encourages a sense of anonymity that in the older world of print, letters, or face-to-face meetings would have been far less likely to take center stage. I’ve seen plenty that’s disturbed me. So you’d think, given my age, my background, and my present life, that I, too, might be in mourning for everything that’s going, going, gone, everything we’ve lost.

But I have to admit it: I have another feeling that, at a purely personal level, outweighs all of the above. In terms of journalism, of expression, of voice, of fine reporting and superb writing, of a range of news, thoughts, views, perspectives, and opinions about places, worlds, and phenomena that I wouldn’t otherwise have known about, there has never been an experimental moment like this. I’m in awe. Despite everything, despite every malign purpose to which the Internet is being put, I consider it a wonder of our age. Yes, perhaps it is the age from hell for traditional reporters (and editors) working double-time, online and off, for newspapers that are crumbling, but for readers, can there be any doubt that now, not the 1840s or the 1930s or the 1960s, is the golden age of journalism?

Think of it as the upbeat twin of NSA surveillance.


Excerpts from two articles about Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, which is one of the greatest films ever made, yet only my fourth or fifth favorite Stanley Kubrick movie, which shows you how highly I rank his work. It’s as perfect now as it was when released 50 years ago, as timeless as Patton or Duck Soup. In fact, it’s Patton *as* Duck Soup. It’s tremendously funny yet no laughing matter.

_________________________

From “Doctor's Orders,” Bilge Ebiri's 2009 Moving Image Source article explaining how a very serious novel became a Kubrick comedy:

After their initial drafts, Kubrick and his producing partner James B. Harris, with whom he had made The Killing, Paths of Glory, and Lolita, workshopped the script (then called The Delicate Balance of Terror) in New York. “They’d stay up late into the night cracking up over it, overcome by their impulse towards gallows humor,” says Mick Broderick, the author of Nuclear Movies and an extensive forthcoming study of Strangelove. Harris would soon leave to forge his own directorial career (his admirably tense 1965 directorial debut, The Bedford Incident, concerns a confrontation between an American destroyer and a Soviet submarine). But when Kubrick later called his former partner to tell him that he had decided to turn Delicate Balance into an actual comedy, Harris was skeptical, to say the least. “He thought, ‘The kid’s gonna destroy his career!’” says Broderick.

The absurd hilarity of the situation had never quite stopped haunting the director, as he and George continued to work on the film. It wasn’t so much the premise of the Red Alert story as everything Kubrick was learning about the thinking behind thermonuclear strategy. The director, even then notorious for thorough research, had become friendly with a number of scientists and thinkers on the subject, some with George’s help, including the notorious RAND strategist Herman Kahn, who would talk with a straight face about “megadeaths,” a word he had coined in the 1950s to describe one million deaths. As Kubrick told Joseph Heller:

Incongruity is certainly one of the sources of laughter—the incongruity of sitting in a room talking to somebody who has a big chart on the wall that says 'tragic but distinguishable postwar environments' and that says 'one to ten million killed.' … There is something so absurd and unreal about what you're talking about that it's almost impossible to take it seriously.•

_________________________

From “Almost Everything in Dr. Strangelove Was True,” a New Yorker blog post about the scary reality that informed the nervous laughter, by Eric Schlosser, author of Command and Control:

This month marks the fiftieth anniversary of Stanley Kubrick’s black comedy about nuclear weapons, Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. Released on January 29, 1964, the film caused a good deal of controversy. Its plot suggested that a mentally deranged American general could order a nuclear attack on the Soviet Union, without consulting the President. One reviewer described the film as “dangerous … an evil thing about an evil thing.” Another compared it to Soviet propaganda. Although Strangelove was clearly a farce, with the comedian Peter Sellers playing three roles, it was criticized for being implausible. An expert at the Institute for Strategic Studies called the events in the film “impossible on a dozen counts.” A former Deputy Secretary of Defense dismissed the idea that someone could authorize the use of a nuclear weapon without the President’s approval: “Nothing, in fact, could be further from the truth.” (See a compendium of clips from the film.) When Fail-Safe—a Hollywood thriller with a similar plot, directed by Sidney Lumet—opened, later that year, it was criticized in much the same way. “The incidents in Fail-Safe are deliberate lies!” General Curtis LeMay, the Air Force chief of staff, said. “Nothing like that could happen.” The first casualty of every war is the truth—and the Cold War was no exception to that dictum. Half a century after Kubrick’s mad general, Jack D. Ripper, launched a nuclear strike on the Soviets to defend the purity of “our precious bodily fluids” from Communist subversion, we now know that American officers did indeed have the ability to start a Third World War on their own. And despite the introduction of rigorous safeguards in the years since then, the risk of an accidental or unauthorized nuclear detonation hasn’t been completely eliminated.•


A passage from “Can We Equate Computing with Art?” novelist Vikram Chandra’s very good Financial Times consideration of the aesthetics of 0s and 1s:

“Most of the artists I know – painters, film-makers, actors, poets – seem to regard programming as an esoteric scientific discipline; they are keenly aware of its cultural mystique, envious of its potential profitability and eager to extract metaphors, imagery, and dramatic possibility from its history, but coding may as well be nuclear physics as far as relevance to their own daily practice is concerned.

Many programmers, on the other hand, regard themselves as artists. Since programmers create complex objects, and care not just about function but also about beauty, they are just like painters and sculptors. The best-known assertion of this notion is the 2003 essay 'Hackers and Painters' by programmer and venture capitalist Paul Graham. 'Of all the different types of people I've known, hackers and painters are among the most alike,' writes Graham. 'What hackers and painters have in common is that they're both makers. Along with composers, architects, and writers, what hackers and painters are trying to do is make good things.'

According to Graham, the iterative processes of programming – write, debug (discover and remove bugs, which are coding errors), rewrite, experiment, debug, rewrite – exactly duplicate the methods of artists. ‘The way to create something beautiful is often to make subtle tweaks to something that already exists, or to combine existing ideas in a slightly new way,’ he writes. ‘You should figure out programs as you’re writing them, just as writers and painters and architects do.’

Attention to detail further marks good hackers with artist-like passion, he argues. ‘All those unseen details [in a Leonardo da Vinci painting] combine to produce something that’s just stunning, like a thousand barely audible voices all singing in tune. Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one is ever supposed to see are beautiful too.’

This desire to equate art and programming has a lengthy pedigree.”


Contrarian theoretical physicist Lee Smolin is interviewed by Michael Segal of Nautilus about the nature of time. A passage about the intersection of religion and science at the dawn of physics:

Michael Segal:

Newton was revolutionary in part because he applied a timeless set of laws to the whole universe. Was he wrong to do so?

Lee Smolin:

Physics was invented by people who happened to be very religious. Newton is one example. For him the laws of nature and their mathematical representations were synonymous with knowing the thoughts of God: Space was the sensorium of God and true time was the time in which God experienced the world and made things in the world. And Newton’s style of doing physics works perfectly when you apply it to a small part of the universe, say something going on in a laboratory. But when you take Newton’s style of doing physics and apply it to the universe as a whole, you implicitly assume that there is something outside the universe making things happen inside the universe, the same way there’s something outside the laboratory system making things happen in the laboratory. What I think has happened is that even physicists who have no religious faith or commitment have gotten sucked into a form of explanation which has a religious underpinning, by which I mean it requires pointing to something outside the universe in order to give a complete explanation. Many people who think of themselves as atheists do this habitually. In my view, it makes them think sloppy thoughts about cosmology. When it comes to extending science to the universe as a whole, you have to think differently than applying science to a laboratory system.

Michael Segal:

Is it not possible for our universe to be affected by other universes?

Lee Smolin:

It is possible. But you know, science is not about what might be the case, science is about what we can demonstrate is the case through publicly available evidence. There’s an infinite number of things that might be the case: There might be other universes, there might be a platonic realm in which mathematical objects move eternally, there might be God and heaven and angels. But science is not about that. If you want to explain the whole universe within science, you have to explain the laws in terms of things inside the universe itself. I think this is the only aspiration for cosmology that’s true to the real spirit of science.”


I’m in favor of getting news to the people in any and all manners, so the following sentences about a new Facebook app don’t strike me as being as vampiric and frightening as you might expect. Print news has been on a collision course with computerization and, to some extent, automation, for four decades. I don’t think these adjustments, painful though they are, will replace the primary goal of news but ensure its existence. The channels need to be fed. The programming of the channels is, of course, worrisome. But that was the case in pre-digital times as well. From Reed Albergotti at the WSJ:

“On Thursday, Facebook introduced a long-awaited mobile app, called Paper, that offers users a personalized stream of news. Facebook said it will be available Feb. 3 for the iPhone; there is no date yet for Android.

Instead of editors and reporters, Facebook’s publication is staffed by a computer algorithm and human ‘curators.’ The content comes from outside sources, based on links shared by the social network’s 1.2 billion users. During a recent demonstration, the curated content featured articles from The New York Times, The Washington Post and Time magazine, among others.

The move is part of Facebook’s long-term strategy to be more than just a popular app, or a destination on the Internet. Facebook wants to be the global hub of human communication, essential in the lives of its users.”


Three quick exchanges from Erik Lundegaard’s 1996 interview with then-fledgling Internet entrepreneur Jeff Bezos, when Amazon was merely books, and delivery drones and Washington Post ownership were most certainly not in the offing.

__________________

Question:

So how did you come up with the idea for this company?

Jeff Bezos:

… I looked at several different areas and finally decided that one of the most promising ones is interactive retailing. Then I made a list of 20 products, and force-ranked them, looking for the first-best product to sell on-line.

In the top five were things like magazine subscriptions, computer hardware, computer software, and music. The reason books really stood out is because there are so many books. Books are totally unusual in that respect—to have so many items in a particular category. There are one and a half million English-language books, different titles, active and in-print at any given time. There are three million titles active and in-print worldwide in all languages. If you look at the number two category in that respect, it's music, and there are only about 200 thousand active music CDs. Now when you have a huge number of items that's where computers start to shine because of their sorting and searching and organizing capabilities. Also, it's back to this idea that you have to have an incredibly strong value proposition. With that many items, you can build a store on-line that literally could not exist in any other way. It would be impossible to have a physical bookstore with 1.5 million titles. The largest physical bookstores in the world only have about 175,000 titles. It would also be impossible to print the amazon.com catalogue and make it into a paper catalogue. If you were to print the amazon.com catalogue it would be the size of seven New York City phone books.

So here we’re offering a service that literally can’t be done in any other way, and, because of that, people are willing to put up with this infant technology.

__________________

Question:

 Do you have a favorite book?

Jeff Bezos:

It used to be Dune. I'm sort of a techno-geek, propeller-head, science-fiction type, but my wife got me to read Remains of the Day and I liked that a lot. I also like the Penguin edition of Sir Richard Francis Burton's biography.

___________________

Question:

How will all of this affect physical bookstores?

Jeff Bezos:

I think you’ll see a continuation of the trend that’s already in place, which is that physical bookstores are going to compete by becoming better places to be. They’ll have better lattes, better sofas, all this stuff. More comfortable environments. I still buy about half of my books from physical bookstores and one of the big reasons is I like being in bookstores. It’s just like TV didn’t put the movies out of business—people still like to go to the movie theater, they like to mingle with their fellow humans—and that’s going to continue to be the case. Good physical bookstores are like the community centers of the late 20th century. Good physical bookstores have great authors come in and you can meet them and shake their hands, and that’s a different thing. You can’t duplicate that on-line.


Extrapolating a degree beyond the idea of self-replicating machines, George Zarkadakis of the Telegraph wonders whether robots will eventually pair off and hook up, whether the future of “life” will be determined by sexed machines. From his article:

“Perhaps by exploring and learning about human evolution, intelligent machines will come to the conclusion that sex is the best way for them to evolve. Rather than self-replicating, like amoebas, they may opt to simulate sexual reproduction with two, or indeed innumerable, sexes.

Sex would defend them from computer viruses (just as biological sex may have evolved to defend organisms from parasitical attack), make them more robust and accelerate their evolution. Software engineers already use so-called ‘genetic algorithms’ that mimic evolution.

Nanotechnologists, like Eric Drexler, see the future of intelligent machines at the level of molecules: tiny robots that evolve and – like in Lem’s novel – come together to form intelligent superorganisms. Perhaps the future of artificial intelligence will be both silicon- and carbon-based: digital brains directing complex molecular structures to copulate at the nanometre level and reproduce. Perhaps the cyborgs of the future may involve human participation in robot sexual reproduction, and the creation of new, hybrid species.

If that is the future, then we may have to reread Paley’s Natural Theology and take notice. Not in the way that creationists do, but as members of an open society that must face up to the possible ramifications of our technology. Unlike natural evolution, where high-level consciousness and intelligence evolved late as by-products of cerebral development in mammals, in robotic evolution intelligence will be the guiding force. Butler will be vindicated. Brains will come before bodies. Robotic evolution will be Intelligent Design par excellence. The question is not whether it may happen or not, but whether we would want it to happen.”
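The “genetic algorithms” Zarkadakis mentions in passing are a real and fairly humble technique: candidate solutions are scored, the fittest are bred together with a dash of random mutation, and the cycle repeats. A minimal sketch in Python, with a toy fitness function of my own invention (evolving a bit string of all 1s), not anything drawn from his article:

```python
import random

GENOME_LEN = 20          # length of each bit-string "genome"
POP_SIZE = 30            # individuals per generation
MUTATION_RATE = 0.02     # chance of flipping each bit

def fitness(genome):
    # Toy score: count the 1-bits. A real use would score an antenna, a schedule, a trading rule.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover: the "sexual reproduction" step.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break                                    # a perfect genome has evolved
    parents = population[: POP_SIZE // 2]        # the fitter half survives
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best genome after generation", generation, "scores", fitness(population[0]))
```

Swap in a function that scores an antenna shape, a delivery schedule or a trading rule and the same loop still applies; that generality is what tempts people to imagine machine evolution on grander scales.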

 


It seems like the “engineering” of babies, “designer babies” as they're often called, will happen at some point, but I would think it will be a slow, gradual process, this shock of the new coming to newborns. From Ferris Jabr's Scientific American blog post, “Are We Too Close to Making Gattaca a Reality?”:

“In 2009 [Jeffrey] Steinberg announced that he would soon give parents the option to choose their child’s skin color, hair color and eye color in addition to sex. He based this claim on studies in which scientists at deCode Genetics in Iceland suggested they could identify the skin, hair and eye color of a Scandinavian by looking at his or her DNA. ‘It’s time for everyone to pull their heads out of the sand,’ Steinberg proclaimed to the BBC at the time. Many fertility specialists were outraged. Mark Hughes, a pioneer of pre-implantation genetic diagnosis, told the San Diego Union-Tribune that the whole idea was absurd and the Wall Street Journal quoted him as saying that ‘no legitimate lab would get into it and, if they did, they’d be ostracized.’ Likewise, Kari Stefansson, chief executive of deCode, did not mince words with the WSJ: ‘I vehemently oppose the use of these discoveries for tailor-making children,’ he said. Fertility Institutes even received a call from the Vatican urging its staff to think more carefully. Steinberg withdrew his proposal.

But that does not mean he and other likeminded clinicians and entrepreneurs have forgotten about the possibility of parents molding their children before birth. ‘I’m still very much in favor of using genetics for all it can offer us,’ Steinberg says, ‘but I learned a lesson: you really have to take things very, very slowly, because science is scary to a lot of people.'”


Buckminster Fuller wanted to dome a chunk of Manhattan, but even that plan wasn't as outré as his designs for a floating city. From “10 Failed Utopian Cities That Influenced the Future,” a fun io9 post by Annalee Newitz and Emily Stamm:

“Cloud Nine, the Floating City

Science — and science fiction — often influenced city designers. But nobody took futuristic ideas more seriously than mid-twentieth century inventor Buckminster Fuller, who responded to news about overcrowding in Tokyo by imagining cities in the sky. The Spherical Tensegrity Atmospheric Research Station, called STARS or Cloud 9s, would be composed of giant geodesic spheres. When filled with air, the sphere would weigh one-thousandth of the weight of the air inside it. Fuller planned on heating that air with solar power or human activity, causing the sphere to float. He would anchor his floating cities to mountains, or let them drift around the world. They were never built, but Fuller’s idea for a pre-fab, geodesic dome dwelling called Dymaxion House eventually influenced the pre-fab house movement which is still going strong.”


News as it used to be produced is a niche item now. It may always have been to some degree, but more so today. But is that necessarily a bad thing? I think that in our decentralized age, American citizens are far less likely to be bullshitted than they were not too long ago. It may be best that news is delivered in all forms from all directions.

The opening of “Doesn't Anyone Read The News?” by Timothy Wu at the New Yorker blog:

“The State of the Union address is one of the few times each year when a large percentage of Americans reliably pay attention to politics. Once upon a time, as legend has it, things were different: most Americans tuned into Walter Cronkite in the evening or picked up the morning newspaper, which covered matters of national and international importance, like politics, foreign affairs, and business developments.

If analysts at Microsoft Research are correct, a startling number of American Web users are no longer paying attention to the news as it is traditionally defined. In a recent study of ‘filter bubbles,’ Sharad Goel, Seth Flaxman, and Justin Rao asked how many Web users actually read the news online. Out of a sample of 1.2 million American users, just over fifty thousand, or four per cent, were ‘active news customers’ of ‘front section’ news. The other ninety-six per cent found other things to read.”

 


If I were offered a good job in Los Angeles, I think I'd move in a minute. I'm one of those people who was born and raised in New York, has lived here my whole life, and could never have imagined living elsewhere. This city has been the biggest part of my education, has taught me as much as a place can. But the last couple of decades have changed it in ways that pain me. So much that was interesting is gone. The way poor and working-class people have been pushed to the edges–to the brink–just saddens me. It was our city, and it was as beautiful as it was ugly. And in the last half-dozen years or so, so many of the brightest, most-creative people I know have left for better opportunities elsewhere.

I’m not one of those people who romanticizes Times Square of the bad old days. I don’t think of child prostitutes as useful props in the fantasies of those who love the idea of urban grit. But I don’t think we had to become a shopping mall, either.

New York was always about money, but it wasn’t only about money. You could create disco or rap or art from what others discarded. It wasn’t a city for the few but for the masses. You could have less but still be equal. You felt like you had it all, even if you had next to nothing. I don’t think that’s true anymore. 

Friends chide me for feeling this way. You act like it’s Toledo or something, they say. They’re right. New York is still more interesting than Toledo. But was that the goal? 

I probably wouldn’t really like anywhere else, either. But being disappointed by a place not your own is different than being disappointed by home.

Millions of other New Yorkers across decades have said the same things about the city that I’m saying now, and they’ve all been wrong. And I’m wrong, too. But I still feel that way.

I think one advantage L.A. has over New York has long been viewed as a deficit: its sprawl. When something has no center, it can't really be “fixed” (or ruined). From “Los Angeles: a City That Outgrew Its Masterplan. Thank God,” by Colin Marshall in the Guardian:

“This lack of definition makes it no easy place to write about, and the challenge has reduced many an otherwise intelligent observer to the comforts of obscurantism and polemic. Nobody understands Los Angeles who thinks about it only through the framework of its entertainment industry, its freeways, its class divisions, or its race relations. I don’t even pretend to understand Los Angeles, but living here I’ve undergone the minor enlightenment whereby I recuse myself from the obligation of doing so.

My own time in LA has, in fact, brought me to see many other world cities as theme-park experiences by comparison, made enjoyable yet severely limited by the claims of their images. San Francisco has long strained under the sheer fondness roundly felt for it, or at least for an idea of it, never quite living up to how people imagine or half-remember it in various supposedly prelapsarian states of 20, 40, 60 years ago. New York has similarly struggled with perceptions of it as the ultimate expression of the urban, and even lovers of Paris come back admitting that Paris-as-reality seems hobbled by Paris-as-idea.


Some think the government is gaining too much control over us at the very instant that I think the opposite is happening. Pretty soon, as the anarchy of the Internet is loosed back into the real world, it will be tougher to control much of anything. Big Brother can watch, but can he act?

That wonderful Browser blog pointed me to “The Drug Revolution No One Can Stop,” Mike Power’s Medium article about designer drugs that are made to order and delivered to you like a chair, a lamp, a knife. An excerpt:

MXE is part of a cultural shift that started a generation ago, but has taken on a new edge in the last few years. In 2008, the first in a wave of new, legal, synthetic drugs emerged into the mainstream. They had little to no history of human use. Instead, they were concocted in labs by tweaking a few atoms here and there—creating novel, and therefore legal, substances. Sold mainly online, these designer drugs cover every category of intoxication imaginable, and their effects resemble the full range of banned drugs, from the mellowness of marijuana to the extremes of cocaine and LSD. They are known as ‘legal highs,’ and they have exploded in popularity: the 2012 Global Drugs Survey found that one in twelve people it surveyed worldwide takes them.

Legislators around the world have been put off-balance by the emergence of this massively distributed, technically complex and chemically sophisticated trade. And the trade is growing rapidly.

In 2009 The European Monitoring Centre for Drugs and Drug Addiction’s early warning system identified 24 new drugs. In 2010, it identified 41. In 2011, another 49, and in 2012, there were 73 more. By October 2013, a further 56 new compounds had already been identified—a total of 243 new compounds in just four years.

In its latest World Drug Report, the United Nations acknowledged this extraordinary expansion: ‘While new harmful substances have been emerging with unfailing regularity on the drug scene,’ it said, ‘the international drug control system is floundering, for the first time, under the speed and creativity of the phenomenon.’

Technology and drugs have always existed in an easy symbiosis: the first thing ever bought and sold across the Internet was a bag of marijuana. In 1971 or 1972, students at Stanford University’s Artificial Intelligence Laboratory used ARPANET—the earliest iteration of the Internet—to arrange a marijuana deal with their counterparts at the Massachusetts Institute of Technology.”


Oh, I have trouble reading science fiction. The ideas are interesting, but the actual writing usually leaves me cold. There are some exceptions, of course, as there always are in life, but I doubt I'll ever have a period in which I dive deeply into the genre. Rebecca J. Rosen of the Atlantic has an interview with Dan Novy and Sophia Bruckner of MIT, who are going to be teaching a course called “Science Fiction to Science Fabrication.” A passage from the Q&A about one of the exceptions, Philip K. Dick:

Rebecca J. Rosen:

What are some specific examples you’ll be looking at?

Sophia Bruckner:

For example, we will be reading the classic Do Androids Dream of Electric Sheep? by Philip K. Dick, who is one of my favorite authors and is a master of crazy gadget ideas. The devices he describes in his writings can be very humorous and satirical but are truly profound. People have probably seen Blade Runner, an excellent movie based on this book, but the book is very different! Many of the most compelling devices from the book did not make it into the movie.

For example, the Mood Organ is a device that allows the user to dial a code to instantly be in a certain mood. The book contains multiple funny instances of people using this device, such as when one character plugs in the code 888 to feel ‘the desire to watch TV no matter what is on,’ but Dick also points out some disturbing implications resulting from the existence of such a technology. ‘How much time do you set aside each month for specific moods?’ asks one character. Should you be happy and energized to work all the time? This character eventually concludes that two days a month is a reasonable amount for feeling despair. Today, we are hoping science and technology will find the secret to forever happiness, but what will happen if we actually succeed?

Another one of my favorite gadgets from Do Androids Dream of Electric Sheep? is the Empathy Box. A person holds the handles on the Empathy Box and is connected with all other people using it at the same time by sharing the feelings of a spiritual figure named William Mercer. Amazingly, even in 1968, Dick saw the potential for technology to not only connect people across long distances but to do so with emotional depth. Dick writes that the Empathy Box is ‘the most personal possession you have! It’s an extension of your body; it’s the way you touch other humans, it’s the way you stop being alone.’

Actually, I just realized while answering this question that I’ve been attempting to build a version of the Empathy Box as part of my thesis! I believe people crave for their computers and phones to fulfill this need for connection, but they manage to do so only superficially. As a result, people feel increasingly estranged and alone despite being connected all the time. Like Dick, I also am intrigued by how to use technology to promote empathy and a greater sense of genuine interconnectedness with one another, and I am currently working on designing wearable devices to do this. Some of my best ideas stem from reading science fiction, and I often don’t realize it until later!'”


Since copies of cells are less perfect than healthy original ones, I would assume some cognitive decline occurs over time, that our brains deteriorate as do our other organs. But the age-decline of brain matter has probably always been somewhat overstated; we forget more over time simply because we have inelastic memories that are taxed by a surfeit of information collected over a lifetime. Perhaps we know too much. That’s why it’s a good thing, not a scary thing, for some of our data to be stored in computers, for our heads to be in the cloud. We just don’t have the necessary space for so much information. We can’t fit it all in our heads. We need more room.

From “The Older Mind May Just Be a Fuller Mind,” by Benedict Carey in the New York Times:

“Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing. In a paper published in Topics in Cognitive Science, a team of linguistic researchers from the University of Tübingen in Germany used advanced learning models to search enormous databases of words and phrases.

Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging ‘deficits’ largely disappeared.

‘What shocked me, to be honest, is that for the first half of the time we were doing this project, I totally bought into the idea of age-related cognitive decline in healthy adults,’ the lead author, Michael Ramscar, said by email. But the simulations, he added, ‘fit so well to human data that it slowly forced me to entertain this idea that I didn't need to invoke decline at all.’”

Can it be?


Anything we can conjure in our minds, no matter how far-fetched, could happen eventually in reality. Maybe not exactly in the form we hoped or as soon as we wanted, but in some sense. From a Code(Love) piece by Roger Huang about digitally raising the dead, a favorite pursuit of Ray Kurzweil:

Will virtual intelligence ever be anything more than a figment of a real person? The question examines everything humans have always assumed about human nature: that we are unique, and that we are defined by our uniqueness against non-humans. We possess a strange combination of social interaction, physical manipulation, and processing power that is hard to define, so we often use comparisons to living things that are distinctly not human to define ourselves.

We are not cows. We are not dolphins. We are not chimpanzees, even though that is getting uncomfortably close.

The closer robots get to piercing that space, the more uncomfortable humans get with them. This is the ‘uncanny valley.’ The more robots look, and act like humans, even if we distinctly know they are not, the more we revile them. Like the broken souls of the Ring, poorly designed robots can lead us to hate, and to pain, because they lead us to question who we truly are.

Virtual life that humans can accept must pass the Turing Test. It must fool a human into thinking that it too is a human, that it is really he or she. When Ray sits down to talk with his reincarnated father, he cannot be talking with a robot, but with a real, living human being that he has been yearning to speak to for forty long years.

Ray Kurzweil believes that will happen within a couple of decades.”•

________________________________

The 17-year-old Kurzweil in 1965 on I’ve Got a Secret:


We're all irreplaceable, each of us, but few more so than the singer-songwriter Pete Seeger, whose death feels like the actual end of the twentieth century, so many of that era's struggles and triumphs burned into his flesh. He was really American and completely foreign. Not a bad thing to be.

An episode of his lo-fi 1960s TV odyssey, Rainbow Quest.

From Jennie Rothenberg Gritz in the Atlantic, writing about Rainbow Quest:

“For a brief period in the mid-1960s, Seeger hosted his own program on the ‘magic screen.’ The show was called Rainbow Quest (named after a line in one of Seeger’s songs). Despite the colorful title, it was filmed in black and white, in a New Jersey studio with no audience, and broadcast over a Spanish-language UHF station. Seeger’s wife, Toshi, was listed in the credits as ‘Chief Cook and Bottle Washer.’

Even with this bare-bones production, Seeger clearly found the new medium disorienting. ‘You know, I'm like a blind man, looking out through this little magic screen,’ he said at the start of the first episode, gazing awkwardly into the camera. ‘And I—I don't know if you see me. I know I can't see you.’ Over the next 10 minutes, he alternated between noodling gorgeously on his banjo and explaining his distrust of the ‘little box’ that sat in every American living room, killing ambition, romance, and human interaction.

But then he started talking about Huddie Ledbetter and giving his invisible audience an impromptu 12-string guitar lesson. And then the Clancy Brothers showed up in their big woolly sweaters and performed a rousing set of Irish tunes. At that point, Seeger seemed to settle into his comfort zone—a state of natural curiosity and delight.”


Some athletes respond overwhelmingly to exercise and training that results in only modest gains for others. It's also likely that some of us have a genetic predisposition to actually doing the work necessary to excel, and while just showing up probably isn't quite 80% of success, as Woody Allen once opined, it is really important. We truly are programmed, though thankfully in complicated and mysterious ways. From Bruce Grierson at Pacific Standard:

“To a certain kind of sports fan—the sort with a Ph.D. in physiology—Olga Kotelko is just about the most interesting athlete in the world. A track and field amateur from Vancouver, Canada, Kotelko has no peer when it comes to the javelin, the long jump, and the 100-meter dash (to name just a few of the 11 events she has competed in avidly for 18 years). And that’s only partly because peers in her age bracket tend overwhelmingly to avoid athletic throwing and jumping events. Kotelko, you see, is 94 years old.

Scientists want to know what’s different about Olga Kotelko. Many people assume she simply won the genetic lottery—end of story. But in some ways that appears not to be true. Some athletes carry genetic variants that make them highly ‘trainable,’ acutely responsive to aerobic exercise. Kotelko doesn’t have many of them. Some people have genes that let them lose weight easily on a workout regime. Kotelko doesn’t.

Olga’s DNA instead may help her out in a subtler way. There’s increasing evidence that the will to work out is partly genetically determined.”


One of the neat fictional things that Gene Roddenberry dreamed up, the holodeck, might actually be coming to our living rooms soon, so that we can be completely drenched by even more entertainment, until it’s oozing from every last pore. Because we’re all children now who have to be amused every last fucking second. Everybody is excited about a holodeck potentially bringing us even more diversions. Well, not everybody. Starving children and colorectal cancer patients probably don’t care. But they have perspective, so they don’t count. From Nick Bilton in the New York Times:

“This is all part of a quest by computer companies, Hollywood and video game makers to move entertainment closer to reality — or at least a computer-generated version of reality. Rather than simply watch movies, the thinking goes, we could become part of the story. We could see people and things moving around our living rooms. The actors could talk to us. Gamers who today slouch on the couch could step inside their games. They could pick up a computer-simulated bat in computer-simulated Yankee Stadium while a computer-simulated crowd roared around them.

‘The holodeck is something we’ve been fixated on here for a number of years as a future target experience that would be truly immersive,’ said Phil Rogers, a corporate fellow at Advanced Micro Devices, the computer chip maker. ‘Ten years ago, it seemed like a dream. Now, it feels within reach.'”


It isn't often that a corporate acquisition results in the acquirer setting up an ethics board to govern the work that will result from the collaboration. But that's what's apparently happened with Google's purchase of DeepMind Technologies. From Jason Inofuentes at Ars Technica:

London-based DeepMind was founded in 2010, and it has brought together some of the preeminent researchers in deep learning. The company has a staff of 50-75, with 30 PhDs in a particular subset of machine learning called ‘deep learning,’ the development of algorithms that allow machines to learn as humans do. Deep learning models eschew pre-scripted forms of artificial intelligence and instead rely on experiential learning based on rudimentary capabilities. The models require vast server networks and can be broadly applied to any problem that requires advanced pattern recognition.

DeepMind’s well-funded work hasn’t yielded any commercial products, but a recent paper (PDF) demonstrates how far the company has come. In the paper, DeepMind’s researchers describe a neuronal network that was able to learn how to play Breakout, the Atari 2600 game. …

The DeepMind purchase price seems to be up for debate, but The Information is reporting an interesting non-financial wrinkle to the deal: an ethics board will have the authority to determine how Google is allowed to implement artificial intelligence research. DeepMind reportedly insisted on the board’s establishment before reaching a deal.”
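The “experiential learning” described above boils down to trial, error and reward. DeepMind's Breakout result used deep convolutional networks reading raw screen pixels, which is well beyond a blog post, but the underlying reinforcement-learning loop can be sketched in a few lines. A toy example of my own, tabular Q-learning on a five-cell corridor, offered only as an illustration of the general idea and not as DeepMind's actual method:

```python
import random

N_STATES = 5             # corridor cells 0..4; reaching cell 4 earns the only reward
ACTIONS = (-1, +1)       # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's running estimate of long-run reward for each (state, action)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit what has been learned, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # the core update: nudge the estimate toward reward plus discounted future value
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy: after training, the agent should always choose to step right (+1).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

After a few hundred episodes of stumbling around, the table alone tells the agent to keep stepping right; the Atari work is roughly the same loop with a neural network standing in for the table, plus a great deal of engineering.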


Commenting on experience and loss made me think of Wim Klein (known alternately as Willem), the human calculator. A rare breed, lightning calculators had always been employed by sideshows and dime museums, and Klein worked French circuses (under the names “Pascal” and “Willy Wortel”), a curiosity with amazing mathematical abilities. He ultimately left the big top to become a human calculator at CERN in the late 1950s. In addition to his miraculous mental abilities, Klein was fascinating because his amazing tent-show talents ran up against the Computer Age, a time he could not navigate, and one that overwhelmed his gifts. Klein retired in 1976, just as personal computers began pushing their way into homes. He was subsequently killed, violently and mysteriously. From a memory of Klein by his friend Frans Cerulus:

So one day arrived Wim Klein, introduced by a note from Director-General Bakker. Professor Bakker wrote that Mr Klein had been recommended by the director of the Zeeman laboratory in Amsterdam as a remarkable calculator.

I was charged with examining Mr Klein's abilities; such jobs usually fall to the younger member of a team, and in addition I spoke Dutch. He needed no desk calculator and performed exceedingly well, outpacing even my own desk calculator in multiplication and division. I wanted to test something more complicated, took the table prepared by the British ladies and asked Klein to calculate a line, with my eye on my watch.

He came up with the result in a minute: his number did not agree with the table. This made him nervous, he did it all over and obtained the same result, getting red in the face. I then sat down and did the calculations twice on my desk calculator, which took me about ten minutes: Klein was right, the ladies had made an error! Klein was appointed.

The next job he did for me was ideally suited for him. I needed tables of combinations of so-called Clebsch-Gordan coefficients. Such coefficients are really fractions which can become quite complicated: there existed tables where the values of the coefficients had been tabled as decimal numbers, e.g. 0.92308. But I needed the explicit form, with the numerator and denominator as whole numbers. Normally this would have required doing the computation all over. But for Wim – by then I could call him Wim – it was just play to find that this was just 12/13. He told me part of his secrets: he was gifted with an extraordinary memory for numbers; he could remember a row of 50 digits given him an hour earlier. He kept in his head the multiplication tables up to one hundred and all the logarithms from 2 to 100; in addition he knew the standard interpolation rules.

Wim was very reliable, except perhaps on Monday morning. But then we had that most remarkable secretary, Tatiana Fabergé, who had to type out the tables and spotted any unusual number in the row.

In later years Wim became rather unhappy: there were electronic computers and the demands involved such complications that he could no longer cope and had to learn the basics of computer programming. The moments he could again become the entertainer and show off his extraordinary feats of mental calculation were then moments of happiness for him.•


Just read psychologist Adam Alter's 2013 book, Drunk Tank Pink, which I really enjoyed even if some of the historical material he presents is well-worn. (Oh, and even though I think the connection the author draws between Usain Bolt's surname and his career success is overstated. Jamaican steering committees responded to those fast-twitch leg muscles and stop-watch times, not his thunderous “title.”) A brief passage in the section about social isolation discusses the experience of French speleologist Michel Siffre, who insinuated himself into the Space Race in the 1960s by conducting extreme self-deprivation experiments, in an attempt to anticipate how such conditions would affect astronauts. In 1962, Siffre lived within the solitude of an underground glacier to test the effect on his mental faculties. The following decade, he spent 205 days alone in a Texas cave. A Cousteau who does not get wet, Siffre has dived so deep inside of himself that time has seemed to cease.

Alter’s writing reminded me of a 2008 Cabinet interview that Joshua Foer, that memory enthusiast, conducted with the time-isolation explorer. The opening:

Joshua Foer:

In 1962, you were just twenty-three years old. What made you decide to live underground in complete isolation for sixty-three days?

Michel Siffre:

You have to understand, I was a geologist by training. In 1961, we discovered an underground glacier in the Alps, about seventy kilometers from Nice. At first, my idea was to prepare a geological expedition, and to spend about fifteen days underground studying the glacier, but a couple of months later, I said to myself, “Well, fifteen days is not enough. I shall see nothing.” So, I decided to stay two months. And then this idea came to me—this idea that became the idea of my life. I decided to live like an animal, without a watch, in the dark, without knowing the time.

Joshua Foer:

Instead of studying caves, you ended up studying time.

Michel Siffre:

Yes, I invented a simple scientific protocol. I put a team at the entrance of the cave. I decided I would call them when I woke up, when I ate, and just before I went to sleep. My team didn’t have the right to call me, so that I wouldn’t have any idea what time it was on the outside. Without knowing it, I had created the field of human chronobiology. Long before, in 1922, it had been discovered that rats have an internal biological clock. My experiment showed that humans, like lower mammals, have a body clock as well.”


Everybody needs to learn computer coding because why? As someone who learned a fair amount of HTML in the late '90s only to not need to know any of it, I can't make sense of the analogy being drawn between book literacy and coding literacy. If we are all “coders” in the future, it will be a very different thing and a simpler process that produces more complex answers. From NPR's “Computers Are the Future, but Does Everyone Need to Code?”:

“Some people aren’t so enthusiastic about all the pro-coding rhetoric. ‘Reading and writing are hard; the basics are hard,’ says software developer Jeff Atwood. ‘And now we’re telling people you have to learn this programming too, or else the robots are going to get you.’

Atwood started making his own video games as a 12-year-old back in the ’80s. Now he runs a coding blog and a set of websites to help people with programming. But he remembers a time not so long ago when computers weren’t at all intuitive.

‘When I got my first computer in the mid-80s, when you turned it on, what you got was a giant, blinking cursor on the screen — that was the boot up,’ he recalls. ‘It wasn’t like turning on an iPad where you have a screen full of apps and you start doing things. … When I hear: ‘Everyone must learn to program,’ what I hear is: We’re going back in time to a place where you have to be a programmer to do things on the computer.’

Atwood thinks that’s going backward. He’s glad that people don’t have to be computer whizzes anymore just to be able to use a computer. He thinks that if computers aren’t your thing, then it’s OK to let the programmers make life easier for you.

‘It’s sort of like an obsession with being an auto mechanic,’ he says. ‘There are tons of cars, there’s tons of driving … but I think it’s a little crazy to go around saying everyone should really learn to be an auto mechanic because cars are so essential to the functioning of our society. Should you know how to change oil? Absolutely. There are [also] basic things you should know when you use a computer. But this whole ‘become an auto mechanic’ thing? It’s just really not for everyone.'”

