Science/Tech

I’ve never watched a single episode of Law & Order. Never. Seems impossible, I know. But artist Jeff Thompson did something amazing with the long-running series’ 456 episodes, documenting every appearance of a computer in the history of the show, which premiered, as did the World Wide Web, in 1990. In doing so, Thompson charted the program’s unintentional chronicling of a society in transformation. From Rebecca J. Rosen at the Atlantic:

“…Most of the technology on the show seems to have come as an afterthought. ‘No one was probably thinking about, you know, what kind of mouse should we use, or where should it go in the room,’ says Thompson. They just represented whatever was the norm of the time, and, in doing so, documented details of computer history that perhaps no one at the time could have articulated—details that were so commonplace they went totally unnoticed.

For instance, when computers appear on Law & Order in the early ’90s they are often not on. Who at the time would have said, ‘We have these new machines in the office. We only turn them on when we need to use them, and they are off the rest of the time.’ The fact that computers tended to be off is only noticeable in light of today’s habit of leaving them on, even during a task that is not specifically on a computer (which may not even happen that often anyway). People’s work-streams were not computer-based, and computers only were booted up for a specific task.

Another shift Thompson noticed is that over time, computers attained more prominent physical locations within a room. Early on, computers tended to be off to the side, on a specialized desk, perhaps for many people to share, using it for one specific task. If a character had his or her own computer, it would be located on a separate table behind his or her desk, not on the desk itself. It’s not until 1995 that the first computer makes the leap from behind the desk to its central ‘desktop’ position we all are so familiar with today.”

“The enormous crowd…gave the aeronaut a tremendous ovation.”

Alberto Santos-Dumont, a Brazilian aviation pioneer whose chosen vocation was influenced by early reading of Jules Verne, answered a challenge in 1901 to travel around the Eiffel Tower in his airship within 30 minutes. From a report that year in the October 20th New York Times:

“Paris–Santos-Dumont, who rounded the Eiffel Tower to-day in his airship, started for the first time at 2:29, but on leaving the park his guide rope caught in a tree and he was obliged to descend. He started again at 2:42 P.M., rose 250 yards and then pointed for the Eiffel Tower, the balloon going in a straight line.

It was seen, through field glasses, to arrive at the tower and round it. The time, up to that point, with the wind in the balloon’s favor, was eight minutes and forty-five seconds. It returned against the wind and made slower headway, but still kept in true direction for St. Cloud, which it reached in the total time of twenty-nine minutes, fifteen seconds.

But, instead of descending immediately, Santos-Dumont made a broad sweep over the Aero Club grounds, with the result that another minute and twenty-five seconds were consumed before the workmen seized the guide rope. Thus, technically, Santos-Dumont exceeded the time limit by forty seconds.

The enormous crowd which had gathered inside and outside the grounds gave the aeronaut a tremendous ovation. As his basket came within speaking distance, Santos-Dumont leaned over the side and asked:

‘Have I won the prize?’

Hundreds of spectators shouted: ‘Yes! Yes!’ But the Count de Dion, a member of the committee, approached and threw a damper on the enthusiasm by saying:

‘My friend, you have lost the prize by forty seconds.’

The crowd, however, refused to accept this view and a warm discussion ensued, the majority of the spectators taking the ground that Santos-Dumont was entitled to the prize.

The aeronaut, after protesting against the decision of the committee, finally shrugged his shoulders and remarked:

‘I do not care personally for the 100,000 francs. I intended to give it to the poor.’

A number of ladies who were present threw flowers over the aeronaut, others offered him bouquets, and one admirer, to the amusement of the onlookers, even presented him with a little white rabbit.”

Usain Bolt runs really fast, for a human. Slow for a sheep.

Similarly, humans play chess really well, for humans, but we’re inferior once the playing field widens to admit AI. That means we must redefine the way we view our role in the world.

Before the shift was complete and computers became our partners–our betters, in some unnerving ways–Garry Kasparov still had a fighting chance and so did you and I. At the end of the 1980s, the chess champion was able to stave off the onslaught, if only for a little while longer, when challenged by Deep Thought. From an article by Harold C. Schonberg in the October 23, 1989 New York Times:

“Yesterday Garry Kasparov, the world chess champion, played Deep Thought, the world computer chess champion, in a two-game match. He won both games handily, to nobody’s surprise, including his own.

Two hours before the start of the first game, held at the New York Academy of Art at 419 Lafayette Street in Manhattan, he held a conference for some 75 journalists representing news organizations all over the world. They were attracted to the event because of the possibility of an upset and the philosophical problems an upset would cause. Deep Thought, after all, has recently been beating grandmasters. Does this mean that the era of human chess supremacy is drawing to a close?

Yes, in the opinion of computer and chess experts.

The time is rapidly coming, all believe, when chess computers will be operating with a precision, rapidity and completeness of information that will far eclipse anything the human mind can do. In three to five years, Deep Thought will be succeeded by a computer with a thousand times its strength and rapidity. And computers scanning a million million positions a second are less than 10 years away. As for the creativity, intuition and brilliance of the great players, chess computers have already demonstrated that they can dream up moves that make even professionals gasp with admiration. It may be necessary to hold championship chess matches for computers and separate ones for humans. …

Mr. Kasparov, unlike many of the experts, was even doubtful that a computer could ever play with the imagination and creativity of a human, though he did look ahead to the next generation of computers and shuddered at what might be coming. Deep Thought can scan 720,000 positions a second. The creators of Deep Thought have developed plans for a machine that can scan a billion positions a second, and it may be ready in five years.

‘That means,’ grinned Mr. Kasparov, ‘that I can be champion for five more years.’ More seriously, he continued: ‘But I can’t visualize living with the knowledge that a computer is stronger than the human mind. I had to challenge Deep Thought for this match, to protect the human race.’”

_______________________________

“What you have here is the phenomenon of how we define ourselves in relationship to the machine”:

“This Guy Wants To Help You Download Your Brain” is Monica Heisey’s Vice interview with Russell Hanson, who’d like you to back up your most important files, making a copy of your gray matter in case of accident or virus. An excerpt:

Question:

So once a brain has been imaged, can you effectively play back that information, like a tape?

Russell Hanson:

A single snapshot is a static image, so you can’t play something back that doesn’t have a time series associated with it. Conceivably, you could ‘rewind’ just as you can peer back in time into your memories. The way different people access different pieces of their memories is hierarchical and everything is built upon prior experience, so you would have to build a special kind of ‘relative knowledge engine’ that needs to construct the mechanism of accessing the memories for each person individually. Research has shown that the brain is very poor at telling wall-clock time, and is affected by all sorts of things, like whether we caused an event or not. So no—you can’t really ‘play back’ the information in the kind of frame-by-frame or second-by-second manner we’re used to with audio or visual recordings.

Question:

The connectome, from my understanding, is simply the documentation of connections, but provides no information about what is being passed between neurons at these points. If you can’t play back or otherwise access the information in your brain, what’s the use to the average person of having a map of their brain’s pathways?

Russell Hanson:

The goal of the work is to build the infrastructure to make this data usable and interesting. It is pretty clear that having the brain map is a necessary first component to ‘playing back’ or ‘running’ a meaningful dynamical simulation of a brain, whether it’s a mouse, fly, or human. We decided to tackle this engineering challenge first before the other one—that’s being worked on by other very capable groups. In its simplest form, this research will surely inform treatments for devastating diseases like Alzheimer’s, Parkinson’s, autism, depression, and others—research that the governmental funding agencies have a long history of supporting.”

If history has taught us anything, it’s that trying to hold off the future does not work. The Industrial Revolution was frightening, but countries that resisted (or lacked the wherewithal to join) were left behind. The rest of us grew richer (if unevenly). The same is true of the Computer Age, with all its challenges. Avoid it at your peril. People argue against technology until the technology gets so good that the argument is silenced. Better to try to “reform from within” than to smash the machines. From a new Economist article about the teaching of mathematics:

“Maths education has been a battlefield before: the American ‘math wars’ of the 1980s pitted traditionalists, who emphasised fluency in pen-and-paper calculations, against reformers led by the country’s biggest teaching lobby, who put real-world problem-solving, often with the help of calculators, at the centre of the curriculum. A backlash followed as parents and academics worried that the ‘new math’ left pupils ill-prepared for university courses in mathematics and the sciences. But as many countries have since found, training pupils to ace exams is not the same as equipping them to use their hard-won knowledge in work and life.

Today’s reformers think new technology renders this old argument redundant. They include Conrad Wolfram, who worked on Mathematica, a program which allows users to solve equations, visualise mathematical functions and much more. He argues that computers make rote procedures, such as long division, obsolete. ‘If it is high-level problem-solving and critical thinking we’re after, there’s not much in evidence in a lot of curriculums,’ he says.”

Even if Broadway disappeared, there’d always be theater. We need stories. The same is true of newspapers and journalism. Reportage is rarely what we’d like it to be–and those reading the news are rarely ideal themselves–but the process will always march on. Via Johana Bhuiyan and Nicole Levy at Capital New York, some predictions for the near-term future of news from Marc Andreessen:

  • The news market will expand by 2020 to about 5 billion people worldwide, consuming mostly on mobile devices.
  • News organizations that report broadly at an extremely gross scale, or very deeply at a small scale, will thrive on the new eyeballs.
  • Advertising remains the best way to make newsgathering profitable but publishers need to take responsibility for the quality of their ads, or work with partners in the ad tech field who do.
  • Subscription models, as well as conferences and events, are here to stay: “Many consumers pay $ for things they value much of the time. If they’re unwilling to pay, ask Q, are they really valuing?”

Detroit had RoboCop while Pittsburgh had actual robots. Guess which erstwhile industrial giant remade itself in post-manufacturing America and which one fell into complete disrepair? From “The Robots That Saved Pittsburgh,” by Glenn Thrush at Politico:

“‘Roboburgh,’ the boosterish moniker conferred on the city by the Wall Street Journal in 1999 and cited endlessly in Pittsburgh’s marketing materials ever since, may have been premature back then, but it isn’t now: Pittsburgh, after decades of trying to remake itself, today really does have a new economy, rooted in the city’s rapidly growing robotic, artificial intelligence, health technology, advanced manufacturing and software industries. It’s growing in population for the first time since the 1950s, and now features regularly in lists like ‘the Hottest Cities of the Future’ and ‘Best Cities for Working Mothers.’ ‘The city is sort of in a sweet spot,’ says Sanjiv Singh, a [Red] Whittaker acolyte at Carnegie Mellon who is working on the first-of-its-kind pilotless medical evacuation helicopter for the Marines. ‘It has the critical mass of talent you need, it’s still pretty affordable and it has corporate memory—the people here still remember when the place was an industrial powerhouse.’

Improbably for a blue-collar town that seemed headed for the scrap heap when its steel industry collapsed, Pittsburgh has developed into one of the country’s most vibrant tech centers, a hotbed of innovation that can no longer be ignored by the industry’s titans. Carnegie Mellon is Google’s biggest rival in the race to build a driverless car, partnering with GM to build a robot Cadillac that has been humanlessly tooling around Route 19, just outside city limits. In 2011, Google opened a posh, 40,000-square-foot office in an old Nabisco factory in the city’s East Liberty neighborhood, ramping up last year to 350 people, with more on the way. Bill Gates and other Silicon Valley moguls have invested millions of dollars in Aquion Energy, a start-up spun out of CMU that is developing next-generation batteries and producing them in nearby Westmoreland County, not China. Apple, RAND and Intel also have outposts in town and Disney, which has tapped the university’s computer and robotics talent for years, is partnering with the school to improve cinematic graphics and to develop humanoid robots that can gently hand objects to people by predicting the movement around them. All told, Pittsburgh’s tech and education sectors now account for some 80 percent of the high-wage jobs in the city, and robots are just the most visible piece of this miraculous turnaround of a city on the brink.”

Arthur Chu has recently won money and attention for doing something on Jeopardy! that even Watson didn’t attempt: gaming the system–hacking it, in a sense. IBM’s robot beat the humans the old-fashioned way, never using out-of-the-ordinary strategy to circumvent the spirit of the game or disrupt it. Contestant Chu has done just that. For instance: He pursues Daily Doubles he knows he can’t answer, betting a minuscule amount on them, absorbing a nominal loss and scrubbing them off the board so that the other players can’t gain from them. And that’s just one of his “tricks,” all legal if unorthodox, intended to break not the rules but the bank.

From “Meet the Man Who Hacked Jeopardy,” Jason Schreier’s Kotaku interview with a champion for the search-engine age:

“So what makes Chu so unusual? While most players will start from the top of each column on the Jeopardy board and progress sequentially as question difficulty increases, Chu picks questions at random, using what’s called the Forrest Bounce to hunt for the three Daily Doubles, which are often scattered among the harder questions in every game. Instead of moving from the $200 question to the $400 question and so forth, Chu might bounce between all of the $1,600 or $2,000 questions—not the kind of strategy you often see on Jeopardy.

Chu does this for two reasons. For one, it throws everyone off balance. ‘It’s a lot more mentally tiring to have to jump around the board like that,’ Chu told me.

More importantly, snagging those Daily Doubles offers him a massive statistical advantage. Since Daily Doubles allow players to bet up to their entire bankrolls, just one can swing an entire Jeopardy match—and Chu’s strategy is to control them all, even just to prevent other players from using them.

‘The only chance you have to give yourself an edge—the only moment of power, or choice you have in Jeopardy is choosing the next clue if you got the last one right,’ Chu said.”
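The expected-value logic behind that Daily Double denial is easy to sketch. In the toy calculation below, every number (a rival’s answer accuracy, the rival’s wager, Chu’s token bet) is invented for illustration and is not drawn from actual Jeopardy! statistics; the point is only the asymmetry the excerpt describes, that taking a clue off the board costs almost nothing while leaving it available hands an opponent a large expected swing.

```python
# Toy expected-value sketch of the deny-the-Daily-Double tactic.
# All figures are hypothetical, chosen only to illustrate the asymmetry.

OPP_ACCURACY = 0.65   # assumed chance a rival answers a Daily Double correctly
OPP_WAGER = 5000      # assumed aggressive "true Daily Double"-style wager
DENY_WAGER = 5        # Chu-style token bet on a clue he can't answer

# Expected change in the rival's score if the rival lands the Daily Double:
rival_ev = OPP_ACCURACY * OPP_WAGER - (1 - OPP_ACCURACY) * OPP_WAGER

# Worst-case cost to Chu of finding the clue first and sacrificing a token bet:
deny_cost = DENY_WAGER

print(f"Rival's expected gain from the Daily Double: ${rival_ev:,.0f}")
print(f"Chu's cost to take it off the board:         ${deny_cost}")
```

Under these made-up numbers, a few dollars of deliberate loss removes roughly $1,500 of expected swing from a rival, which is why hunting the Daily Doubles dominates moving down the board in order.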

At the New Yorker blog, John Cassidy tries to handicap which company will likely be around in 20 or 30 years, Microsoft or Facebook. I suppose my answer is it doesn’t really matter since there’s little chance either will be influential even if they’re still in business at that point. But it’s still a fun exercise. From Cassidy:

“As Brian Arthur, an economist at the Santa Fe Institute, explained many years ago, the technology industry is different from most other businesses, where incumbents, such as Toyota and Hilton, build up franchises that are difficult to dislodge but which don’t take over the entire market. The tech industry, on the other hand, is defined by successive waves of innovation, and it operates more like a long-running lottery, with the prize for each drawing being a temporary monopoly. Microsoft is Microsoft because, back in the eighties, it won the lottery for the operating-system market. Facebook is Facebook because it won the lottery for the social-networking market.

In the technology world, market leaders, generally speaking, don’t get dislodged by competitors who build a better or cheaper version of their product. Eventually, though, they do tend to get displaced by companies that create, or popularize, a new technology that shifts the entire industry in a different direction. (Look at what digital cameras did to Eastman Kodak.) In assessing the long-term prospects of technology firms, the key issue is how vulnerable they are to such waves of creative destruction. And in conducting such an assessment, short-term indicators, such as growth rates and recent movements in stock prices, can be misleading.”

From the June 20, 1922 New York Times:

“PARIS–Dr. Serge Voronoff’s monkey gland experiments have led to the startling discovery that apparently it is possible to transplant all the vital organs of a chimpanzee to human beings.

‘Already I am using four different glands from every chimpanzee received from Africa, notably thyroid glands for weak-minded children and interstitial glands for the rejuvenation of the aged,’ said Dr. Voronoff. ‘All chimpanzee glands which I have transplanted have thrived so well in the human body that I have tried lesser organs, which also are thriving well. I am experimenting now on major organs and I expect to announce soon that a man may have any new organ.’”

Kevin Kelly, one of the tech thinkers I admire most, was recently profiled by the New York Times’ wonderfully dyspeptic David Carr, and now he’s participated in an excellent Q&A at John Brockman’s Edge.org. 

I think if you read this blog with any regularity, you know I believe that legislation won’t control or alter surveillance and snooping, won’t stem the flow of information any more than Prohibition stopped the flow of alcohol. Everybody is drinking; everybody’s drunk. That topic is addressed in the first question of the interview:

Edge:

How can we have a world in which we are all watching each other, and everybody feels happy?

Kevin Kelly:

The question that I’m asking myself is, how far will we share, when are we going to stop sharing, and how far are we going to allow ourselves to monitor and surveil each other in kind of a coveillance? I believe that there’s no end to how much we can track each other—how far we’re going to self-track, how much we’re going to allow companies to track us—so I find it really difficult to believe that there’s going to be a limit to this, and to try to imagine this world in which we are being self-tracked and co-tracked and tracked by governments, and yet accepting of that, is really hard to imagine.

How does this work? How can we have a world in which we are all watching each other, and everybody feels happy? I don’t see any counter force to the forces of surveillance and self-tracking, so I’m trying to listen to what the technology wants, and the technology is suggesting that it wants to be watched. What the Internet does is track, just like what the Internet does is to copy, and you can’t stop copying. You have to go with the copies flowing, and I think the same thing about this technology. It’s suggesting that it wants to monitor, it wants to track, and that you really can’t stop the tracking. So maybe what we have to do is work with this tracking—try to bring symmetry or have areas where there’s no tracking on a temporary basis. I don’t know, but this is the question I’m asking myself: how are we going to live in a world of ubiquitous tracking?”•

I can’t help but feel that Libertarians have a blind spot for the deep immorality embedded into their philosophy. Yet, it’s not like I disagree with everything Libertarian. For instance: I concur with George Mason economist Bryan Caplan that the U.S. embargo of Cuba has been detrimental to both countries. It should be stopped immediately. A few exchanges from Caplan’s Ask Me Anything at Reddit follow.

________________________

Question:

What would happen if we began trading with Cuba again?

Bryan Caplan:

They’d quickly get a lot richer, and we’d get some very nice vacations. In the longer run, the chance that Communism in Cuba would collapse or collapse into mere rhetoric is high.

________________________

Question:

Do you feel that the rise of China is beneficial to the interests of the United States?

Bryan Caplan:

When countries produce cheap stuff to sell us, it is good for us. And rich countries are very rarely militarily aggressive, at least once they’ve been rich for a full generation.

Question:

Is the U.S. a counterexample?

Bryan Caplan:

Not really. Most dominant powers throughout history have been far more aggressive. The U.S. today is scared to lose a few thousand soldiers. Why? Because rich people value their lives. Thankfully!

________________________

Question:

What books have influenced you and your career?

Bryan Caplan:

Atlas Shrugged, For a New Liberty, Economic Sophisms, The Armchair Economist, The Bell Curve, The Myth of Democratic Failure, The Nurture Assumption, and Modern Times. Mike Huemer’s been a massive influence on me, but mostly his articles, especially “Moral Objectivism.”

________________________

Question:

With the drought in Southern California is it possible the state is overpopulated? Meaning we have to halt immigration into the south west?

Bryan Caplan:

No. Just raise the price of water!•
______________________________

In 1959, Ed Sullivan interviews Fidel Castro:

Ray Kurzweil, always looking forward, believing that then is actually now, discusses how the computer, which used to be all the way across campus and is now in our pockets, will soon be within us, like a pacemaker.

A completely preposterous entry into the annals of medical history is this article in the February 6, 1913 New York Times:

“ANN ARBOR, Mich.–The brain of a dog was transferred to a man’s skull at University Hospital here to-day. W.A. Smith of Kalamazoo had been suffering from abscess on the brain, and in a last effort to save his life this remarkable operation was performed.

Opening his skull, the surgeons removed the diseased part of his brain, and in its place substituted the brain of a dog.

Smith was resting comfortably to-night, and the surgeons say he has a good chance to recover.”

Chattanooga, long famed for speeding trains and infamous for unabated pollution, began remaking itself back when the Internet was still the Arpanet by cornering the market on a new type of speed. From “Fast Internet Is Chattanooga’s New Locomotive,” by Edward Wyatt in the New York Times:

“CHATTANOOGA, Tenn. — For thousands of years, Native Americans used the river banks here to cross a gap in the Appalachian Mountains, and trains sped through during the Civil War to connect the eastern and western parts of the Confederacy. In the 21st century, it is the Internet that passes through Chattanooga, and at lightning speed.

‘Gig City,’ as Chattanooga is sometimes called, has what city officials and analysts say was the first and fastest — and now one of the least expensive — high-speed Internet services in the United States. For less than $70 a month, consumers enjoy an ultrahigh-speed fiber-optic connection that transfers data at one gigabit per second. That is 50 times the average speed for homes in the rest of the country, and just as rapid as service in Hong Kong, which has the fastest Internet in the world.

It takes 33 seconds to download a two-hour, high-definition movie in Chattanooga, compared with 25 minutes for those with an average high-speed broadband connection in the rest of the country. Movie downloading, however, may be the network’s least important benefit.

‘It created a catalytic moment here,’ said Sheldon Grizzle, the founder of the Company Lab, which helps start-ups refine their ideas and bring their products to market. ‘The Gig,’ as the taxpayer-owned, fiber-optic network is known, ‘allowed us to attract capital and talent into this community that never would have been here otherwise.’”
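The download figures in the excerpt are easy to sanity-check. The sketch below assumes a two-hour HD movie of roughly 4 GB (the article gives no file size, so that number is an assumption) and takes the article’s claim that the average U.S. connection is about 50 times slower than Chattanooga’s gigabit service:

```python
# Rough check of the article's download times. The ~4 GB movie size is an
# assumed figure for illustration; the 50x speed ratio comes from the article.

MOVIE_BYTES = 4 * 10**9        # hypothetical two-hour HD movie, ~4 GB
GIG_BPS = 10**9                # Chattanooga: 1 gigabit per second
AVG_BPS = GIG_BPS / 50         # article: average US home speed is ~50x slower

def download_seconds(size_bytes: int, bits_per_second: float) -> float:
    """Seconds needed to move size_bytes over a link of the given bit rate."""
    return size_bytes * 8 / bits_per_second

print(f"Gig City: {download_seconds(MOVIE_BYTES, GIG_BPS):.0f} seconds")
print(f"Average:  {download_seconds(MOVIE_BYTES, AVG_BPS) / 60:.0f} minutes")
```

At 1 Gbps the hypothetical file takes about 32 seconds, and at one-fiftieth of that speed about 27 minutes, close to the article’s 33 seconds and 25 minutes, so the reported numbers are internally consistent.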

Facebook, that thing that helps you pretend, turns ten today, which seems an appropriate age for the site’s maturity level. The opening of the first article written about the social network, a piece by Alan J. Tabak published in the February 9, 2004 Harvard Crimson:

“When Mark E. Zuckerberg ’06 grew impatient with the creation of an official universal Harvard facebook, he decided to take matters into his own hands.

After about a week of coding, Zuckerberg launched thefacebook.com last Wednesday afternoon. The website combines elements of a standard House face book with extensive profile features that allow students to search for others in their courses, social organizations and Houses.

‘Everyone’s been talking a lot about a universal face book within Harvard,’ Zuckerberg said. ‘I think it’s kind of silly that it would take the University a couple of years to get around to it. I can do it better than they can, and I can do it in a week.’

As of yesterday afternoon, Zuckerberg said, over 650 students had registered to use thefacebook.com. He said that he anticipated that 900 students would have joined the site by this morning.

‘I’m pretty happy with the amount of people that have been to it so far,’ he said. ‘The nature of the site is that each user’s experience improves if they can get their friends to join it.'”

I don’t really think it’s necessary to build a faux downtown to give driverless cars a test run, but that’s the plan in Michigan. It just seems an unnecessary intermediate step considering that Google already has its models on real roads. From John Gallagher at the Detroit Free Press:

“As vehicles learn to drive themselves minus human control, the place they’ll learn is on the University of Michigan’s north campus in Ann Arbor.

Today, Gov. Rick Snyder and other officials touted a new $6.5-million, 32-acre site to be built on U-M’s north campus as a test center for technologies for autonomous, or self-driving, vehicles.

Peter Sweatman, director of U-M’s Transportation Research Institute, said this fake downtown will feature building facades, parked vehicles, traffic signals, a tunnel, bicycle lanes and other realistic elements of an actual Michigan streetscape.

The idea, Sweatman said, is to test self-driving technology in realistic conditions that can be measured and controlled with precision.

‘The future of the automotive industry is connected and automated, and we’re going to create that future right here in Michigan,’ Sweatman said during a news conference with Snyder at the North American International Auto Show.”

Using invasiveness to battle criminality, a Google Glass app allows you to scan a stranger’s eyes and know within moments whether that person has been registered in a sex-offender database. Of course, the greater moral question will come when an app can look into eyes and determine what that person might be about to do, not only what they’ve done in the past. From “Through a Face Scanner Darkly,” by Betsy Morais at the New Yorker blog:

“Anonymity forms a protective casing. When it’s punctured, on the street or at a party, the moment of recognition falls somewhere on a spectrum of delight and horror. Soon enough, though, technology will see to it that we can no longer expect to disappear into a landscape of passing faces.

NameTag, an app built for Google Glass by a company called FacialNetwork.com, offers a face scanner for encounters with strangers. You see somebody on the sidewalk and, slipping on your high-tech spectacles, select the app. Snap a photo of a passerby, then wait a minute as the image is sent up to the company’s database and a match is hunted down. The results load in front of your left eye, a selection of personal details that might include someone’s name, occupation, Facebook and/or Twitter profile, and, conveniently, whether there’s a corresponding entry in the national sex-offender registry.”

The path to wearables has been a tortured one, with failures keeping pace with promises. You would assume that if we could shrink the pacemaker from the size of a car to a tiny implantable device, embedding tech items into clothes and accoutrements would be a snap–but that hasn’t been the case. Perhaps things will change. From “The Future of Wearables,” by Shara Tibken at CNet:

“Look for completely different products to emerge. Health care is an area that could see a surge in wearables. We’ll also see more wearables for pets, such as new activity and biometrics trackers, as well as toys.

There will also be other types of devices that extend the capabilities of the smartphone or allow for social interaction, like a ring that lights up when a loved one taps the other half of the matching pair.

Another big area is clothing. For instance, manufacturers are working on smart buttons that could change the color of a fabric when pushed or buttons and fabric that could measure UV exposure in sports equipment.

‘This year we’re hoping to see the beginning of the wearables market showing its diversity,’ said Robert Thompson, business development leader for Freescale’s i.MX application processor line.”

You know the story about the Paris-based celebrity doctor who liked to prescribe sauerkraut, was Alexandre Dumas’ personal physician and kept a vicious pet monkey? No? Well, here it is, courtesy of the December 18, 1898 edition of the Brooklyn Daily Eagle:

“Paris Bureau–All capitals contain so great a number of eccentric people that if we knew them all, we would still more readily come to the conclusion that there are more mad people outside than inside insane asylums.

It is probable that Paris does not contain as many as London, for it is known that for oddity and originality the English have the precedence; but such specimens as Dr. Gruby show that if the number is not as large in Paris as in London, they, at least, are quite as capable of doing as eccentric things and leading as eccentric lives.

Dr. Gruby was a physician who possessed all of the necessary diplomas, but he was called a healer. This country, like all other countries, in fact, is flooded with healers. Legitimate doctors do all in their power to bring them into disfavor, but vox populi is vox dei, and the more eccentric the healer seems to be and the more extraordinary his cures appear to the patients, the more they knock at his door to be healed.

“Alexandre Dumas would have no other doctor.”

There is not a French celebrity of any kind, within the last forty years, who, afflicted with any serious illness, has not gone to Dr. Gruby, and who was not dumbfounded when the healer prescribed carrots, sauerkraut or some other unheard of medicament with the grave countenance of a doctor who writes down the most complicated mixtures in an incomprehensible page of Latin words.

But faith was there. Had the healer not made the most remarkable cures? Were not such men as Alexandre Dumas and Ambroise Thomas there to testify that whatever surprising things the healer gave, they, one and all, were benefited by it?

He did not reserve all his oddities for his patients; he kept a great number for his own actions and behavior. One of them was that he never wanted to appear but in the best of health to all humanity, his servants included. He died at the age of 80, behind a locked door. He did not even admit his servants during his last two days of agony. He died in a dark room, without a streak of light, for he feared some curious eye might see him in the throes of death. At last the scared servants had the door forced open by the commissaire de police and they found but a cold corpse. The healer had drawn his last breath about twelve hours before.

Not so long ago, Mme. Ambroise Thomas was asked to tell us some eccentricities of the doctor. “Alexandre Dumas would have no other doctor, and for a long while, by the orders of Dr. Gruby, Dumas would start off on a morning constitutional with four apples in his pocket. The orders were to walk from the Avenue de Villiers to the Arch of Triumph and there stop to eat an apple; then to start again and walk to the Place de la Concorde, and stop there and eat another apple. He was to return to the Arch and eat his third apple, and take the fourth before his own door and have the last bite in his mouth before he crossed the threshold.

“And Dr. Gruby’s servants were allowed to be visible only at certain hours. He was passionately fond of animals and plants. He had dogs and cats and for a long time possessed a vicious monkey whom he called his brother, and who bit several of his friends.”•


Whether we’re talking about governments and corporations spying on individuals or citizens leaking classified documents, I think the main problem isn’t that legislation hasn’t yet caught up to technology, but that it can’t and won’t. When information is so easy to intercept, when you can download Deep Throat, when everyone can be proven guilty, what will the new morality be?

A few differences between Ellsberg’s Pentagon Papers leak and Assange, Manning and Snowden, from “The Three Leakers and What to Do About Them,” by David Cole at the New York Review of Books:

“First, unlike Nixon, Obama did not attempt to prohibit the publication of any of Snowden’s or Manning’s leaks. The Pentagon Papers case, thanks in part to Goodale’s own arguments before the courts, established an extraordinarily high legal bar for enjoining publication, and that bar holds today. For many of the justices in the Pentagon Papers case, however, that bar applied only to ‘prior restraints’—requests to prohibit publication altogether—and would not apply to after-the-fact criminal prosecutions of leakers. While the Times was not prosecuted, Ellsberg was, and his case was dismissed not on First Amendment grounds, but on the basis of prosecutorial misconduct.

Second, the digital age has profoundly altered the dynamics and stakes of leaks. Computers make stealing documents much more efficient. Ellsberg had to spend months manually photocopying the Pentagon Papers. Manning used his computer to download over 700,000 documents, and Snowden apparently stole even more. The Internet makes disclosures across national borders much easier. Manning uploaded his documents directly to WikiLeaks’ website, hosted in Sweden, far beyond US reach. Snowden gave access to his documents to journalists in Germany, Brazil, and the US, and they have in turn published them in newspapers throughout the world.

Third, computers and the Internet have at the same time made it easier to identify and prosecute leakers. When someone leaked the fact that the US had placed an agent inside an active al-Qaeda cell in May 2012, an entirely unjustifiable disclosure, the Justice Department spent eight months investigating the old-fashioned way, interviewing over 550 people without success. But when the prosecutors subpoenaed phone records of the Associated Press offices and reporters involved in publishing the story, they promptly identified the leaker, an FBI agent, and obtained a guilty plea.”


I think of the era in America between the one wallpapered with newsprint (pre-1960) and the one given to smartphone updates (today), that time when TV news was predominant, as an age of delusion. That was when Newt Gingrich’s word games could work, when a mug shot of Willie Horton could win. It was an age of bullshit and manipulation. Why, an actor playing a part could become President, aided by Hallmark Card-level writers.

You’re free to feel less than sanguine about the transition, about the financial metrics of newsgathering and the threat it poses to less-profitable but vital journalism (as I sometimes am), but I will take the deluge of information we get now over centralized media, when far fewer had far greater control of the flow. People seem to get bamboozled much less now. Let it rain, I say. Let it pour. Let us swim together in the flood.

Anyhow, we always romanticized the wrong part of the newspaper. It wasn’t great because of the print. I mean, what’s so important about a lousy, crummy newspaper?

From “The Golden Age of Journalism?” a wonderful TomDispatch essay by Tom Engelhardt about the downfall of one type of news and the thing that has supplanted it:

In so many ways, it’s been, and continues to be, a sad, even horrific, tale of loss. (A similar tale of woe involves the printed book. Its only advantage: there were no ads to flee the premises, but it suffered nonetheless — already largely crowded out of the newspaper as a non-revenue producer and out of consciousness by a blitz of new ways of reading and being entertained. And I say that as someone who has spent most of his life as an editor of print books.) The keening and mourning about the fall of print journalism has gone on for years. It’s a development that represents — depending on who’s telling the story — the end of an age, the fall of all standards, or the loss of civic spirit and the sort of investigative coverage that might keep a few more politicians and corporate heads honest, and so forth and so on.

Let’s admit that the sins of the Internet are legion and well-known: the massive programs of government surveillance it enables; the corporate surveillance it ensures; the loss of privacy it encourages; the flamers and trolls it births; the conspiracy theorists, angry men, and strange characters to whom it gives a seemingly endless moment in the sun; and the way, among other things, it tends to sort like and like together in a self-reinforcing loop of opinion. Yes, yes, it’s all true, all unnerving, all terrible.

As the editor of TomDispatch.com, I’ve spent the last decade-plus plunged into just that world, often with people half my age or younger. I don’t tweet. I don’t have a Kindle or the equivalent. I don’t even have a smart phone or a tablet of any sort. When something — anything — goes wrong with my computer I feel like a doomed figure in an alien universe, wish for the last machine I understood (a typewriter), and then throw myself on the mercy of my daughter.

I’ve been overwhelmed, especially at the height of the Bush years, by cookie-cutter hate email — sometimes scores or hundreds of them at a time — of a sort that would make your skin crawl. I’ve been threatened. I’ve repeatedly received “critical” (and abusive) emails, blasts of red hot anger that would startle anyone, because the Internet, so my experience tells me, loosens inhibitions, wipes out taboos, and encourages a sense of anonymity that in the older world of print, letters, or face-to-face meetings would have been far less likely to take center stage. I’ve seen plenty that’s disturbed me. So you’d think, given my age, my background, and my present life, that I, too, might be in mourning for everything that’s going, going, gone, everything we’ve lost.

But I have to admit it: I have another feeling that, at a purely personal level, outweighs all of the above. In terms of journalism, of expression, of voice, of fine reporting and superb writing, of a range of news, thoughts, views, perspectives, and opinions about places, worlds, and phenomena that I wouldn’t otherwise have known about, there has never been an experimental moment like this. I’m in awe. Despite everything, despite every malign purpose to which the Internet is being put, I consider it a wonder of our age. Yes, perhaps it is the age from hell for traditional reporters (and editors) working double-time, online and off, for newspapers that are crumbling, but for readers, can there be any doubt that now, not the 1840s or the 1930s or the 1960s, is the golden age of journalism?

Think of it as the upbeat twin of NSA surveillance.


Excerpts from two articles about Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, which is one of the greatest films ever made, yet only my fourth or fifth favorite Stanley Kubrick movie, which shows you how highly I rank his work. It’s as perfect now as it was when released 50 years ago, as timeless as Patton or Duck Soup. In fact, it’s Patton *as* Duck Soup. It’s tremendously funny yet no laughing matter.

_________________________

From “Doctor’s Orders,” Bilge Ebiri’s 2009 Moving Image Source article explaining how a very serious novel became a Kubrick comedy:

After their initial drafts, Kubrick and his producing partner James B. Harris, with whom he had made The Killing, Paths of Glory, and Lolita, workshopped the script (then called The Delicate Balance of Terror) in New York. “They’d stay up late into the night cracking up over it, overcome by their impulse towards gallows humor,” says Mick Broderick, the author of Nuclear Movies and an extensive forthcoming study of Strangelove. Harris would soon leave to forge his own directorial career (his admirably tense 1965 directorial debut, The Bedford Incident, concerns a confrontation between an American destroyer and a Soviet submarine). But when Kubrick later called his former partner to tell him that he had decided to turn Delicate Balance into an actual comedy, Harris was skeptical, to say the least. “He thought, ‘The kid’s gonna destroy his career!’” says Broderick.

The absurd hilarity of the situation had never quite stopped haunting the director, as he and [Peter] George continued to work on the film. It wasn’t so much the premise of the Red Alert story as everything Kubrick was learning about the thinking behind thermonuclear strategy. The director, even then notorious for thorough research, had become friendly with a number of scientists and thinkers on the subject, some with George’s help, including the notorious RAND strategist Herman Kahn, who would talk with a straight face about “megadeaths,” a word he had coined in the 1950s to describe one million deaths. As Kubrick told Joseph Heller:

Incongruity is certainly one of the sources of laughter—the incongruity of sitting in a room talking to somebody who has a big chart on the wall that says ‘tragic but distinguishable postwar environments’ and that says ‘one to ten million killed.’ …There is something so absurd and unreal about what you’re talking about that it’s almost impossible to take it seriously.•

_________________________

From “Almost Everything in Dr. Strangelove Was True,” a New Yorker blog post about the scary reality that informed the nervous laughter, by Eric Schlosser, author of Command and Control:

This month marks the fiftieth anniversary of Stanley Kubrick’s black comedy about nuclear weapons, Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. Released on January 29, 1964, the film caused a good deal of controversy. Its plot suggested that a mentally deranged American general could order a nuclear attack on the Soviet Union, without consulting the President. One reviewer described the film as “dangerous … an evil thing about an evil thing.” Another compared it to Soviet propaganda.

Although Strangelove was clearly a farce, with the comedian Peter Sellers playing three roles, it was criticized for being implausible. An expert at the Institute for Strategic Studies called the events in the film “impossible on a dozen counts.” A former Deputy Secretary of Defense dismissed the idea that someone could authorize the use of a nuclear weapon without the President’s approval: “Nothing, in fact, could be further from the truth.” When Fail-Safe—a Hollywood thriller with a similar plot, directed by Sidney Lumet—opened, later that year, it was criticized in much the same way. “The incidents in Fail-Safe are deliberate lies!” General Curtis LeMay, the Air Force chief of staff, said. “Nothing like that could happen.”

The first casualty of every war is the truth—and the Cold War was no exception to that dictum. Half a century after Kubrick’s mad general, Jack D. Ripper, launched a nuclear strike on the Soviets to defend the purity of “our precious bodily fluids” from Communist subversion, we now know that American officers did indeed have the ability to start a Third World War on their own. And despite the introduction of rigorous safeguards in the years since then, the risk of an accidental or unauthorized nuclear detonation hasn’t been completely eliminated.•


I posted about Timex’s failed foray into wearables in the mid-1990s, and here’s a commercial for the Timex Sinclair 1000–touted as “the first computer under $100”–during the company’s equally unsuccessful 1982 attempt to corner the PC market.

A passage from “Can We Equate Computing with Art?” novelist Vikram Chandra’s very good Financial Times consideration of the aesthetics of 0s and 1s:

“Most of the artists I know – painters, film-makers, actors, poets – seem to regard programming as an esoteric scientific discipline; they are keenly aware of its cultural mystique, envious of its potential profitability and eager to extract metaphors, imagery, and dramatic possibility from its history, but coding may as well be nuclear physics as far as relevance to their own daily practice is concerned.

Many programmers, on the other hand, regard themselves as artists. Since programmers create complex objects, and care not just about function but also about beauty, they are just like painters and sculptors. The best-known assertion of this notion is the 2003 essay ‘Hackers and Painters’ by programmer and venture capitalist Paul Graham. ‘Of all the different types of people I’ve known, hackers and painters are among the most alike,’ writes Graham. ‘What hackers and painters have in common is that they’re both makers. Along with composers, architects, and writers, what hackers and painters are trying to do is make good things.’

According to Graham, the iterative processes of programming – write, debug (discover and remove bugs, which are coding errors), rewrite, experiment, debug, rewrite – exactly duplicate the methods of artists. ‘The way to create something beautiful is often to make subtle tweaks to something that already exists, or to combine existing ideas in a slightly new way,’ he writes. ‘You should figure out programs as you’re writing them, just as writers and painters and architects do.’

Attention to detail further marks good hackers with artist-like passion, he argues. ‘All those unseen details [in a Leonardo da Vinci painting] combine to produce something that’s just stunning, like a thousand barely audible voices all singing in tune. Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one is ever supposed to see are beautiful too.’

This desire to equate art and programming has a lengthy pedigree.”

