Science/Tech


If you need more proof that software-driven cars will be safer than those with humans behind the wheel, consider that Google's self-driving vehicles have yet to get a ticket. Not one. From Alexis C. Madrigal at the Atlantic:

“On a drive in a convoy of Google’s autonomous vehicles last week, a difficult driving situation arose.

As our platoon approached a major intersection, two Google cars ahead of us crept forward into the intersection, preparing to make left turns. The oncoming traffic took nearly the whole green light to clear, so the first car made the left as the green turned to yellow. The second, however, was caught in that tough spot where the car is in the intersection but the light is turning, and the driver can either try to back up out of the intersection or gun it and make the left, even though he or she or it knows the light is going to turn red before the maneuver is complete. The self-driving car gunned it, which was the correct decision, I think. But it was also the kind of decision that was on the borderline of legality.

It got me wondering: had these cars ever gotten a ticket driving around Mountain View, where they’ve logged 10,000 miles?

‘We have not cited any Google self-driving cars,’ Sergeant Saul Jaeger, the press information officer at the Mountain View Police Department, told me. They hadn’t pulled one over and let the vehicle go, either, to Jaeger’s knowledge.

I wondered if that was because of a pre-existing agreement between Google and the department, but Jaeger said, ‘There is no agreement in place between Google and the PD.’

Google confirmed that none of its cars had ever been ticketed in Mountain View or elsewhere.”


Following up the earlier post about computers and consciousness, here’s an excerpt from “Yes, Computers Can Think,” a 1997 New York Times article by Drew McDermott written in the wake of the machines conquering Kasparov:

“When people say that human grandmasters do not examine 200 million move sequences per second, as the computer does, I ask them, ‘How do you know?’ The answer is usually that human grandmasters are not aware of considering so many options. But humans are unaware of almost everything that goes on in our minds.

I tend to agree that grandmasters search in a different way than Deep Blue does, but whatever method they use, if done by a computer, would seem equally ‘blind.’

For example, some scientists believe that the masters’ skill comes from an ability to compare their current position against, say, 10,000 positions they’ve studied. We call their behavior insightful because they are unaware of the details; the right position among the 10,000 ‘just occurs to them.’ If a computer did the same thing, the trick would be revealed; we could examine its data to see how laboriously it checks the 10,000 positions. Still, if the unconscious version yields intelligent results, and the explicit algorithmic version yields essentially the same results, are not both methods intelligent?

So what shall we say about Deep Blue? How about: It’s a ‘little bit’ intelligent. Yes, its computations differ in detail from a human grandmaster’s. But then, human grandmasters differ from one another in many ways.

A log of the machine’s computations is perfectly intelligible to chess masters; they speak the same language, as it were. That’s why the I.B.M. team refused to give the game logs to Mr. Kasparov during the match: It would have been the same as bugging the hotel room where the computer ‘discussed’ strategy with its seconds.

Saying that Deep Blue doesn’t really think is like saying an airplane doesn’t really fly because it doesn’t flap its wings.

Of course, this advance in artificial intelligence does not indicate that any Grand Unified Theory of Thought is on the horizon. As the field has matured, it has focused more and more on incremental progress, while worrying less and less about some magic solution to all the problems of intelligence. There are fascinating questions about why we are unaware of so much that goes on in our brains, and why our awareness is the way it is. But we can answer a lot of questions about thinking before we need to answer questions about awareness.

It is entirely possible that computers will come to seem alive before they come to seem intelligent. The kind of computing power that fuels Deep Blue will also lead to improved sensors, wheels and grippers that will allow machines to react in a more sophisticated way to things in their environment, including us. They won’t seem intelligent, but we may think of them as a weird kind of animal — one that can play a very good game of chess.”
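McDermott's lookup hypothesis is concrete enough to sketch. The toy Python below is purely illustrative (the position encoding, the tiny "studied positions" table and the overlap score are all invented for the example; this is not how Deep Blue or any real chess engine works): it simply makes explicit the idea that "the right position just occurring to you" can be rendered as a laborious comparison against stored positions.

```python
# Toy illustration of McDermott's point: a grandmaster's "insight" recast as
# an explicit lookup over previously studied positions. Everything here is
# invented for the example; real chess programs work very differently.

from typing import Dict, Tuple

# A "position" is just a frozenset of (square, piece) pairs in this toy.
Position = frozenset

# A small library of studied positions, each paired with the move that was
# judged best when it was studied. A real library would hold thousands.
STUDIED: Dict[Position, str] = {
    frozenset({("e4", "P"), ("e5", "p"), ("f3", "N")}): "Ng5",
    frozenset({("d4", "P"), ("d5", "p"), ("c4", "P")}): "cxd5",
    frozenset({("e4", "P"), ("c5", "p"), ("f3", "N")}): "d4",
}

def similarity(a: Position, b: Position) -> float:
    """Crude overlap measure: shared (square, piece) pairs over the union."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def recall_move(current: Position) -> Tuple[str, float]:
    """Return the move attached to the most similar studied position.

    This is the 'unconscious' recall made explicit: compare the current
    position against every stored one and take the best match.
    """
    best = max(STUDIED, key=lambda stored: similarity(current, stored))
    return STUDIED[best], similarity(current, best)

if __name__ == "__main__":
    position = frozenset({("e4", "P"), ("e5", "p"), ("g1", "N")})
    move, score = recall_move(position)
    print(f"Suggested move: {move} (match score {score:.2f})")
```

Whether that comparison runs consciously or not, the output is the same, which is McDermott's point about the two methods being equally "intelligent."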


From the June 1, 1932 Brooklyn Daily Eagle:

“Chicago–Young moderns aren’t so casual about their marriage vows as they’ve been painted.

Two of them, Miss Harriet Berger, 21, and Vaclaw Hund, 24, were married yesterday by Judge Charles B. Adams while they were strapped to Northwestern University’s ‘lie detector,’ and this is what happened:

When the judge asked Hund if he would ‘take this woman,’ the bride’s heart almost stopped, and it skipped a beat when the judge said, ‘I pronounce you man and wife.’

The bridegroom’s blood pressure sank steadily throughout the ceremony, and the bride’s rose–all of which, the judge said, proved that they really love each other.”

 


This year is the 25th anniversary of a great, if largely unheeded, speech by Isaac Asimov about how the human race can survive on Earth in the long run and the role space exploration would need to play in that endeavor.

Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:

CAN MACHINES BE CREATIVE?

We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since, most people think, anything machines can do can be explained.

To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, then no one ever could imagine how creativity works.

We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?

So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it’s sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!

Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away, its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to better learn, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.

So why is genius so rare, if each has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive, where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.”


In a Guardian piece by Andrew Pulver about David Cronenberg, who’s at Cannes for the screening of his latest film, Maps to the Stars, the director asserts that the automobile deserves a place alongside the Pill in green-lighting the sexual revolution. And now that tablets and smartphones are more important than cars, what does that say about us? From Pulver’s article:

“Cronenberg was also quizzed on his fondness for sex scenes set in cars, with one journalist pointing out it went all the way back to his JG Ballard adaptation Crash. Cronenberg replied, not entirely seriously: ‘Crash was suppressed by Ted Turner [CEO of TBS, parent company of Crash’s US distributor Fine Line] because he said it would encourage them to have sex in cars. I said: there’s an entire generation of Americans who have been spawned in the back seats of 1954 Fords. I doubt I invented sex in cars. You have to remember, part of the sexual revolution came about because of the automobile, because young people could get away from their parents, and that was freedom. I don’t think I’m breaking any new territory.

‘I mean… why wouldn’t you? There are such great cars around.'”

_________________________

In 1979, David Cronenberg discusses casting porn star Marilyn Chambers:


Christopher Mims, now at the Wall Street Journal, has a new column that explains the basics of so-called “fog computing,” likely the next step beyond cloud computing as the Internet becomes the Internet of Things. An excerpt:

“Modern 3G and 4G cellular networks simply aren’t fast enough to transmit data from devices to the cloud at the pace it is generated, and as every mundane object at home and at work gets in on this game, it’s only going to get worse.

Luckily there’s an obvious solution: Stop focusing on the cloud, and start figuring out how to store and process the torrent of data being generated by the Internet of Things (also known as the industrial Internet) on the things themselves, or on devices that sit between our things and the Internet.

Marketers at Cisco Systems Inc. have already come up with a name for this phenomenon: fog computing.

I like the term. Yes, it makes you want to do a Liz Lemon eye roll. But like cloud computing before it—also a marketing term for a phenomenon that was already under way—it’s a good visual metaphor for what’s going on.

Whereas the cloud is ‘up there’ in the sky somewhere, distant and remote and deliberately abstracted, the ‘fog’ is close to the ground, right where things are getting done. It consists not of powerful servers, but weaker and more dispersed computers of the sort that are making their way into appliances, factories, cars, street lights and every other piece of our material culture.”
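The pattern Mims describes is simple to sketch: do a first pass of storage and processing on or near the device, and send only what matters upstream. The Python below is a hypothetical illustration (the sensor read, the thresholds and the send_to_cloud stub are all invented for the example; none of this is a Cisco or vendor API): an edge node summarizes raw readings locally and forwards only periodic digests and urgent anomalies instead of streaming every sample over a cellular link.

```python
# Minimal sketch of the "fog" pattern described above: an edge node keeps raw
# sensor readings local and forwards only compact summaries (and anomalies)
# upstream. The sensor, thresholds, and upload stub are all hypothetical.

import random
import statistics
from typing import List

WINDOW = 60             # readings per summary (e.g., one minute at 1 Hz)
ALERT_THRESHOLD = 90.0  # forward a reading immediately if it exceeds this

def read_sensor() -> float:
    """Stand-in for a real device read; returns a fake temperature."""
    return random.gauss(70.0, 5.0)

def send_to_cloud(payload: dict) -> None:
    """Stand-in for an upstream HTTP/MQTT call to a cloud endpoint."""
    print("uploading:", payload)

def run_edge_node(cycles: int = 180) -> None:
    buffer: List[float] = []
    for _ in range(cycles):
        value = read_sensor()
        if value > ALERT_THRESHOLD:
            send_to_cloud({"type": "alert", "value": value})  # urgent, send now
        buffer.append(value)
        if len(buffer) == WINDOW:
            # Only a small digest leaves the device, not 60 raw samples.
            send_to_cloud({
                "type": "summary",
                "count": len(buffer),
                "mean": round(statistics.mean(buffer), 2),
                "max": round(max(buffer), 2),
            })
            buffer.clear()

if __name__ == "__main__":
    run_edge_node()
```

The design choice is exactly the one in the excerpt: the constrained resource is the network between the "things" and the cloud, so the aggregation moves down to the weaker, more dispersed machines that sit next to the data.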


In a Guardian Q&A tied to his new book, Joshua Ferris tells interviewer Tim Adams about being a novelist in the Internet Age:

Question:

The Internet in the book is often seen as a conversely destructive force. Is that your experience?

Joshua Ferris:

I think it’s a force of anxiety. Anyone who wants to be completely sure of their information – personal, political, historical – is faced with a huge number of sources willing to provide it. It can be a very dubious place. A hall of mirrors with diminishing returns.

Question:

Have you made a conscious effort to block out some of that information when you are writing?

Joshua Ferris:

I don’t belong to social media at all. Not for any principled reason, but because I don’t want to spend the time on it. I do think books are harder to read when you move away from the quick cuts of the internet. You have to reach back for your attention span. If you’ve spent two hours looking at 6,000 very different web pages it’s difficult to concentrate on a single story that requires sustained attention. I don’t think books are going to go away. I think maybe they’re going to become a more fine taste.

Question:

Do you think the pervasiveness of that screen culture also makes novels harder to write?

Joshua Ferris:

Not if the novelist is a novelist. The determined novelist is just interested in the fact that she must write novels.”


There’s probably something a little wrong with someone who would be a whistleblower, and a free society is usually richer for it. The question to ask about Edward Snowden and Glenn Greenwald is not whether they’re perfect people or whether they’re heroes, but whether America is better off overall for their actions. From Geoff Dyer’s well-written Financial Times review of Greenwald’s new book:

“Ever since then Greenwald, who left the Guardian last October, has had a long line of reporters queueing outside his house in Rio de Janeiro to hear the story (I am one of the guilty parties). Yet he has somehow still managed to make the tale seem fresh. The first third of his book is a genuinely gripping account of his encounters with Snowden. Jason Bourne meets The Social Network: the film rights for this one will sell themselves.

Snowden instructed Greenwald to find the meeting room in his Kowloon hotel with a plastic alligator on the floor. He entered carrying a Rubik’s Cube (‘unsolved’) and responded to a prepared question about the hotel food. Back in Snowden’s room and with their mobile phones in the fridge to prevent prying ears, the former lawyer Greenwald questioned him for five hours. Snowden confessed that some of his political ideas had been gleaned from video games, which provided the lesson ‘that just one person, even the most powerless, can confront great injustice.’

The book adds little fresh material on the NSA but, by putting all the reporting in one place, Greenwald gives an effective sense of the sheer scope of information that is being hoovered up. In one particularly clumsy slide, the NSA brags that its goals include: ‘Sniff it All,’ ‘Know it All,’ ‘Exploit it All,’ ‘Collect it All.’

In selecting Greenwald as his main media interlocutor, Snowden chose well. Greenwald has pursued the story with passion, ensuring that the documents have achieved the widest possible impact. He has also been a tireless defender of Snowden, even after his recent disastrous appearance on a Vladimir Putin call-in show.

But that single-mindedness, mixed with self-regard, is also Greenwald’s great weakness. He lives in a world of black and white, where all government officials are venal and independent journalists are heroes. ‘There are, broadly speaking, two choices: obedience to institutional authority or radical dissent from it,’ he writes.”


In a Boston Globe essay, economist Edward Glaeser argues that happiness should be a goal but not the goal. An excerpt:

“What we know now, however, is that the mistake lies in thinking happiness is the be-all and end-all to judging how effective a municipality is operating for its citizens. In a sense, putting happiness above all else is just as foolish as economists who think money is the only objective or doctors who can’t imagine anything that trumps health. Happiness, money, and health are all good goals, but they are rarely the only things people are striving for.

Because, if quality of life is so important, why do people choose to keep living — and moving — to ‘unhappy’ cities? If people in Rust Belt cities like Milwaukee or Detroit are so dissatisfied, why don’t they just move to a new place where they’d be more happy? Because most humans are willing to sacrifice happiness and satisfaction if the price is right — and we’re probably better off for it.

The debate over whether happiness should be life’s ultimate currency is ancient. Greek philosopher Epicurus opined ‘that pleasure is the end and aim,’ while his contemporary Epictetus countered, ‘What is our nature? To be free, noble, self-respecting . . . We must subordinate pleasure to these principles.’ More recently, Jeremy Bentham, the 18th century British thinker, popularized the notion that humans should maximize pleasure and minimize pain. Kant argued that our goal should not be happiness, which does not automatically follow moral behavior, but rather act so as to be worthy of happiness.

Most people, however, are less theoretical and more practical in terms of what they’re willing to trade off for happiness. In fact, it is better to think of happiness as one utility among many, rather than a supreme desideratum.”


Everything is quantified and measured and analyzed now–or soon will be–but that wasn’t always the case. The recently deceased economist Gary Becker believed his discipline could be brought to bear on all aspects of life. The opening of a defense of his mindset from fellow economist Tim Harford at the Financial Times:

“Perhaps it was inevitable that there would be something of the knee-jerk about the reaction to the death of the Nobel Prize-winning economist Gary Becker. Published obituaries acknowledged his originality, productivity and influence, of course. But there are many who lament Becker’s economic imperialism – the study of apparently non-economic aspects of life. It is now commonplace for those in the field to consider anything from smoking to parenting to the impact of the contraceptive pill. That is Gary Becker’s influence at work.

Becker makes a convenient bogeyman. It did not help that he could be awkward in discussing emotional issues – despite his influence inside the economics profession, he was not a slick salesman outside it. So it is easy to caricature a man who writes economic models for discrimination, for suicide and for the demand for children. How blinkered such a man must be, the critics say; how intellectually crude and emotionally stunted.

The criticism is unfair. Gary Becker’s economic imperialism was an exercise in soft power. Becker’s view of the world was not that economics was the last word on all human activity. It was that no matter what the subject under consideration, economics would always have something insightful to add. And for many years it fell to Becker to find that insight.”


The moon landing was supposed to be the beginning of the Space Age, but the giant leap turned out to be a small step. A mission to Mars, let alone a full-fledged settlement in space, was shelved. But billionaire entrepreneurs weaned on sci-fi are taking aim again at the stratosphere. The opening of Jessa Gamble’s Guardian article “How Do You Build a City in Space?“:

“Science fiction has delivered on many of its promises. Star Trek videophones have become Skype, the Jetsons’ food-on-demand is materialising through 3-D printing, and we have done Jules Verne one better and explored mid-ocean trenches at crushing depths. But the central promise of golden age sci-fi has not yet been kept. Humans have not colonised space.

For a brief moment in the 1970s, the grandeur of the night sky felt interactive. It seemed only decades away that more humans would live off the Earth than on it; in fact, the Space Shuttle was so named because it was intended to make 50 round trips per year. There were active plans for expanding civilisation into space, and any number of serious designs for building entire cities on the moon, Mars and beyond.

The space age proved to be a false dawn, of course. After a sobering interlude, children who had sat rapt at the sight of the moon landings grew up, and accepted that terraforming space – once briefly assumed to be easy – was actually really, really hard. Intense cold war motivation flagged, and the Challenger and Columbia disasters taught us humility. Nasa budgets sagged from 5% of the US federal budget to less than 0.5%. People even began to doubt that we’d ever set foot on the moon: in a 2006 poll, more than one in four Americans between 18 and 25 said they suspected the moon landing was a hoax.

But now a countercurrent has surfaced. The children of Apollo, educated and entrepreneurial, are making real headway on some of the biggest difficulties. Large-scale settlement, as opposed to drab old scientific exploration, is back on the menu.”


From Dana Hull at the San Jose Mercury News, more information about Elon Musk’s Gigafactory, which he believes can cut battery costs by 30%, a key to making Teslas more affordable:

“The planned $5 billion gigafactory is key to Tesla’s strategy of manufacturing a more affordable, mass-market electric car. Tesla has not finalized a location but is looking at several states, including Arizona, Nevada, New Mexico and Texas. California is also being considered but is regarded as a long shot because of the lengthy time required for the permitting process.

In an onstage interview with venture capitalist Ira Ehrenpreis, an early investor in Tesla who sits on its board of directors, Musk said that vertically integrating the battery production makes economic sense.

‘The gigafactory will take that to another level,’ he said. ‘You’ll have stuff coming directly from the mine, getting on a rail car and getting delivered to the factory, with finished battery packs coming out the other side. The cost-compression potential is quite high if you are willing to go all the way down the supply chain.’

But the gigafactory will not just supply batteries for Tesla’s electric cars: Stationary battery packs will be provided to SolarCity, the San Mateo solar-installation company run by Musk’s cousins, and other renewable energy companies in the solar and wind industries.”


In this 1977 Canadian talk show, Fran Lebowitz, selling her book Metropolitan Life, plays on a familiar theme: her complicated relationship with children. She was concerned that digital watches, calculators and other new technologies entitled kids (and adults, too) to a sense of power they should not have. She must be pleased with smartphones today.


William S. Burroughs, in 1977, offering questionable advice regarding drugs, though, in all fairness, he had conducted a great deal of field research. Heroin use certainly didn’t diminish his powers with Junky. The prose is flawless.


With Russia announcing dubious plans for colonizing the moon by 2030, Noah Davis of Pacific Standard interviewed physicist Dr. Nadine G. Barlow about the pros, cons and costs of such an endeavor. An exchange about the dark side of the proposed mission:

Question:

Are there unintended consequences we might not be considering if we colonize the moon?

Dr. Nadine G. Barlow:

There are several concerns about human activities on the moon. The lunar day is about 29 Earth days long, which means most places on the lunar surface receive about two weeks of daylight followed by two weeks of night. This places strong constraints on possible energy sources (power by solar energy would not work without development of some very effective energy storage technologies) and will affect human circadian rhythms to a greater extent than we see even with shift workers here on Earth. The Apollo missions to the moon between 1969 and 1972 showed that the lunar dust is very abrasive, sticks to everything, and may be toxic to humans—machinery is likely to need constant maintenance and techniques will need to be developed to keep the astronauts from bringing dust into the habitats on their spacesuits after surface activities. Growing crops on the moon will present its own challenges between the long day/night cycles and the need to add nutrients/bacteria to the lunar soil. Surface activities will kick up dust from the surface, enhancing the thin veneer of particles that make up the lunar atmosphere and transporting the dust over larger distances to cause even more damage to machinery. The moon’s atmosphere is so thin that it provides no protection from micrometeorite bombardment or radiation—both of these issues will need to be addressed in habitat design and maintenance. Finally we know that astronauts living for extended periods of time in the microgravity environment of orbiting space stations often suffer physiological issues, particularly upon return to Earth. We don’t know if colonists living for extended periods of time in the 1/6 gravity of the moon will suffer similar physiological problems. And of course there is always the question of how humans will react psychologically to life in a confined habitat in such an alien environment.”


In the new Aeon essay, “The Intimacy of Crowds,” Michael Bond argues that riotous mobs are often actually quite rational and goal-oriented, despite the seeming disorder of the melee. The opening:

“There’s nothing like a riot to bring out the amateur psychologist in all of us. Consider what happened in August 2011, after police killed Mark Duggan, a 29-year-old man from the London suburb of Tottenham. Thousands took to the streets of London and other English towns in the UK’s worst outbreak of civil unrest in a generation. When police finally restored order after some six days of violence and vandalism, everyone from the Prime Minister David Cameron to newspaper columnists of every political persuasion denounced the mindless madness, incredulous that a single killing, horrific as it was, could spark the conflagration at hand. The most popular theory was that rioters had surrendered their self-awareness and rationality to the mentality of the crowd.

This has been the overriding view of crowd behaviour since the French Revolution and the storming of the Bastille. The 19th-century French criminologist Gabriel Tarde likened even the most civilised of crowds to ‘a monstrous worm whose sensibility is diffuse and who still acts with disordered movements according to the dictates of its head’. Tarde’s contemporary, the social psychologist Gustave Le Bon, tried to explain crowd behaviour as a paralysis of the brain; hypnotised by the group, the individual becomes the slave of unconscious impulses. ‘He is no longer himself, but has become an automaton who has ceased to be guided by his will,’ he wrote in 1895. ‘Isolated, he may be a cultivated individual; in a crowd, he is a barbarian… a grain of sand amid other grains of sand, which the wind stirs up at will.’

This is still the prevailing view of mob behaviour, but it turns out to be wrong.”


Sylvia Anderson, legendary British TV producer and brilliant costume designer, explaining in 1970 why her moon suits, created for the program UFO, would be a suitable style for women of the future.


"It is certainly a robot."

“It is certainly a military robot.”

We’ve long looked for ways to automate killing, even in those days when computers were more often referred to as “electronic brains” or “mechanical minds.” An early attempt at push-button warfare–a “robot gun”–developed by the U.S. between world wars was the subject of an article in the October 5, 1928 Brooklyn Daily Eagle. An excerpt:

“Aberdeen Proving Grounds, Md.–Greatest among the marvels of a mechanized army demonstrated here yesterday for the Army Ordnance Administration is a ‘mechanical mind’ produced in the Sperry plants in Brooklyn.

Following a day which was replete with spectacular demonstrations of new engines of war, the ‘mechanical mind,’ which is technically known as a ‘data computer,’ located an ‘enemy’ airplane in the black night skies, spotted it almost instantly with the beam of a powerful searchlight and kept a battery of four three-inch guns trained on the airplane, and then with the press of a button the whole battery of anti-aircraft artillery opened fire and blew the trailing target to bits.

Not a Hand Touched It

Not a hand touched the searchlight which spotted the airplane and not a hand was touched to the three-inch guns in the anti-aircraft gun battery to sight them. The ‘mechanical mind’ did all this.

Ordnance experts declared this device the outstanding feature of the show. ‘It is certainly a military robot,’ said one of them.

The senses of this mechanical mind are embodied in a very sensitive syntonic oscillator, which had direction-determining and range-finding powers. What this syntonic oscillator detects is greatly amplified after the manner of radio sets and its findings, which are expressed in electrical signals, are fed to a ‘comparator.’ This part of the apparatus is a mathematical marvel. It takes the reading given it for direction and distance from the oscillator without any effect on the correctness of the aim given.”

Nintendo, a 19th-century Japanese playing-card company that became an American video-game sensation nearly a hundred years after its founding, is one of the subjects of Blake J. Harris’ new book, Console Wars, which Grantland has excerpted. A piece about how, in the 1980s, Nintendo presciently identified fans’ ravenous appetite not just for a piece of pop culture but for a community built around it, a phenomenon that later exploded on the Internet:

“[Gail] Tilden was at home, nursing her six-week-old son, when [Minoru] Arakawa called and asked her to come into the office the next day for an important meeting. So the following day, after dropping off her son with some trusted coworkers, she went into a meeting with Arakawa. The appetite for Nintendo tips, hints, and supplemental information was insatiable, so Arakawa decided that a full-length magazine would be a better way to deliver exactly what his players wanted.

Tilden was put in charge of bringing this idea to life. She didn’t know much about creating, launching, and distributing a magazine, but, as with everything that had come before, she would figure it out. What she was unlikely to figure out, however, was how to become an inside-and-out expert on Nintendo’s games. She played, yes, but she couldn’t close her eyes and tell you which bush to burn in The Legend of Zelda or King Hippo’s fatal flaw in Mike Tyson’s Punch-Out!! For that kind of intel, there was no one better than Nintendo’s resident expert gamer, Howard Phillips, an always-smiling, freckle-faced videogame prodigy.

Technically, Phillips was NOA’s warehouse manager, but along the way he revealed a preternatural talent for playing, testing, and evaluating games. After earning Arakawa’s trust as a tastemaker, he would scour the arcade scene and write detailed assessments that would go to Japan. Sometimes his advice was implemented, sometimes it was ignored, but in the best-case scenarios he would find something hot, such as the 1982 hit Joust, alert Japan’s R&D to it, and watch it result in a similar Nintendo title — in this case a 1983 Joust-like game called Mario Bros. As Nintendo grew, Phillips’s ill-defined role continued to expand, though he continued to remain the warehouse manager. That all changed, however, when he was selected to be the lieutenant for Tilden’s new endeavor.

In July 1988, Nintendo of America shipped out the first issue of Nintendo Power to the 3.4 million members of the Nintendo Fun Club. Over 30 percent of the recipients immediately bought an annual subscription, marking the fastest that a magazine had ever reached one million paid subscribers.”

____________________________

Nintendo Arm Wrestling, 1985:


You have to wonder what the brand-new New York Times Magazine editor Jake Silverstein, who was poached from Texas Monthly, must think of Jill Abramson’s abrupt ouster. He was personally courted for the job by the erstwhile Executive Editor, and the two meshed on a vision for the future of the glossy publication at a time when some believe the periodical-within-a-periodical is redundant with what the legendary paper has become in the paper-less age. He moved his family thousands of miles to work for the institution and not just Abramson, but it helps to have an ally at the top of the masthead, as Hugo Lindgren, his predecessor, learned when he was removed by Abramson after being tapped by Bill Keller. Because of his high level of talent and because the company’s new lead editor, Dean Baquet, was involved in his hiring, Silverstein will likely be fine, but it goes to show you how crazy the business has become, even at the top, in this worried age of technological disruption. If we were living in an era when newspapers were flush and the Times was profitable, it’s hard to imagine this change would have been made. But all bets are off now. The pressure is immense and the patience short. Even formerly plum jobs are pretty much the pits today, just like the rest of them.

__________________________

From Ken Auletta at the New Yorker blog:

“As with any such upheaval, there’s a history behind it. Several weeks ago, I’m told, Abramson discovered that her pay and her pension benefits as both executive editor and, before that, as managing editor were considerably less than the pay and pension benefits of Bill Keller, the male editor whom she replaced in both jobs. ‘She confronted the top brass,’ one close associate said, and this may have fed into the management’s narrative that she was ‘pushy,’ a characterization that, for many, has an inescapably gendered aspect. [Arthur] Sulzberger is known to believe that the Times, as a financially beleaguered newspaper, needed to retreat on some of its generous pay and pension benefits; Abramson had also been at the Times for far fewer years than Keller, having spent much of her career at the Wall Street Journal, accounting for some of the pension disparity. (I was also told by another friend of hers that the pay gap with Keller has since been closed.) But, to women at an institution that was once sued by its female employees for discriminatory practices, the question brings up ugly memories. Whether Abramson was right or wrong, both sides were left unhappy. A third associate told me, ‘She found out that a former deputy managing editor’—a man—’made more money than she did’ while she was managing editor. ‘She had a lawyer make polite inquiries about the pay and pension disparities, which set them off.’

Sulzberger’s frustration with Abramson was growing. She had already clashed with the company’s C.E.O., Mark Thompson, over native advertising and the perceived intrusion of the business side into the newsroom. Publicly, Thompson and Abramson denied that there was any tension between them, as Sulzberger today declared that there was no church-state—that is, business-editorial—conflict at the Times. A politician who made such implausible claims might merit a front-page story in the Times. The two men and Abramson clearly did not get along.”

__________________________

From David Carr and Ravi Somaiya at the Times:

“The New York Times dismissed Jill Abramson as executive editor on Wednesday, replacing her with Dean Baquet, the managing editor, in an abrupt change of leadership.

Arthur O. Sulzberger Jr., the publisher of the paper and the chairman of The New York Times Company, told a stunned newsroom that had been quickly assembled that he had made the decision because of ‘an issue with management in the newsroom.’

Ms. Abramson, 60, had been in the job only since September 2011. But people in the company briefed on the situation described serious tension in her relationship with Mr. Sulzberger, who had been hearing concerns from employees that she was polarizing and mercurial. They had disagreements even before she was appointed executive editor, and she had also had clashes with Mr. Baquet.

In recent weeks, people briefed on the situation said, Mr. Baquet had become angered over a decision by Ms. Abramson to try to hire an editor from The Guardian, Janine Gibson, and install her alongside him in a co-managing editor position without consulting him. It escalated the conflict between them and rose to the attention of Mr. Sulzberger.”


Viewtron, an early online service from AT&T and Knight-Ridder, opened its virtual doors in South Florida in 1983, offering email, banking, shopping, news, weather and updated airline schedules. Despite quickly reaching 15 U.S. markets, Viewtron folded in 1986, victim of being ahead of the wave before people had learned how to surf.

It’s not likely that legal issues regarding autonomous cars will be as much of a hurdle as some think, but they will be something of a story. In the New York Times article, “When Driverless Cars Break the Law,” Claire Cain Miller breaks down the potential future of civil and criminal culpability:

“In cases of parking or traffic tickets, the owner of the car would most likely be held responsible for paying the ticket, even if the car and not the owner broke the law.

In the case of a crash that injures or kills someone, many parties would be likely to sue one another, but ultimately the car’s manufacturer, like Google or BMW, would probably be held responsible, at least for civil penalties.

Product liability law, which holds manufacturers responsible for faulty products, tends to adapt well to new technologies, John Villasenor, a fellow at the Brookings Institution and a professor at U.C.L.A., wrote in a paper last month proposing guiding principles for driverless car legislation.

A manufacturer’s responsibility for problems discovered after a product is sold — like a faulty software update for a self-driving car — is less clear, Mr. Villasenor wrote. But there is legal precedent, particularly with cars, as anyone following the recent spate of recalls knows.

The cars could make reconstructing accidents and assigning blame in lawsuits more clear-cut because the car records video and other data about the drive, said Sebastian Thrun, an inventor of driverless cars.

‘I often joke that the big losers are going to be the trial lawyers,’ he said.

Insurance companies would also benefit from this data, and might even reward customers for using driverless cars, Mr. Villasenor wrote. Ryan Calo, who studies robotics law at the University of Washington School of Law, predicted a renaissance in no-fault car insurance, under which an insurer covers damages to its customer regardless of who is at fault.

Criminal penalties are a different story, for the simple reason that robots cannot be charged with a crime.”


Extrapolating from the Wisdom of Crowds theory, new research suggests that small crowds might be wiser than large ones. Perhaps. But what if it’s a tight-knit community of morons? Would the thinking be good then? What if it’s a politicized group that makes decisions that have immediate benefits for its own members without regard to others or to long-term ramifications? What if we’re talking about a doomsday cult? From Drake Bennett at Businessweek:

“The wisdom of crowds is one of those perfectly of-our-moment ideas. The phrase comes from New Yorker writer James Surowiecki, whose book of that title was published almost a decade ago. Its thesis is nicely summed up in its opening, which describes the 19th-century English scientist Francis Galton’s realization, while attending a county fair, that in a competition to guess the weight of an ox the average of all of the guesses people had submitted (787 in all) was almost exactly right: 1,197 pounds vs. the actual weight of 1,198 pounds, a degree of accuracy that no individual could attain on his own. As individuals we may be ignorant and short-sighted, but together we’re wise.

The implication is that the bigger the crowd, the greater the accuracy. It’s like running an experiment: All else being equal, the larger the sample size, the more trustworthy the result. The idea has a particular resonance at a time when online businesses from Amazon.com to Yelp rely on aggregated user reviews, and social networks such as Facebook sell ads that rely in part on showing you how many of your friends ‘like’ something.

A new paper by the Princeton evolutionary biologist Iain Couzin and his student Albert Kao, however, suggests that bigger isn’t necessarily better. In fact, small crowds may actually be the smartest. ‘We do not find the classic view of the wisdom of crowds in most environments,’ says Couzin of their results. ‘Instead, what we find is that there’s a small optimal group size of eight to 12 individuals that tends to optimize decisions.’

The research started from the fact that, in nature—where, unlike at county fairs, accuracy has life-or-death consequences—many animals live in relatively small groups. Why, Couzin wondered, would so many species fail to take advantage of the informational benefits of the crowd?”
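Galton's arithmetic is easy to reproduce in simulation. The sketch below is illustrative only (the Gaussian noise model and its spread are assumptions, and it makes no attempt to model the correlated-information setting behind Couzin and Kao's result): it averages independent noisy guesses and shows that the crowd mean usually lands closer to the true weight than a lone guesser, while the benefit of adding ever more guessers tapers off.

```python
# Toy Galton-style simulation: independent noisy guesses of an ox's weight,
# averaged over crowds of different sizes. The numbers and noise model are
# invented for illustration; this does not reproduce Couzin and Kao's finding
# about small groups being optimal when information is correlated.

import random
import statistics

TRUE_WEIGHT = 1198     # pounds, as in Galton's county-fair example
GUESS_NOISE_SD = 75    # assumed spread of individual guesses
TRIALS = 2000          # simulated fairs per crowd size

def crowd_error(crowd_size: int) -> float:
    """Average absolute error of the crowd's mean guess over many trials."""
    errors = []
    for _ in range(TRIALS):
        guesses = [random.gauss(TRUE_WEIGHT, GUESS_NOISE_SD) for _ in range(crowd_size)]
        errors.append(abs(statistics.mean(guesses) - TRUE_WEIGHT))
    return statistics.mean(errors)

if __name__ == "__main__":
    for size in (1, 10, 100, 787):
        print(f"crowd of {size:>3}: mean error ~{crowd_error(size):.1f} lb")
```

With independent errors, the expected error of the mean shrinks roughly like one over the square root of the crowd size, which is why a group of ten already captures much of the advantage of a group of hundreds; that alone does not explain Couzin and Kao's eight-to-twelve optimum, but it does show why "bigger is always better" is not automatic.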


From the latest Edge article, “The Thing Which Has No Name,” by Ogilvy & Mather UK creative director Rory Sutherland, who argues, perhaps unsurprisingly, that marketers and advertisers understand certain things better than classical economists:

“It is true of quite a lot of progress in human life that businesses, in their blundering way, sometimes discover things before academics do. This is true of the steam engine. People developed steam engines before anybody knew how they worked. It’s true of the jet engine, true of aspirin, and so forth. People discover through trial and error—what Nassim Taleb calls ‘stochastic tinkering.’ People make progress on their own without really understanding how it works. At that point, academics come along, explain how what works works and to some extent take the credit for it. ‘Teaching birds to fly’ is the phrase that Taleb uses. As I say, I was seduced by economic thinking and the elegance of it, but at the same time having worked in advertising for 15 years, I was also fairly conscious of the fact that this isn’t really how people behave. We’d always known, in those fields of marketing, like direct marketing, where you actually got results—you sent out letters to 50,000 people and saw how many people replied—there was something going on that we didn’t understand. In other words, occasionally you might do incredibly elaborate, complex, and expensive work and have more or less no effect on the uptake of some product. Then someone would redesign the application form and slightly change the order of the questions on the application form, and the number of people replying would double. We knew there was this mysterious kind of dark force at work in human behavior.

The extraordinary thing about the marketing industry is that, by accident, it was pretty good at stumbling on some of these biases which behavioral economics later codified. There’s a wonderful/evil advertisement I mentioned in a recent piece, ‘How else could a month’s salary last a lifetime?’, which is a De Beers advertisement in about 1953 for engagement rings. Now, that’s a brilliant case of framing or price anchoring. How much should you spend on an engagement ring? We’ll suggest that whatever your month’s salary is, that’s what you should spend.

‘No one ever got fired for buying IBM’ is a wonderful example of understanding loss aversion or ‘defensive decision making.’ The advertising and marketing industry kind of acted as if it knew this stuff—but where we were disgracefully bad is that no one really attempted to sit down and codify it. When I discovered Nudge by Richard Thaler and Cass Sunstein, and the whole other corpus on Behavioral Economics…. when I started discovering there was a whole field of literature about ‘this thing for which we have no name’ …. these powerful forces which no one properly understood—that was incredibly exciting. And the effect of these changes can be an order of magnitude. This is the important thing. Really small interventions can have huge effects.”

