The odd game of Auto Polo was popularized in the summer of 1912 thanks to a marketing ploy by a Kansas Ford dealer trying to sell Model Ts. It soon became a craze in New York City, headlining at Madison Square Garden for most of December. Although the activity had initially been devised a decade earlier, it was at this moment that the game got (relatively) big.
Dangerous as all fuck, the sport pitted two teams of vehicles against each other, each car holding two players–the driver and the mallet-wielder–trying to propel a ball between two posts. It thrived in New York and Chicago for most of the 1920s but disappeared before the arrival of the Great Depression. By then, cars were largely stable enough to sell themselves, even if most Americans couldn’t afford them. The photographs above are not from the MSG contests, but an article in the December 8, 1912 New York Times recalls that particular series. An excerpt:
Not a few of the dwellers or toilers along Automobile Row have been predicting a popular future for auto polo, the game from the South and West which gave the public a number of thrills as a game and furnished food for thought for the motor enthusiast at Madison Square Garden for the week that just ended. There had been rumors of the game from time to time, and people heard that the four-wheel “ponies” on which it was played provided as many sensational moments as the four-legged ones of the horse-polo match. But no one was quite prepared for the exhibition which took place in the arena still covered, oddly enough, with the tanbark of the Horse Show.
As in regulation polo, the mallet is only a factor in the newer game. The horse, or in this case the car, is quite as important to success, if not more so. It was on the performance of the cars that the interest of automobile men naturally centered. Occasionally there was a bit of engine trouble, but for the most part the little machines, stripped to the bare frames and lacking even bonnets, stood up manfully under conditions that were grueling to say the least. Every canon of good motor car driving, from the viewpoint of the car, was broken time and again as the drivers sought to block the bounding leather ball or fed gas to their motors until the pop of explosions became an almost continuous roar in an effort to be the first “on” the elusive prize. Turns so short that they resulted in turnovers were made several times, but still the motors remained operable, to the surprise of the onlookers.
Whether the game can ever become general–even as general as pony polo–is a moot question. It involves, in the first place, a deal of expense, for, played in earnest and in the heat of the desire to win, a big repair bill would be inevitable. In other words, it would be an expensive thing to promote in a professional way.
It would be hard to devise a game in which the players took bigger chances of mishap. The factor of danger may prove either a damper or a stimulus. At any rate the game has definitely taken its place as a circus stunt crowded with thrills, and a demonstration of car ability which is a revelation even to the man who has driven his hundreds of miles at a mile-a-minute clip.•
The particular rules Clayton Christensen laid down for disruptive innovation probably don’t much matter because the world doesn’t exist within his constructs, but ginormous companies (even entire industries) being done in by much smaller ones has become an accepted part of life in the Digital Age.
In trying to explain this phenomenon, Christopher Mims of the Wall Street Journal explores the ideas in Anshu Sharma’s much-debated article about Stack Fallacy, which argues that companies moving up beyond their core businesses are likely to fail (Google+, anyone?), while those moving down into the guts of what they know have a far better chance. For an example of the latter, Mims writes of the ride-sharing sector. An excerpt:
To really understand the stack fallacy, it helps to recognize that companies move “down” the stack all the time, and it often strengthens their position. It is the same thing as vertical integration. For example, engineers of Apple’s iPhone know exactly what they want in a mobile chip, so Apple’s move to make its own chips has yielded enormous dividends in terms of how the iPhone performs. In the same way, Google’s move down its own stack—creating its own servers, designing its own data centers, etc.—allowed it to become dominant in search. Similarly, Tesla’s move to build its own batteries could—as long as it allows Tesla to differentiate its products in terms of price and/or performance—be a deciding factor in whether or not it succeeds.
Of course, the real test of a sweeping business hypothesis is whether or not it has predictive power. So here’s a prediction based on the stack fallacy: We’re more likely to see Uber succeed at making cars than to see General Motors succeed at creating a ride-sharing service like Uber. Both companies appear eager to invade each other’s territory. But, assuming that ride sharing becomes the dominant model for transportation, Uber has the advantage of knowing exactly what it needs in a vehicle for such a service.
It is also worth noting that the stack fallacy is just that: a fallacy and not a law of nature. There are ways around it. The key is figuring out how to have true, firsthand empathy for the needs of the customer for whatever product you’re trying to build next.•
In addition to yesterday’s trove of posts about the late, great Marvin Minsky, I want to refer you to a Backchannel remembrance of the AI pioneer by Steven Levy, the writer who had the good fortune to arrive on the scene at just the right moment in the personal-computer boom and the great talent to capture it. The journalist recalls Minsky’s wit and conversation almost as much as his contributions to tech. Just a long talk with the cognitive scientist was a perception-altering experience, even if his brilliance was intimidating. The opening:
There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening.
But maybe Marvin could imagine it. His imagination respected no borders.
Minsky died Sunday night, at 88. His body had been slowing down, but that mind had kept churning. He was more than a pioneering computer scientist — he was a guiding light for what intellect itself could do. He was also our Yoda. The entire computer community, which includes all of us, of course, is going to miss him.
I first met him in 1982; I had written a story for Rolling Stone about young computer hackers, and it was optioned by Jane Fonda’s production company. I traveled to Boston with Fonda’s producer, Bruce Gilbert; and Susan Lyne, who had engineered my assignment to begin with. It was my first trip to MIT; my story had been about Stanford hackers.
I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement.•
You just mentioned Enceladus so, talking of space missions, we’ll go on to your next book: William Burrows’s This New Ocean: The Story of the First Space Age published in 1998. What do you like about this book?
Space! Rockets! When it came out I was about to go on holiday and wanted a thick book to read. Burrows is a science journalist: not a historian or a scientist. I find it incredibly readable, very exciting. Although it was written by an American, it didn’t cover up the fact that Wernher von Braun, the brains behind the Apollo programme, was a Nazi Party member who was absolved for his involvement with the Hitler regime because he could build ICBMs. The book contains a good account—as good as there could be at the time, given the archives in the USSR hadn’t fully opened—of the huge advances the Russians made, which became obvious as they first flew up the Sputnik and then put the first man in space. I find it an extremely readable account of a time I grew up in—almost like a novel. I wasn’t reading it with a professional eye because I don’t know much about space history.
Burrows’s book is very dramatic—especially some of the moments like the first moon landing.
I remember it! I was 11 years old at the time. I was watching it with my uncle Brian in the middle of the night. Although I remember the excitement of seeing Neil Armstrong’s feet stepping down on to the ground, I was equally amazed by the fact that Brian was eating four Weetabix at three o’clock in the morning. We have lost a lot of the excitement about space flight. A year ago NASA trialled the Orion space capsule, which they may use to fly to Mars. The launch was in the middle of one of my lectures, so I decided to take a brief break and show the students the NASA live stream. You don’t see rocket launches on live TV anymore. The space shuttle has been scrapped and although there are rockets going to the Space Station, and private companies like SpaceX and Blue Origin developing reusable rockets, they don’t enjoy the same media attention as in the 60s and 70s. So we all sat and watched it—the students were very excited.•
Who knows for sure if Avo Uvezian’s story about having his song stolen by Frank Sinatra is true, but it’s true to him, and the narratives we believe, myth or fact, shape our lives. The octogenarian claims, with some plausibility, that he had the melody for “Strangers in the Night” pilfered in the 1960s, altering his life, eventually ushering him bitterly from the music industry into the cigar business, where he found great success.
By the 1960s, he had written his own music. One melody stood out.
“The song itself is a very simple song,” Mr. Uvezian, 89, said this month by telephone from his home in Orlando, Fla. “You take the thing and you repeat it. ‘Dah-dah-dah-dah-daaaah.’ It’s the same line repeated throughout.”
He had a friend who knew Sinatra. The friend set up a meeting and told Mr. Uvezian to bring along his music. Someone else had put lyrics to the melody, and called it “Broken Guitar.”
Sinatra gave it a listen.
“He said, ‘I love the melody, but change the lyrics,’” Mr. Uvezian recalled. The task was given to studio songwriters, and they came back with new words. Sinatra, legend has it, hated it. “I don’t want to sing this,” he said when he first saw the sheet music, according to James Kaplan’s new book, “Sinatra: The Chairman.” Nonetheless, with his last No. 1 single several years behind him, he was persuaded to record the song in 1966.
The title was new, too. “Broken Guitar” was out. The new name was “Strangers in the Night.”
In Mr. Uvezian’s telling, what should have been a monumental triumph and breakthrough turned out to be a source of great grief.•
Sadly, the legendary MIT cognitive scientist Marvin Minsky just died. From building a robotic tentacle arm nearly 50 years ago to consulting on 2001: A Space Odyssey, the AI expert–originator, really–thought as much as anyone could about smart machines during his lifetime. From Glenn Rifkin’s just-published New York Times obituary:
Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.
“Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history,” said Alan Kay, a computer scientist and a friend and colleague of Professor Minsky’s.•
The following are a collection of past posts about his life and work.
“Such A Future Cannot Be Realized Through Biology”
Reading Michael Graziano’s great essay about building a mechanical brain reminded me of Marvin Minsky’s 1994 Scientific American article, “Will Robots Inherit the Earth?” It foresees a future in which intelligence is driven by nanotechnology, not biology. Two excerpts follow.
· · · · · · · · · ·
Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains — using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives–with the option of immortality — and choose among other, unimagined capabilities as well.
In such a future, attaining wealth will not be a problem; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible–particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.
Such a future cannot be realized through biology.
· · · · · · · · · ·
Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that won’t be constrained to work at the crawling pace of “real time.” The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our “mind-children” to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.
But could such beings really exist? Many thinkers firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they’ll always lack some vital ingredient. They call this essence by various names–like sentience, consciousness, spirit, or soul. Philosophers write entire books to prove that, because of this deficiency, machines can never feel or understand the sorts of things that people do. However, every proof in each of those books is flawed by assuming, in one way or another, the thing that it purports to prove–the existence of some magical spark that has no detectable properties.
I have no patience with such arguments.•
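Minsky’s subjective-time ratios in the excerpt above hold up to a quick check. A few lines of Python (just a back-of-the-envelope sketch; the million-fold speedup is his hypothetical figure, not a measurement) confirm that half a minute at a 10^6 speedup works out to about a year of subjective time, and an hour to roughly a lifetime:

```python
# Back-of-the-envelope check of Minsky's subjective-time arithmetic.
SPEEDUP = 1_000_000                    # his hypothetical "million times faster"
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # seconds in one calendar year

def subjective_years(external_seconds: float) -> float:
    """Subjective years a sped-up mind experiences during an external interval."""
    return external_seconds * SPEEDUP / SECONDS_PER_YEAR

print(f"half a minute -> {subjective_years(30):.2f} subjective years")   # ~0.95
print(f"one hour      -> {subjective_years(3600):.0f} subjective years") # ~114
```

So “half a minute might seem as long as one of our years” is nearly exact, and an hour maps to about 114 subjective years, on the order of a human lifespan.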
“A Century Ago, There Would Have Been No Way Even To Start Thinking About Making Smart Machines”
AI pioneer Marvin Minsky at MIT in ’68 showing his robotic arm, which was strong enough to lift an adult, gentle enough to hold a child.
Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, “Am I representing the problem in an unsuitable way,” or “Am I trying to use an unsuitable method?” However, another way is to ask, “How would I make a machine to solve that kind of problem?”
A century ago, there would have been no way even to start thinking about making smart machines. Today, though, there are lots of good ideas about this. The trouble is, almost no one has thought enough about how to put all those ideas together. That’s what I think about most of the time.•
“People Have A Fuzzy Idea Of Consciousness”
Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nation-wide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness, in humans and machines.
“Do Outstanding Minds Differ From Ordinary Minds In Any Special Way?”
Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:
CAN MACHINES BE CREATIVE?
We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since anything machines can do most people think can be explained.
To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, then no one ever could imagine how creativity works.
We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.
Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?
So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it is sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.
Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!
Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away, its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to better learn, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.
So why is genius so rare, if each has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive, where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.•
“Backgammon Is Now The First Board Or Card Game With, In Effect, A Machine World Champion”
For some reason, the editors of the New Yorker never ask me for advice. I don’t know what they’re thinking. I would tell them this if they did: Publish an e-book of the greatest technology journalism in the magazine’s history. Have one of your most tech-friendly writers compose an introduction and include Lillian Ross’ 1970 piece about the first home-video recorder, Malcolm Ross’ 1931 look inside Bell Labs, Anthony Hiss’ 1977 story about the personal computer, Hiss’ 1975 article about visiting Philip K. Dick in Los Angeles, and Jeremy Bernstein’s short 1965 piece and long 1966 one about Stanley Kubrick making 2001: A Space Odyssey.
Another inclusion could be A.I., Bernstein’s 1981 profile of the great artificial-intelligence pioneer Marvin Minsky. (It’s gated, so you need a subscription to read it.) The opening:
In July of 1979, a computer program called BKG 9.8–the creation of Hans Berliner, a professor of computer science at Carnegie-Mellon University, in Pittsburgh–played the winner of the world backgammon championship in Monte Carlo. The program was run on a large computer at Carnegie-Mellon that was connected by satellite to a robot in Monte Carlo. The robot, named Gammonoid, had a visual-display backgammon board on its chest, which exhibited its moves and those of its opponent, Luigi Villa, of Italy, who by beating all his human challengers a short while before had won the right to play against the Gammonoid. The stakes were five thousand dollars, winner take all, and the computer won, seven games to one. It had been expected to lose. In a recent Scientific American article, Berliner wrote:
Not much was expected of the programmed robot…. Although the organizers had made Gammonoid the symbol of the tournament by putting a picture of it on their literature and little robot figures on the trophies, the players knew the existing microprocessors could not give them a good game. Why should the robot be any different?
This view was reinforced at the opening ceremonies in the Summer Sports Palace in Monaco. At one point the overhead lights dimmed, the orchestra began playing the theme of the film Star Wars, and a spotlight focused on an opening in the stage curtain through which Gammonoid was supposed to propel itself onto the stage. To my dismay the robot got entangled and its appearance was delayed for five minutes.
This was one of the few mistakes the robot made. Backgammon is now the first board or card game with, in effect, a machine world champion. Checkers, chess, go, and the rest will follow–and quite possibly soon. But what does that mean for us, for our sense of uniqueness and worth–especially as machines evolve whose output we can less distinguish from our own?•
“Each One Of Us Already Has Experienced What It Is Like To Be Simulated By A Computer”
We know so little about the tools we depend on every day. When I was a child, I was surprised that no one expected me to learn how to build a TV even though I watched a TV. But, no, I was just expected to process the surface of the box’s form and function, not to understand the inner workings. Throughout life, we use analogies and signs and symbols to make sense of things we constantly consume but don’t truly understand. Our processing of these basics is not unlike a computer’s process. Marvin Minsky wrote brilliantly on this topic in an Afterword of a 1984 Vernor Vinge novel. An excerpt:
Let’s return to the question about how much a simulated life inside a world inside a machine could resemble our real life “out here.” My answer, as you know by now, is that it could be very much the same––since we, ourselves, already exist as processes imprisoned in machines inside machines! Our mental worlds are already filled with wondrous, magical, symbol–signs, which add to every thing we “see” its “meaning” and “significance.” In fact, all educated people have already learned how different are our mental worlds than the ‘real worlds’ that our scientists know.
Consider the table in your dining room; your conscious mind sees it as having familiar functions, forms, and purposes. A table is “a thing to put things on.” However, our science tells us that this is only in the mind; the only thing that’s “really there” is a society of countless molecules. That table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of force-fields that keep them from pursuing independent trajectories. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound––whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air––that is, of particles whose distances are so much less constrained.
And so––let’s face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!•
“The Book Is About Ways To Read Out The Contents Of A Person’s Brain”
In 1992, AI legend Marvin Minsky believed that by the year 2023 people would be able to download the contents of their brains and achieve “immortality.” That was probably too optimistic. He also thought such technology would only be possible for people who had great wealth. That was probably too pessimistic. From an interview that Otto Laske conducted with Minsky about his sci-fi novel, The Turing Option:
I hear you are writing a science fiction novel. Is that your first such work?
Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.
At what point in time is the novel situated?
It’s set in the year 2023.
I may just be alive to experience it, then …
Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…
How wonderful …
… because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…
That’s a very American footnote…
Well, it’s also a very Darwinian concept.
Yes, of course.
There isn’t room for every possible being in this finite universe, so, we have to be selective …
And who selects, or what is the selective mechanism?
Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …
That’s worse than fighting, I think.•
“We Are On The Threshold Of An Era That Will Be Strongly Influenced, And Quite Possibly Dominated, By Intelligent Machines”
A VISITOR to our planet might be puzzled about the role of computers in our technology. On the one hand, he would read and hear all about wonderful “mechanical brains” baffling their creators with prodigious intellectual performance. And he (or it) would be warned that these machines must be restrained, lest they overwhelm us by might, persuasion, or even by the revelation of truths too terrible to be borne. On the other hand, our visitor would find the machines being denounced on all sides for their slavish obedience, unimaginative literal interpretations, and incapacity for innovation or initiative; in short, for their inhuman dullness.
Our visitor might remain puzzled if he set out to find, and judge for himself, these monsters. For he would find only a few machines (mostly general-purpose computers, programmed for the moment to behave according to some specification) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of “artificial intelligence.”•
“He Is, In A Sense, Trying To Second-Guess The Future”
I posted a brief Jeremy Bernstein New Yorker piece about Stanley Kubrick that was penned in 1965 during the elongated production of 2001: A Space Odyssey. The following year the same writer turned out a much longer profile for the same magazine about the director and his sci-fi masterpiece. Among many other interesting facts, it mentions that MIT AI legend Marvin Minsky, who’s appeared on this blog many times, was a technical consultant for the film. An excerpt from “How About a Little Game?”:
By the time the film appears, early next year, Kubrick estimates that he and [Arthur C.] Clarke will have put in an average of four hours a day, six days a week, on the writing of the script. (This works out to about twenty-four hundred hours of writing for two hours and forty minutes of film.) Even during the actual shooting of the film, Kubrick spends every free moment reworking the scenario. He has an extra office set up in a blue trailer that was once Deborah Kerr’s dressing room, and when shooting is going on, he has it wheeled onto the set, to give him a certain amount of privacy for writing. He frequently gets ideas for dialogue from his actors, and when he likes an idea he puts it in. (Peter Sellers, he says, contributed some wonderful bits of humor for Dr. Strangelove.)
In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing incidental music. In making 2001, he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in 2001 the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in 2001 that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter–‘Maybe it ought to sound like Jackie Mason,’ Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty-five special effects people, and a staff of scientific advisers. By the time this picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.) 
Kubrick wanted to learn from him whether any of the things he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.•
“We Will Go On, As Always, To Seek More Robust Illusions”
Times of great ignorance are petri dishes for all manner of ridiculous myths, but, as we’ve learned, so are times of great information. The more things can be explained, the more we want things beyond explanation. And maybe for some people, it’s a need rather than a want. The opening of “Music, Mind and Meaning,” Marvin Minsky’s 1981 Computer Music Journal essay:
Why do we like music? Our culture immerses us in it for hours each day, and everyone knows how it touches our emotions, but few think of how music touches other kinds of thought. It is astonishing how little curiosity we have about so pervasive an “environmental” influence. What might we discover if we were to study musical thinking?
Have we the tools for such work? Years ago, when science still feared meaning, the new field of research called “Artificial Intelligence” started to supply new ideas about “representation of knowledge” that I’ll use here. Are such ideas too alien for anything so subjective and irrational, aesthetic, and emotional as music? Not at all. I think the problems are the same and those distinctions wrongly drawn: only the surface of reason is rational. I don’t mean that understanding emotion is easy, only that understanding reason is probably harder. Our culture has a universal myth in which we see emotion as more complex and obscure than intellect. Indeed, emotion might be “deeper” in some sense of prior evolution, but this need not make it harder to understand; in fact, I think today we actually know much more about emotion than about reason.
Certainly we know a bit about the obvious processes of reason–the ways we organize and represent ideas we get. But whence come those ideas that so conveniently fill these envelopes of order? A poverty of language shows how little this concerns us: we “get” ideas; they “come” to us; we are “reminded of” them. I think this shows that ideas come from processes obscured from us and with which our surface thoughts are almost uninvolved. Instead, we are entranced with our emotions, which are so easily observed in others and ourselves. Perhaps the myth persists because emotions, by their nature, draw attention, while the processes of reason (much more intricate and delicate) must be private and work best alone.
The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry.
Much of what we now know of the mind emerged in this century from other subjects once considered just as personal and inaccessible but which were explored, for example, by Freud in his work on adults’ dreams and jokes, and by Piaget in his work on children’s thought and play. Why did such work have to wait for modern times? Before that, children seemed too childish and humor much too humorous for science to take them seriously.
Why do we like music? We all are reluctant, with regard to music and art, to examine our sources of pleasure or strength. In part we fear success itself– we fear that understanding might spoil enjoyment. Rightly so: art often loses power when its psychological roots are exposed. No matter; when this happens we will go on, as always, to seek more robust illusions!•
“Most People Think Computers Will Never Be Able To Think”
Here’s the opening of a 1982 AI Magazine piece by MIT cognitive scientist Marvin Minsky, which considers the possibility of computers being able to think:
Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what’s happening.
Indeed, when computers first appeared, most of their designers intended them only to do huge, mindless computations. That’s why the things were called “computers.” Yet even then, a few pioneers — especially Alan Turing — envisioned what’s now called “Artificial Intelligence,” or “AI.” They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.
Today, with robots everywhere in industry and movie films, most people think AI has gone much further than it has. Yet still, “computer experts” say machines will never really think. If so, how could they be so smart, and yet so dumb?•
“Using This Instrument, You Can ‘Work’ In Another Room, In Another City, In Another Country, Or On Another Planet”
The opening of “Telepresence,” Marvin Minsky’s 1980 Omni think piece which suggested we should bet our future on a remote-controlled economy:
You don a comfortable jacket lined with sensors and muscle-like motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dexterous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can ‘work’ in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.
The crude ‘robotic’ machines of today can do little of this. By building new kinds of versatile, remote‑controlled mechanical hands, however, we might solve critical problems of energy, health, productivity, and environmental quality, and we would create new industries. It might take 10 to 20 years and might cost $1 billion—less than the cost of a single urban tunnel or nuclear power reactor or the development of a new model of automobile.•
Scott Kelly, who’s nearing the end of a one-year stint aboard the International Space Station, just conducted an Ask Me Anything at Reddit. Because the astronaut and his inquisitors are human, many of the questions had to do with urine, food and sleep. A few exchanges follow.
What is the largest misconception about space/space travel that society holds onto?
I think a lot of people think that because we give the appearance that this is easy that it is easy. I don’t think people have an appreciation for the work that it takes to pull these missions off, like humans living on the space station continuously for 15 years. It is a huge army of hard working people to make it happen.
During a spacewalk what does it feel like having nothing but a suit (albeit a rather sophisticated one) between you and space?
It is a little bit surreal to know that you are in your own little spaceship and a few inches from you is instant death.
Upon completing your year in space, if the offer was on the table, would you do a two-year space mission in the future? And why? Would it depend on the mission (Moon, Mars, ISS again)?
It would definitely depend on the mission. If it was to the moon or Mars, yeah I would do it.
What’s the creepiest thing you’ve encountered while on the job?
Generally it has to do with the toilet. Recently I had to clean up a gallon-sized ball of urine mixed with acid.
Why do you always have your arms folded?
Your arms don’t hang by your side in space like they do on Earth because there is no gravity. It feels awkward to have them floating in front of me. It is just more comfortable to have them folded. I don’t even have them floating in my sleep, I put them in my sleeping bag.
Could you tell us something unusual about being in space that many people don’t think about?
The calluses on your feet in space will eventually fall off. So, the bottoms of your feet become very soft like newborn baby feet. But the top of my feet develop rough alligator skin because I use the top of my feet to get around here on space station when using foot rails.
What’s it like to sleep in 0G? It must be great for the back. Does the humming of the machinery in the station affect your sleep at all?
Sleeping here in space is harder than on a bed because the sleep position is the same position throughout the day. You don’t ever get that sense of gratifying relaxation here that you do on Earth after a long day at work. Yes, there are humming noises on station that affect my sleep, so I wear ear plugs to bed.
Does the ISS have any particular smell?
Smells vary depending on what segment you are in. Sometimes it has an antiseptic smell. Sometimes it has an odor that smells like garbage. But the smell of space when you open the hatch smells like burning metal to me.
What will be the first thing you eat once you’re back on Earth?
The first thing I will eat will probably be a piece of fruit (or a cucumber) the Russian nurse hands me as soon as I am pulled out of the space capsule and begin initial health checks.
What ONE thing will you forever do differently after your safe return home?
The technocratic office space has some of its roots in Fascist Italy, in the work of Italo Balbo, though Mussolini’s Air Minister wasn’t overly concerned like Google and Facebook are now with swallowing up employees’ lives by smothering them with amenities (though he did have coffee delivered to their desks via pneumatic tubes!). He just wanted the “trains” to run on time.
For a lot of workers in today’s Gig Economy, the office has disappeared, with software serving as the invisible middleman. The inverse of that reality is the sprawling technological wonderlands that are campuses from Apple to Zappos (which actually tried to reinvent downtown Las Vegas), with their amazing perks and services aimed at making managers and engineers feel not just at home but happy, incentivizing them to remain chained, if virtually, to their desks.
In a smart Aeon essay, Benjamin Naddaff-Hafrey traces the history of today’s all-inclusive technological “paradises,” isolationist attempts at utopias in a time of economic uncertainty and fear of terrorism, to yesteryear’s enclosed company towns and college campuses.
Google boasts more than 2 million job applicants a year. National media hailed its office plans as a ‘glass utopia’. There are hosts of articles for businesspeople on how to make their offices more like Google’s workplace. A 2015 CNNMoney survey of business students around the world showed Google as their most desired employer. Its campus is a cultural symbol of that desirability.
The specifics of Google’s proposed Mountain View office are unprecedented, but the scope of the campus is part of an emerging trend across the tech world. Alongside Google’s neighbourhood is a recent Facebook open office on their campus that, as the largest open office in the world, parallels the platform’s massive online community. Both offices seem modest next to the ambitious and fraught effort of Tony Hsieh, CEO of the online fashion retailer Zappos, to revitalise the downtown Las Vegas area around Zappos’ office in the old City Hall.
Such offices symbolise not just the future of work in the public mind, but also a new, utopian age with aspirations beyond the workplace. The dream is a place at once comfortable and entrepreneurial, where personal growth aligns with profit growth, and where work looks like play.
Yet though these tech campuses seem unprecedented, they echo movements of the past. In an era of civic wariness and economic fragility, the ‘total’ office heralds the rise of a new technocracy. In a time when terrorism from abroad provokes our fears, this heavily-planned workplace harks back to the isolationist values of the academic campus and even the social planning of the company town. As physical offices, they’re exceptional places to work – but while we increasingly uphold these places as utopic models for community, we make questionable assumptions about the best version of our shared life and values.•
Often the other side of extreme beauty is something too horrible to look at. One of the abiding memories of my childhood is seeing a brief clip of 73-year-old Karl Wallenda plunging to his death one windy day in San Juan. Why was that old man up on a wire? How did he even get to that age behaving in such a way?
French daredevil Tancrède Melet didn’t reach senior status, having earlier this month, at age 32, suffered a fatal fall from a hot-air balloon. Like Philippe Petit, he thought himself more philosopher and artist than extreme athlete, though intellectualizing didn’t soften his crash landing. An erstwhile engineer, he climbed to the sky to escape the air-conditioned nightmare, and he managed that feat, if only for a short while.
A thoughtful Economist obituary celebrates the audacity that abbreviated Melet’s life, which is one way to look at it. An excerpt:
Essentially he saw himself as an artist of the void, weaving together base-jumping, acrobatics and highlining to make hair-raising theatre among the peaks. Love of wild mélanges had been encouraged by his parents, who took him out of school when he was bullied for a stammer and, instead, let him range over drawing, music, gymnastics and the circus. Though for four years he slaved as a software engineer, he dreamed of recovering that freedom.
“One beautiful day” he threw up the job, bought a van, and took to the roads of France to climb and walk the slackwire. In the Verdon gorges of the Basses-Alpes he fell in with a fellow enthusiast, Julien Millot, an engineer of the sort who could fix firm anchors among snow-covered rocks for lines that spanned crevasses; with him he formed a 20-strong team, the Flying Frenchies, composed of climbers, cooks, musicians, technicians and clowns. These kindred spirits gave him confidence to push ever farther out into empty space.
Many thought him crazy. That was unfair. He respected the rules of physics, and made sure his gear was safe. When he died, by holding on too long to the rope of a hot-air balloon that shot up too fast, he had been on the firm, dull ground, getting ready. It looked like another devil-prompted connerie to push the limits of free flight, but this time there was no design in it. He was just taken completely by surprise, as he had hoped he might be all along.•
ANN ARBOR, Mich.–The brain of a dog was transferred to a man’s skull at University Hospital here to-day. W.A. Smith of Kalamazoo had been suffering from abscess on the brain, and in a last effort to save his life this remarkable operation was performed.
Opening his skull, the surgeons removed the diseased part of his brain, and in its place substituted the brain of a dog.
Smith was resting comfortably to-night, and the surgeons say he has a good chance to recover.•
It’s perplexing that video games aren’t used to teach children history and science, though the economics aren’t easy. A blockbuster game on par with today’s best offerings can cost hundreds of millions to develop and design, and that’s a steep price without knowing if such software would be welcomed into classrooms.
In addition to cost, there’s always been a prejudice against learning devices because they seem to reduce students to just more machines. That’s not altogether false if you consider that B.F. Skinner saw pupils as “programmable.” In an Atlantic article, Jacek Krywko looks at the latest attempts at making mind-improving machines, which will not only teach language but also “monitor things like joy, sadness, boredom, and confusion.” Such robot social intelligence is thought to be the key difference: Don’t try to make the students more like machines but the machines more like the students.
A passage about Skinner’s failed attempts in the 1950s at making education more robotic:
His new device taught by showing students questions one at a time, with the idea that the user would be rewarded for each right answer.
This time, there was no “cultural inertia.” Teaching machines flooded the market, and backlash soon followed. Kurt Vonnegut called the machines “playthings” and argued that they couldn’t prepare a kid for “one-millionth of what is going to hit him in the teeth, ready or not.” Fortune ran a story headlined “Can People Be Taught Like Pigeons?” By the end of the ‘60s, teaching machines had once again fallen out of favor. The concept briefly resurfaced again in the ‘80s, but the lack of quality educational software—and the public’s perception of mechanized teachers as something vaguely Orwellian—meant they once again failed to gain much traction.
But now, they’re back for another try.
Scientists in Germany, Turkey, the Netherlands, and the U.K. are currently working on language-teaching machines more complex than anything [Sidney] Pressey or Skinner dreamed up.•
Some structures survive because they’re made of sturdy material and some because of enduring symbolism. Chinese real-estate billionaire Zhang Xin doesn’t possess the hubris to believe her buildings, even those designed by Pritzker winners, will survive the Great Wall, but she’s hopeful about her nation’s future despite present-day economic turbulence. Zhang thinks the country must be more open politically and culturally, perhaps become a democratic state, and has invested heavily toward those ends by funding scholarships for students to be educated at top universities all over the world.
Zhang Xin, if China’s economy was an enterprise and you were running it, how would you make your company fit for the future?
No economy, no company, in fact no individual can develop its full potential today without embracing two fundamental trends — globalization and digitalization. They will dominate for quite some time to come.
What does this mean for China?
It means that the country needs to continue opening up and keep connecting. It needs to realize that the world has become one. The old concept of isolation, the idea that you can solve your problems on your own does not work anymore — neither in cultural, economic, nor political terms. Isolation means a lack of growth. I grew up in China at a time when the country was completely isolated. That era is over.
When a country prospers economically, there comes a time when its people start asking for greater political participation. Will this eventually happen in China, too?
I said before that the Chinese no longer crave so much for food and accommodation, but they do crave democracy. I stand by that. I don’t know which model China will follow. But the higher our standard of living, the higher our levels of education, the further people will look around. And we can see which level of openness other societies enjoy. We are no different — we too want more freedom. The question is: How much freedom will be allowed?
Today the silhouettes of your buildings dominate the skylines of Beijing and Shanghai, almost serving as a signature of modern China. Have you ever wondered how long these buildings will continue to stand tall and just how sustainable these structures that you have created together with your architects will be?
We have become so quick and effective in building things today. It would be easy to build another Pyramid of Giza or another Great Wall. But these buildings haven’t withstood the test of time because of their building quality. They stand tall because they have a symbolic value, they represent a culture. I’m afraid what we are building today will not have the same impact and sustainability of the architecture of 100, 500 or 1,000 years ago. The buildings of those days were miracles. We don’t perform such miracles today. So we should be a little more modest. For my part, I’ll be glad to show one of my buildings one day to my grandchildren and say: I’m proud of that.•
Strategy would not seem to be Donald Rumsfeld’s strong suit.
Despite that, the former Dubya Defense Secretary marshaled his forces and created an app for a strategic video game called “Churchill Solitaire,” based on an actual card game played incessantly during WWII by the British Prime Minister. If you’re picturing an ill-tempered, computer-illiterate senior barking orders into a Dictaphone, then you’ve already figured out Rumsfeld’s creative process. At least tens of thousands of people were not needlessly killed during the making of the app.
Mr. Rumsfeld can’t code. He doesn’t much even use a computer. But he guided his young digitally minded associates who assembled the videogame with the same method he used to rule the Pentagon—a flurry of memos called snowflakes.
As a result, “Churchill Solitaire” is likely the only videogame developed by an 83-year-old man using a Dictaphone to record memos for the programmers.
At the Pentagon, Mr. Rumsfeld was known for not mincing words with his memos. Age hasn’t mellowed him.
“We need to do a better job on these later versions. They just get new glitches,” reads one note from Mr. Rumsfeld. “[W]e ought to find some way we can achieve steady improvement instead of simply making new glitches.”
Other notes were arguably more constructive, if still sharply worded.
“Instead of capturing history, it is getting a bit artsy,” he wrote in one snowflake in which he suggested ways to make the game better evoke Churchill—including scenes from World War II and quotes from the prime minister, changes that made it into the final game.•
Outside of North Korea, perhaps only Donald Trump is unconvinced of the treachery of Vladimir Putin, a capo with nuclear capabilities. When the Russian tyrant is someday gone from the kleptocracy, the evil he administered, both in plain sight and beneath the surface, will be tallied and described, and it will likely be even worse than feared. The body count won’t be Stalinesque, but the horrible intent will be similar.
His royal heinous is so awful that no one even looks twice at this point when the Kremlin is implicated in political assassination. We’ve crossed that threshold.
Today, a retired British High Court judge named Robert Owen published a 328-page report on the 2006 death in London of Alexander Litvinenko, a former agent of Russia’s Federal Security Service, the F.S.B. Nine years after Litvinenko went bald and wasted away in a London hospital bed, from poisoning with a rare radioactive isotope, Owen’s report found that there was “strong circumstantial evidence of Russian state responsibility” and that the Russian president, Vladimir Putin, and the head of the F.S.B. likely sanctioned the murder.
It’s a salacious tale of revenge and espionage, straight out of a John le Carré novel: an F.S.B. man turned whistleblower meets in a posh London hotel with his former colleagues, who slip polonium 210 into his green tea. Investigators find a clump of debris laced with the radioactive stuff in a sink drainpipe a few floors above, near where one of the F.S.B. men was staying. The other suspected assassin gave Litvinenko’s wealthy benefactor, the banished oligarch Boris Berezovsky, a T-shirt that said, “nuclear death is knocking your door [sic].”
And yet, in Russia the report merited little more than a yawn.•
In 1993, three years before his death, a shaky Dr. Timothy Leary was hired by ABC to interview fellow drug user Billy Idol about the new album (remember those?) Cyberpunk. From his first act as an LSD salesman, Leary was intrigued by the intersection of pharmaceuticals and technology. After a stretch in prison, the guru reinvented himself as a full-time technologist, focusing specifically on software design and space exploration. One trip or another, I suppose.
Given the year this network special (which also featured the Ramones and Television) was broadcast, it’s no surprise the pair sneer at the marketing of the Generation X concept. Leary offers that cyberpunk means that “we have to be smarter than people who run the big machines.” Or maybe it means that we can purchase crap on eBay until the Uber we ordered arrives. Leary tells Idol that his music is “changing middle-class robot society.” Oh, Lord. Well, I’ll give the good doctor credit for saying that computers would rearrange traditional creative and economic roles.
This Q&A runs for roughly the first ten minutes, and while the footage may be of crappy quality, it’s a relic worth the effort.
Predictions are really difficult in a sport that features athletes hitting a round ball with a round bat, in which small differences in eyesight are so key and a couple of injuries or trades can make all the difference. Despite the statistical revolution, it’s hard to say what will happen. And the things that are pretty evident are known by every franchise. How to get an edge?
There’s no doubt the veritable data arms race between clubs, which Branch Rickey birthed during the Cold War, is becoming even more information-rich as technology and biotech play an increasingly bigger role. Brains as well as elbows are to be X-rayed. The deeper you dig, the more the returns may diminish, but perhaps you strike gold.
Fangraphs, creator of some of those awful 2015 projections, has an article by Adam Guttridge and David Ogren about next-level data collection, explaining what teams are doing to try to acquire significantly more info than fans or their fellow front offices. An excerpt:
Third-party companies are supplying a wealth of data which previously didn’t exist. The most publicized forms of that have been Trackman and Statcast. The key phrase here is data, as opposed to supplying new analysis. Data is the manna from which new analysis may come, and new types or sources of data expand the curve under which we can operate. That’s a fundamentally good thing.
There’s a wave of companies providing something different than Statcast and Trackman. While Statcast and Trackman are generally providing data that’s a more granular form of information which we already have — i.e. more detailed accounts of hitting, fielding, or pitching — others are aiming to provide information in spaces it hasn’t yet been available. A startup named deCervo is using brain-scan technology to map the relationship between cognition and athletic performance. Wearable-tech companies like Motus and Zepp aim to provide detailed, data-centric information in the form of bat speed, a pitcher’s arm path, and more. Biometric solutions like Kitman Labs are competing to capture and provide biometric data to teams as well.
The solutions which provide more granular data (Trackman, Statcast, and also ever-evolving developments from Baseball Info Solutions) are of perhaps unknown significance. They offer a massive volume of data, but it’s an open question as to whether it yet offers significant actionable information, whether it has value as a predictive/evaluative tool rather than merely a descriptive one.•
Larry Page didn’t start Google (now Alphabet) primarily to help you find the nearest car wash. It was always intended to be an AI company. So much the better that the very thing that supplied the search giant with its gazillions in ad revenue was also collecting information that could be used for the creation of machine intelligence. But even by the lofty standards of the original mission statement, Page has moved far further afield, ambitiously angling to create driverless cars, colonize space and even “cure death.” I’ll bet the under on that last one.
In a smart New York Times article, Conor Dougherty profiles the unorthodox CEO who eschews earnings calls but not robotics conferences, hoping to remake our world–and other ones. An excerpt:
Larry Page is not a typical chief executive, and in many of the most visible ways, he is not a C.E.O. at all. Corporate leaders tend to spend a good deal of time talking at investor conferences or introducing new products on auditorium stages. Mr. Page, who is 42, has not been on an earnings call since 2013, and the best way to find him at Google I/O — an annual gathering where the company unveils new products — is to ignore the main stage and follow the scrum of fans and autograph seekers who mob him in the moments he steps outside closed doors.
But just because he has faded from public view does not mean he is a recluse. He is a regular at robotics conferences and intellectual gatherings like TED. Scientists say he is a good bet to attend Google’s various academic gatherings, like Solve for X and Sci Foo Camp, where he can be found having casual conversations about technology or giving advice to entrepreneurs.
Mr. Page is hardly the first Silicon Valley chief with a case of intellectual wanderlust, but unlike most of his peers, he has invested far beyond his company’s core business and in many ways has made it a reflection of his personal fascinations.
He intends to push even further with Alphabet, a holding company that separates Google’s various cash-rich advertising businesses from the list of speculative projects like self-driving cars that capture the imagination but do not make much money. Alphabet companies and investments span disciplines from biotechnology to energy generation to space travel to artificial intelligence to urban planning.•
A new society of cranks has been started by a former Lieutenant in the German Army. His name is Wäthe. He is the leader of a new ‘ism,’ and as such has sailed from San Francisco to Honolulu. The ‘Fruitarians’ is the name of the new society he represents, and their belief–or rather notion–is, that modern civilization is full of vanities and strange notions, and greatly needs reforming. The members eat nothing but ripe fruit, eschew cooked food of any kind, and drink only water. They are to live in huts, bare of the comforts of civilization, and go naked. Ex-Lieut. Wäthe intends to buy a large tract of land in the Sandwich Islands, or perhaps, a small island outright, for the purpose of founding a colony.•
Donald Trump, a mix of Mussolini and QVC host, is loathsome to different people for different reasons.
Take good people for instance: They despise Trump because he’s a lying, egotistical, demeaning, manipulative, racist, xenophobic, sexist misery. There are coke dealers who are more honest. The man is a human waste product.
Now let’s consider terrible people like Glenn Beck and L. Brent Bozell III: They abhor Trump because he isn’t “legitimately conservative.” Well, that’s true, but it probably should be at least #453 on the list of reasons to not support him. That would be like Democrats saying that John Wilkes Booth wasn’t a good representative of their party because of his questionable stance on land taxes. Of course, you can’t expect much from Beck, a cynical salesman of gold-plated bunkers, or Bozell, who once referred to President Obama as looking like a “skinny ghetto crackhead.”
Those shitbags are two of the right-wingers enlisted for a National Review “Against Trump” cover story. In all fairness, some of the essayists do make a moral case as well against the hideous hotelier. From Mona Charen:
In December, Public Policy Polling found that 36 percent of Republican voters for whom choosing the candidate “most conservative on the issues” was the top priority said they supported Donald Trump. We can talk about whether he is a boor (“My fingers are long and beautiful, as, it has been well documented, are various other parts of my body”), a creep (“If Ivanka weren’t my daughter, perhaps I’d be dating her”), or a louse (he tried to bully an elderly woman, Vera Coking, out of her house in Atlantic City because it stood on a spot he wanted to use as a garage). But one thing about which there can be no debate is that Trump is no conservative—he’s simply playing one in the primaries. Call it unreality TV.
Put aside for a moment Trump’s countless past departures from conservative principle on defense, racial quotas, abortion, taxes, single-payer health care, and immigration. (That’s right: In 2012, he derided Mitt Romney for being too aggressive on the question, and he’s made extensive use of illegal-immigrant labor in his serially bankrupt businesses.) The man has demonstrated an emotional immaturity bordering on personality disorder, and it ought to disqualify him from being a mayor, to say nothing of a commander-in-chief.
Trump has made a career out of egotism, while conservatism implies a certain modesty about government. The two cannot mix.
Who, except a pitifully insecure person, needs constantly to insult and belittle others including, or perhaps especially, women? Where is the center of gravity in a man who in May denounces those who “needlessly provoke” Muslims and in December proposes that we (“temporarily”) close our borders to all non-resident Muslims? If you don’t like a Trump position, you need only wait a few months, or sometimes days. In September, he advised that we “let Russia fight ISIS.” In November, after the Paris massacre, he discovered that “we’re going to have to knock them out and knock them out hard.” A pinball is more predictable.
Is Trump a liberal? Who knows? He played one for decades — donating to liberal causes and politicians (including Al Sharpton) and inviting Hillary Clinton to his (third) wedding. Maybe it was all a game, but voters who care about conservative ideas and principles must ask whether his recent impersonation of a conservative is just another role he’s playing. When a con man swindles you, you can sue—as many embittered former Trump associates who thought themselves ill used have done. When you elect a con man, there’s no recourse.•
Losing the first leg of the Space Race ultimately proved beneficial to the U.S. The jolt of the Soviet Sputnik 1 success spurred the government to establish DARPA and fund the ARPANET, which, of course, eventually became the Internet.
Another profound consequence of the Cold War satellite race was the creation of Astrobiology, a field that couldn’t quite form until Sputnik’s brilliant blast provided it with its raison d’être. In a beautifully written Nautilus piece, Caleb Scharf traces the branch’s beginnings, which were propelled in the late 1950s by forward-thinking American scientist Joshua Lederberg, who, to paraphrase Leonard Cohen, saw the future and thought it might be murder. His work and warnings put our forays into the final frontier, as Scharf writes, in “bio-containment lockdown,” which was fortunate.
By the 1990s, the mission of astrobiology had morphed and become immense, and it will likely grow larger still as we press further across the universe.
Astronomy and biology have been circling each other with timid infatuation since the first time a human thought about the possibility of other worlds and other suns. But the melding of the two into the modern field of astrobiology really began on Oct. 4, 1957, when a 23-inch aluminum sphere called Sputnik 1 lofted into low Earth orbit from the desert steppe of the Kazakh Republic. Over the following weeks its gently beeping radio signal heralded a new and very uncertain world. Three months later it came tumbling back through the atmosphere, and humanity’s small evolutionary bump was set on a trajectory never before seen in 4 billion years of terrestrial history.
At the time of the ascent of Sputnik, a 32-year-old American called Joshua Lederberg was working in Australia as a visiting professor at the University of Melbourne. Born in 1925 to immigrant parents in New Jersey, Lederberg was a prodigy. Quick-witted, generous, and with an incredible ability to retain information, he blazed through high school and was enrolled at Columbia University by the time he was 15. Earning a degree in zoology and moving on to medical studies, his research interests diverted him to Yale. There, at age 21, he helped research the nascent field of microbial genetics, with work on bacterial gene transfer that would later earn him a share of the 1958 Nobel Prize.
Like the rest of the planet, Australia was transfixed by the Soviet launch; as much for the show of technological prowess as for the fact that a superpower was now also capable of easily lobbing thermonuclear warheads across continents. But, unlike the people around him, Lederberg’s thoughts were galvanized in a different direction. He immediately knew that another type of invisible wall had been breached, a wall that might be keeping even more deadly things at bay, as well as incredible scientific opportunities.
If humans were about to travel in space, we were also about to spread terrestrial organisms to other planets, and conceivably bring alien pathogens back to Earth. As Lederberg saw it, either we were poised to destroy indigenous life-forms across our solar system, or ourselves. Neither was an acceptable option.•
Prejudice in the justice system is acknowledged to be a bad thing, but is the slippery slope of predictive analytics much different?
Schools and social services have always strived to identify children who might be headed for trouble, and that’s a good thing, but the algorithms now being used in this area are treated as though they have flawless authority when flagging some minors as criminals-in-waiting. Couldn’t such a system lead police to pre-judge? Should we have to defend ourselves against presumed guilt before having done anything wrong?
When people talk about predictive analytics—whether it’s in reference to policing, banking, gas drilling, or whatever else—they’re often talking about identifying trends: using predictive tools to intuit how groups of people and/or objects might behave in the future. But that’s changing.
In a growing number of places, prediction is getting more personal. In Chicago, for example, there’s the “heat list”—a Chicago Police Department project designed to identify the Chicagoans most likely to be involved in a shooting. In some state prison systems, analysts are working on projects designed to identify which particular prisoners will re-offend. In 2014, Rochester, New York, rolled out its version of L.A. County’s DPP program—with the distinction that it’s run by cops, and spearheaded by IBM—which offered the public just enough information to cause concern.
“It’s worrisome,” says Andrew G. Ferguson, a law professor at the University of the District of Columbia who studies and writes about predictive policing. “You don’t want a cop arresting anyone when they haven’t done anything wrong. The idea that some of these programs are branching into child welfare systems—and that kids might get arrested when they haven’t done anything wrong—only raises more questions.”
Ferguson says the threat of arrest poses a problem in all the most widely reported predictive programs in the country. But he acknowledges that there are valid arguments underpinning all of them.•
It’s not all bad news for Track Palin in the wake of his domestic violence charges. The Yankees just offered him a contract.
After Sarah Palin’s asinine attempt yesterday to blame President Obama for her son’s recent arrest, it made me wonder if Obama was also responsible for Track’s alleged legal problems from a decade ago, before he enlisted. The Palin boy must have been suffering from Pre-Traumatic Stress Disorder (PTSD).
In the usual perplexing Palin fashion, she accused Obama of “not supporting the troops” as she was endorsing Donald Trump, the only national American politician in memory to actually demean the troops. How perfect.
More Trump news: In an excellent Gawker post by Will Kaufman, the writer dug through the Woody Guthrie Archives to find documents about the songwriter’s painful period in the 1950s living in a Beach Haven building owned by the Trump paterfamilias, Fred. It speaks to a real-estate empire built, in part, on racism.
‘Old Man Trump’s’ color line
Only a year into his Beach Haven residency, Guthrie – himself a veteran – was already lamenting the bigotry that pervaded his new, lily-white neighborhood, which he’d taken to calling “Bitch Havens.”
In his notebooks, he conjured up a scenario of smashing the color line to transform the Trump complex into a diverse cornucopia, with “a face of every bright color laffing and joshing in these old darkly weeperish empty shadowed windows.” He imagined himself calling out in Whitman-esque free verse to the “negro girl yonder that walks along against this headwind / holding onto her purse and her fur coat”:
I welcome you here to live. I welcome you and your man both here to Beach Haven to love in any ways you please and to have some kind of a decent place to get pregnant in and to have your kids raised up in. I’m yelling out my own welcome to you.
For Guthrie, Fred Trump came to personify all the viciousness of the racist codes that continued to put decent housing – both public and private – out of reach for so many of his fellow citizens:
I suppose
Old Man Trump knows
Just how much
Racial Hate
he stirred up
In the bloodpot of human hearts
When he drawed
That color line
Here at his
Eighteen hundred family project ….
Daniel Kahneman has argued that the robot revolution, if it comes, will arrive just in time to save China. Realizing such a transition, however, is tougher than promising it, as Foxconn has learned. Perhaps even more urgently requiring robotics in Asia is Japan, which has a graying, homogenous population. Who will do the work and care for the elderly in a country that isn’t based on immigration?
SUZU, Japan—It has been a decade since the train stopped running in this sleepy town at the tip of Japan’s Noto Peninsula, and bus routes have dwindled. The trend limits mobility options for the city’s shrinking rural population of 15,000, nearly half of whom are older than 65.
However, Suzu city officials and researchers may have a solution: vehicles that drive themselves.
For months, a white, self-driving Toyota Prius has been zipping along the city’s winding seaside roads. The test car attracts plenty of attention from the community. A bulky spinning sensor mounted to the roof helps the vehicle make critical decisions instead of relying on a researcher from Kanazawa University who is sitting in the driver’s seat.
The societal challenges that come with Suzu’s graying population are common throughout Japan, which leads the world in aging, with one in four people older than 65, compared with 15% in the U.S. and 8% world-wide. The trend is particularly prominent in the countryside, where the young often flee to big cities.