Marvin Minsky


The humanities have had “physics envy” for quite a while, trying to turn literature and such into a science for some silly reasons rooted in insecurity. It’s probably less understood outside academic circles that AI also long hoped to be like physics, wrapping up staggering complexities into a few laws. Marvin Minsky wasn’t a believer in such tidiness. A Fold article collects thoughts about the recently deceased AI pioneer from some of his colleagues. One, MIT Professor Pattie Maes, recalls how he parried popular beliefs in the field. An excerpt:

Explorations of Consciousness and Longevity

Some of Marvin’s most fascinating work was around the idea of the Mind As Society and the nature of consciousness. Throughout this work, what stood out was his enormous respect for the human.

“When a person says ‘I’m not a machine’, they’re showing a lack of respect for people….because we are the greatest machine in the world.”

He was curious about the fuzzy ideas people hold about the nature of consciousness and marveled at how we can effectively navigate the world without the slightest idea of how we were doing it.

“The mystery of consciousness to me is not ‘Isn’t it wonderful that we’re conscious’, but it’s the opposite. Isn’t it wonderful that we can do things like talk and walk, and understand without having the slightest idea of how it works.”

He also focused on health and longevity, speaking of immortality as a perfectly reasonable goal. He thought deeply about everything from the priorities we should have as a species to body part replacement.

From Pattie Maes:

“Marvin was always a true original, out-of-the-box thinker. While he is of course widely recognized as one of the founders of the field of Artificial Intelligence (AI), his views were often at odds with the majority of the AI community.

For many decades, and even today, AI was plagued by “physics envy.” Researchers sought a few universal principles or mechanisms that could model or produce human-like intelligence. Marvin constantly reminded us that the real solution was likely to be a lot more complex. He described a myriad of different mechanisms that may be involved in producing intelligence in his books ‘The Society of Mind’ and ‘The Emotion Machine’ and emphasized the importance of giving computers large amounts of ‘common sense knowledge’, a problem few AI researchers, even today, have attempted to tackle.

I suspect that gradually the field will come to align with his views, recognizing that his views and writings have a timeless and deep quality.”•



In addition to yesterday’s trove of posts about the late, great Marvin Minsky, I want to refer you to a Backchannel remembrance of the AI pioneer by Steven Levy, the writer who had the good fortune to arrive on the scene at just the right moment in the personal-computer boom and the great talent to capture it. The journalist recalls Minsky’s wit and conversation almost as much as his contributions to tech. Just a long talk with the cognitive scientist was a perception-altering experience, even if his brilliance was intimidating. The opening:

There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening.

But maybe Marvin could imagine it. His imagination respected no borders.

Minsky died Sunday night, at 88. His body had been slowing down, but that mind had kept churning. He was more than a pioneering computer scientist — he was a guiding light for what intellect itself could do. He was also our Yoda. The entire computer community, which includes all of us, of course, is going to miss him. 

I first met him in 1982; I had written a story for Rolling Stone about young computer hackers, and it was optioned by Jane Fonda’s production company. I traveled to Boston with Fonda’s producer, Bruce Gilbert; and Susan Lyne, who had engineered my assignment to begin with. It was my first trip to MIT; my story had been about Stanford hackers.

I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement.•


Sadly, the legendary MIT cognitive scientist Marvin Minsky just died. From building a robotic tentacle arm nearly 50 years ago to consulting on 2001: A Space Odyssey, the AI expert–originator, really–thought as much as anyone could about smart machines during a lifetime. From Glenn Rifkin’s just-published New York Times obituary:

Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.

“Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history,” said Alan Kay, a computer scientist and a friend and colleague of Professor Minsky’s.•

The following are a collection of past posts about his life and work.

_______________________________

“Such A Future Cannot Be Realized Through Biology”


Reading Michael Graziano’s great essay about building a mechanical brain reminded me of Marvin Minsky’s 1994 Scientific American article, “Will Robots Inherit the Earth?” It foresees a future in which intelligence is driven by nanotechnology, not biology. Two excerpts follow.

· · · · · · · · · ·

Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains — using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives–with the option of immortality — and choose among other, unimagined capabilities as well.

In such a future, attaining wealth will not be a problem; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible–particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.

Such a future cannot be realized through biology. 

· · · · · · · · · ·

Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that won’t be constrained to work at the crawling pace of “real time.” The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our “mind-children” to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.

But could such beings really exist? Many thinkers firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they’ll always lack some vital ingredient. They call this essence by various names–like sentience, consciousness, spirit, or soul. Philosophers write entire books to prove that, because of this deficiency, machines can never feel or understand the sorts of things that people do. However, every proof in each of those books is flawed by assuming, in one way or another, the thing that it purports to prove–the existence of some magical spark that has no detectable properties.

I have no patience with such arguments.•

_________________________________

“A Century Ago, There Would Have Been No Way Even To Start Thinking About Making Smart Machines”

AI pioneer Marvin Minsky at MIT in ’68 showing his robotic arm, which was strong enough to lift an adult, gentle enough to hold a child.

Minsky discussing smart machines on Edge: 

Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, “Am I representing the problem in an unsuitable way?” or “Am I trying to use an unsuitable method?” However, another way is to ask, “How would I make a machine to solve that kind of problem?”

A century ago, there would have been no way even to start thinking about making smart machines. Today, though, there are lots of good ideas about this. The trouble is, almost no one has thought enough about how to put all those ideas together. That’s what I think about most of the time.•

________________________________

“People Have A Fuzzy Idea Of Consciousness”


Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nationwide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness in humans and machines.

________________________________

“The Brain Doesn’t Work In A Simple Way”


Marvin Minsky, visionary of robotic arms, thinking computers and major motion pictures, is interviewed by Ray Kurzweil. The topic, unsurprisingly: “Is the Singularity Near?”

________________________________

“Do Outstanding Minds Differ From Ordinary Minds In Any Special Way?”


Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:

CAN MACHINES BE CREATIVE?

We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since anything machines can do most people think can be explained.

To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, then no one ever could imagine how creativity works.

We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?

So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it is sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!

Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away, its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to better learn, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.

So why is genius so rare, if each has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive, where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.•

_______________________________

“Backgammon Is Now The First Board Or Card Game With, In Effect, A Machine World Champion”


For some reason, the editors of the New Yorker never ask me for advice. I don’t know what they’re thinking. I would tell them this if they did: Publish an e-book of the greatest technology journalism in the magazine’s history. Have one of your most tech-friendly writers compose an introduction and include Lillian Ross’ 1970 piece about the first home-video recorder, Malcolm Ross’ 1931 look inside Bell Labs, Anthony Hiss’ 1977 story about the personal computer, Hiss’ 1975 article about visiting Philip K. Dick in Los Angeles, and Jeremy Bernstein’s short 1965 piece and long 1966 one about Stanley Kubrick making 2001: A Space Odyssey.

Another inclusion could be “A.I.,” Bernstein’s 1981 profile of the great artificial-intelligence pioneer Marvin Minsky. (It’s gated, so you need a subscription to read it.) The opening:

In July of 1979, a computer program called BKG 9.8–the creation of Hans Berliner, a professor of computer science at Carnegie-Mellon University, in Pittsburgh–played the winner of the world backgammon championship in Monte Carlo. The program was run on a large computer at Carnegie-Mellon that was connected by satellite to a robot in Monte Carlo. The robot, named Gammonoid, had a visual-display backgammon board on its chest, which exhibited its moves and those of its opponent, Luigi Villa, of Italy, who by beating all his human challengers a short while before had won the right to play against the Gammonoid. The stakes were five thousand dollars, winner take all, and the computer won, seven games to one. It had been expected to lose. In a recent Scientific American article, Berliner wrote:

Not much was expected of the programmed robot…. Although the organizers had made Gammonoid the symbol of the tournament by putting a picture of it on their literature and little robot figures on the trophies, the players knew the existing microprocessors could not give them a good game. Why should the robot be any different?

This view was reinforced at the opening ceremonies in the Summer Sports Palace in Monaco. At one point the overhead lights dimmed, the orchestra began playing the theme of the film Star Wars, and a spotlight focused on an opening in the stage curtain through which Gammonoid was supposed to propel itself onto the stage. To my dismay the robot got entangled and its appearance was delayed for five minutes.

This was one of the few mistakes the robot made. Backgammon is now the first board or card game with, in effect, a machine world champion. Checkers, chess, go, and the rest will follow–and quite possibly soon. But what does that mean for us, for our sense of uniqueness and worth–especially as machines evolve whose output we can less distinguish from our own?•

________________________________

“Each One Of Us Already Has Experienced What It Is Like To Be Simulated By A Computer”


We know so little about the tools we depend on every day. When I was a child, I was surprised that no one expected me to learn how to build a TV even though I watched one. But, no, I was just expected to process the surface of the box’s form and function, not to understand the inner workings. Throughout life, we use analogies and signs and symbols to make sense of things we constantly consume but don’t truly understand. Our processing of these basics is not unlike a computer’s. Marvin Minsky wrote brilliantly on this topic in an afterword to a 1984 Vernor Vinge novel. An excerpt:

Let’s return to the question about how much a simulated life inside a world inside a machine could resemble our real life “out here.” My answer, as you know by now, is that it could be very much the same––since we, ourselves, already exist as processes imprisoned in machines inside machines! Our mental worlds are already filled with wondrous, magical, symbol–signs, which add to every thing we “see” its “meaning” and “significance.” In fact, all educated people have already learned how different are our mental worlds than the ‘real worlds’ that our scientists know.

Consider the table in your dining room; your conscious mind sees it as having familiar functions, forms, and purposes. A table is “a thing to put things on.” However, our science tells us that this is only in the mind; the only thing that’s “really there” is a society of countless molecules. That table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of force-fields that keep them from pursuing independent trajectories. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound––whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air––that is, of particles whose distances are so much less constrained.

And so––let’s face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!•

_________________________________

“The Book Is About Ways To Read Out The Contents Of A Person’s Brain”


In 1992, AI legend Marvin Minsky believed that by the year 2023 people would be able to download the contents of their brains and achieve “immortality.” That was probably too optimistic. He also thought such technology would only be possible for people who had great wealth. That was probably too pessimistic. From an interview that Otto Laske conducted with Minsky about his sci-fi novel, The Turing Option:

Otto Laske:

I hear you are writing a science fiction novel. Is that your first such work?

Marvin Minsky:

Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.

Otto Laske:

At what point in time is the novel situated?

Marvin Minsky:

It’s set in the year 2023.

Otto Laske: 

I may just be alive to experience it, then …

Marvin Minsky:

Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…

Otto Laske: 

How wonderful …

Marvin Minsky:

 … because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…

Otto Laske: 

 That’s a very American footnote…

Marvin Minsky:

Well, it’s also a very Darwinian concept.

Otto Laske: 

Yes, of course.

Marvin Minsky:

There isn’t room for every possible being in this finite universe, so, we have to be selective …

Otto Laske: 

 And who selects, or what is the selective mechanism?

Marvin Minsky:

Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …

Otto Laske: 

That’s worse than fighting, I think.•

___________________________________

“We Are On The Threshold Of An Era That Will Be Strongly Influenced, And Quite Possibly Dominated, By Intelligent Machines”


In the introduction to his 1960 paper, “Steps Toward Artificial Intelligence,” Marvin Minsky, who later served as a technical consultant for 2001: A Space Odyssey, succinctly described the present and future of computers:

A VISITOR to our planet might be puzzled about the role of computers in our technology. On the one hand, he would read and hear all about wonderful “mechanical brains” baffling their creators with prodigious intellectual performance. And he (or it) would be warned that these machines must be restrained, lest they overwhelm us by might, persuasion, or even by the revelation of truths too terrible to be borne. On the other hand, our visitor would find the machines being denounced on all sides for their slavish obedience, unimaginative literal interpretations, and incapacity for innovation or initiative; in short, for their inhuman dullness.

Our visitor might remain puzzled if he set out to find, and judge for himself, these monsters. For he would find only a few machines (mostly general-purpose computers, programmed for the moment to behave according to some specification) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of “artificial intelligence.”•

_________________________________

“He Is, In A Sense, Trying To Second-Guess The Future”


I posted a brief Jeremy Bernstein New Yorker piece about Stanley Kubrick that was penned in 1965 during the elongated production of 2001: A Space Odyssey. The following year the same writer turned out a much longer profile for the same magazine about the director and his sci-fi masterpiece. Among many other interesting facts, it mentions that MIT AI legend Marvin Minsky, who’s appeared on this blog many times, was a technical consultant for the film. An excerpt from “How About a Little Game?”:

By the time the film appears, early next year, Kubrick estimates that he and [Arthur C.] Clarke will have put in an average of four hours a day, six days a week, on the writing of the script. (This works out to about twenty-four hundred hours of writing for two hours and forty minutes of film.) Even during the actual shooting of the film, Kubrick spends every free moment reworking the scenario. He has an extra office set up in a blue trailer that was once Deborah Kerr’s dressing room, and when shooting is going on, he has it wheeled onto the set, to give him a certain amount of privacy for writing. He frequently gets ideas for dialogue from his actors, and when he likes an idea he puts it in. (Peter Sellers, he says, contributed some wonderful bits of humor for Dr. Strangelove.)

In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing incidental music. In making 2001, he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in 2001 the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in 2001 that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter–‘Maybe it ought to sound like Jackie Mason,’ Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty-five special effects people, and a staff of scientific advisers. By the time this picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.) Kubrick wanted to learn from him whether any of the things he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.•

_____________________________

“We Will Go On, As Always, To Seek More Robust Illusions”


Times of great ignorance are petri dishes for all manner of ridiculous myths, but, as we’ve learned, so are times of great information. The more things can be explained, the more we want things beyond explanation. And maybe for some people, it’s a need rather than a want. The opening of “Music, Mind and Meaning,” Marvin Minsky’s 1981 Computer Music Journal essay:

Why do we like music? Our culture immerses us in it for hours each day, and everyone knows how it touches our emotions, but few think of how music touches other kinds of thought. It is astonishing how little curiosity we have about so pervasive an “environmental” influence. What might we discover if we were to study musical thinking?

Have we the tools for such work? Years ago, when science still feared meaning, the new field of research called “Artificial Intelligence” started to supply new ideas about “representation of knowledge” that I’ll use here. Are such ideas too alien for anything so subjective and irrational, aesthetic, and emotional as music? Not at all. I think the problems are the same and those distinctions wrongly drawn: only the surface of reason is rational. I don’t mean that understanding emotion is easy, only that understanding reason is probably harder. Our culture has a universal myth in which we see emotion as more complex and obscure than intellect. Indeed, emotion might be “deeper” in some sense of prior evolution, but this need not make it harder to understand; in fact, I think today we actually know much more about emotion than about reason.

Certainly we know a bit about the obvious processes of reason–the ways we organize and represent ideas we get. But whence come those ideas that so conveniently fill these envelopes of order? A poverty of language shows how little this concerns us: we “get” ideas; they “come” to us; we are “reminded of” them. I think this shows that ideas come from processes obscured from us and with which our surface thoughts are almost uninvolved. Instead, we are entranced with our emotions, which are so easily observed in others and ourselves. Perhaps the myth persists because emotions, by their nature, draw attention, while the processes of reason (much more intricate and delicate) must be private and work best alone.

The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry.

Much of what we now know of the mind emerged in this century from other subjects once considered just as personal and inaccessible but which were explored, for example, by Freud in his work on adults’ dreams and jokes, and by Piaget in his work on children’s thought and play. Why did such work have to wait for modern times? Before that, children seemed too childish and humor much too humorous for science to take them seriously.

Why do we like music? We all are reluctant, with regard to music and art, to examine our sources of pleasure or strength. In part we fear success itself – we fear that understanding might spoil enjoyment. Rightly so: art often loses power when its psychological roots are exposed. No matter; when this happens we will go on, as always, to seek more robust illusions!•

________________________

“Most People Think Computers Will Never Be Able To Think”


Here’s the opening of a 1982 AI Magazine piece by MIT cognitive scientist Marvin Minsky, which considers whether computers will ever be able to think:

Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what’s happening.

Indeed, when computers first appeared, most of their designers intended them for nothing only to do huge, mindless computations. That’s why the things were called “computers”. Yet even then, a few pioneers — especially Alan Turing — envisioned what’s now called ‘Artificial Intelligence’ – or ‘AI.’ They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

Today, with robots everywhere in industry and movie films, most people think Al has gone much further than it has. Yet still, ‘computer experts’ say machines will never really think. If so, how could they be so smart, and yet so dumb?

Indeed, when computers first appeared, most of their designers intended them for nothing only to do huge, mindless computations. That’s why the things were called “computers.” Yet even then, a few pioneers –especially Alan Turing — envisioned what’s now called “Artificial Intelligence” – or “AI.” They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

Today, with robots everywhere in industry and movie films, most people think Al has gone much further than it has. Yet still, “computer experts” say machines will never really think. If so, how could they be so smart, and yet so dumb?•

___________________________

“Using This Instrument, You Can ‘Work’ In Another Room, In Another City, In Another Country, Or On Another Planet”

vrnasalead-thumb-550x388-19831

The opening of “Telepresence,” Marvin Minsky’s 1980 Omni think piece, which suggested we should bet our future on a remote-controlled economy:

You don a comfortable jacket lined with sensors and muscle-like motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dexterous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can ‘work’ in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.

The crude robotic machines of today can do little of this. By building new kinds of versatile, remote‑controlled mechanical hands, however, we might solve critical problems of energy, health, productivity, and environmental quality, and we would create new industries. It might take 10 to 20 years and might cost $1 billion—less than the cost of a single urban tunnel or nuclear power reactor or the development of a new model of automobile.•

Tags:

Theologians and technologists have their similarities, with both believing in a magical higher power of sorts and sometimes showing a lack of fondness for human beings. Bishop John Fisher argued in 1535 that a person is merely a “satchel full of dung.” Compared to that epithet, Marvin Minsky’s definition of us as “meat machines” is nearly a compliment.

Ari Schulman’s Washington Post opinion piece asks the question, “Do we love robots because we hate ourselves?” If that’s so, I would say humans have a higher degree of self-awareness than I’ve given us credit for. An excerpt:

Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual.

Most critics of heady AI predictions do not see this vision as remotely plausible. But lesser versions might be — and it’s important to ask why many find it so compelling, even if it doesn’t come to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal.

It’s not exactly a boutique idea, either.•

Tags: , ,

Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to utilize computing power to augment human intelligence and those who hope to create actual intelligent machines that have no particular stake in the condition of carbon-based life. 

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

minsky: We’re going to make machines intelligent. We are going to make them conscious!

engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•

Tags: ,

Excerpts follow from a pair of 1990s interviews with Artificial Intelligence pioneer Marvin Minsky. I wonder how much he’s changed his mind one way or another about AI as he enters his 88th year.

_____________________________

From Claudia Dreifus’ 1998 NYT article:

Question:

How do you define common sense?

Marvin Minsky:

Common sense is knowing maybe 30 or 50 million things about the world and having them represented so that when something happens, you can make analogies with others. If you have common sense, you don’t classify the things literally; you store them by what they are useful for or what they remind us of. For instance, I can see that suitcase (over there in a corner) as something to stand on to change a light bulb as opposed to something to carry things in.

Question:

Could you get machines to the point where they can deal with the intangibles of humanness?

Marvin Minsky:

It’s very tangible, what I’m talking about. For example, you can push something with a stick, but you can’t pull it. You can pull something with a string, but you can’t push it. That’s common sense. And no computer knows it. Right now, I’m writing a book, a sequel to The Society of Mind, and I am looking at some of this. What is pain? What is common sense? What is falling in love?

Question:

What is love?

Marvin Minsky:

Well, what are emotions? Emotions are big switches, and there are hundreds of these. . . . If you look at a book about the brain, the brain just looks like switches. . . . You can think of the brain as a big supermarket of goodies that you can use for different purposes. Falling in love is turning on some 20 or 30 of these and turning a lot of the others off. It’s some particular arrangement. To understand it, one has to get some theory of what the resources in the brain are, what kinds of arrangements are compatible and what happens when you turn several on and they get into conflict. Being angry is another collection of switches. In this book, I’m trying to give examples of how these things work.

Question:

In the 1968 Stanley Kubrick film 2001: A Space Odyssey, a computer named Hal developed a lethal jealousy of his space companion, a human astronaut. How far are we away from a jealous machine?

Marvin Minsky:

We could be five minutes from it, but it would be so stupid that we couldn’t tell. Though Hal is fiction, why shouldn’t he be jealous? There’s an argument between my friend John McCarthy and me because he thinks you could make smart machines that don’t have any humanlike emotions. But I think you’re going to have to go to great lengths to prevent them from having some acquisitiveness and the need to control things. Because to solve a problem, you have to have the resources and if there are limited resources . . .

Question:

Where were Stanley Kubrick and his co-author, Arthur C. Clarke, right with their 2001: A Space Odyssey predictions?

Marvin Minsky:

On just about everything except for the date. It’s quite a remarkable piece.

Question:

Do you believe the National Aeronautics and Space Administration wastes money by insisting on humans for space exploration?

Marvin Minsky:

It’s not that they waste money. It’s that they waste ALL the money.

Question:

If you were heading NASA, how would you run it?

Marvin Minsky:

I would have a space station, but it would be unmanned. And we would throw some robots up there that are not intelligent, but just controlled through teleoperators, and you could sort of feel what it’s doing. Then, we could build telescopes and all sorts of things and perhaps explore the moon and Mars by remote control. Nobody’s thought of much use for space. The clearest use is building enormous telescopes to see the rest of the universe.•

_____________________________

From Otto Laske’s 1991 AAAI Press interview:

Otto Laske:

I hear you are writing a science fiction novel. Is that your first such work?

Marvin Minsky:

Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.

Otto Laske:

At what point in time is the novel situated?

Marvin Minsky:

It’s set in the year 2023.

Otto Laske:

I may just be alive to experience it, then …

Marvin Minsky:

Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…

Otto Laske:

How wonderful …

Marvin Minsky:

 … because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…

Otto Laske:

That’s a very American footnote …

Marvin Minsky:

Well, it’s also a very Darwinian concept.

Otto Laske:

Yes, of course.

Marvin Minsky:

There isn’t room for every possible being in this finite universe, so, we have to be selective …

Otto Laske:

And who selects, or what is the selective mechanism?

Marvin Minsky:

Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …

Otto Laske:

That’s worse than fighting, I think.•

Tags: , ,

Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nationwide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness in humans and machines.

Tags: ,

Marvin Minsky, visionary of robotic arms, thinking computers and major motion pictures, is interviewed by Ray Kurzweil. The topic, unsurprisingly: “Is the Singularity Near?”

Tags: ,

Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:

CAN MACHINES BE CREATIVE?

We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since most people think that anything machines can do can be explained.

To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, no one ever could imagine how creativity works.

We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?

So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it is sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!

Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away; its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to learn better, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.

So why is genius so rare, if each of us has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.•

Tags:

For some reason, the editors of the New Yorker never ask me for advice. I don’t know what they’re thinking. I would tell them this if they did: Publish an e-book of the greatest technology journalism in the magazine’s history. Have one of your most tech-friendly writers compose an introduction and include Lillian Ross’ 1970 piece about the first home-video recorder, Malcolm Ross’ 1931 look inside Bell Labs, Anthony Hiss’ 1977 story about the personal computer, Hiss’ 1975 article about visiting Philip K. Dick in Los Angeles, and Jeremy Bernstein’s short 1965 piece and long 1966 one about Stanley Kubrick making 2001: A Space Odyssey.

Another inclusion could be A.I., Bernstein’s 1981 profile of the great artificial-intelligence pioneer Marvin Minsky. (It’s gated, so you need a subscription to read it.) The opening:

In July of 1979, a computer program called BKG 9.8–the creation of Hans Berliner, a professor of computer science at Carnegie-Mellon University, in Pittsburgh–played the winner of the world backgammon championship in Monte Carlo. The program was run on a large computer at Carnegie-Mellon that was connected by satellite to a robot in Monte Carlo. The robot, named Gammonoid, had a visual-display backgammon board on its chest, which exhibited its moves and those of its opponent, Luigi Villa, of Italy, who by beating all his human challengers a short while before had won the right to play against the Gammonoid. The stakes were five thousand dollars, winner take all, and the computer won, seven games to one. It had been expected to lose. In a recent Scientific American article, Berliner wrote:

Not much was expected of the programmed robot…. Although the organizers had made Gammonoid the symbol of the tournament by putting a picture of it on their literature and little robot figures on the trophies, the players knew the existing microprocessors could not give them a good game. Why should the robot be any different?

This view was reinforced at the opening ceremonies in the Summer Sports Palace in Monaco. At one point the overhead lights dimmed, the orchestra began playing the theme of the film Star Wars, and a spotlight focused on an opening in the stage curtain through which Gammonoid was supposed to propel itself onto the stage. To my dismay the robot got entangled and its appearance was delayed for five minutes.

This was one of the few mistakes the robot made. Backgammon is now the first board or card game with, in effect, a machine world champion. Checkers, chess, go, and the rest will follow–and quite possibly soon. But what does that mean for us, for our sense of uniqueness and worth–especially as machines evolve whose output we can less distinguish from our own?•

Tags: , , ,

We know so little about the tools we depend on every day. When I was a child, I was surprised that no one expected me to learn how to build a TV even though I watched one. But, no, I was just expected to process the surface of the box’s form and function, not to understand its inner workings. Throughout life, we use analogies and signs and symbols to make sense of things we constantly consume but don’t truly understand. Our processing of these basics is not unlike a computer’s. Marvin Minsky wrote brilliantly on this topic in an afterword to a 1984 Vernor Vinge novel. An excerpt:

“Let’s return to the question about how much a simulated life inside a world inside a machine could resemble our real life ‘out here.’ My answer, as you know by now, is that it could be very much the same––since we, ourselves, already exist as processes imprisoned in machines inside machines! Our mental worlds are already filled with wondrous, magical, symbol–signs, which add to every thing we ‘see’ its ‘meaning’ and ‘significance.’ In fact, all educated people have already learned how different are our mental worlds than the ‘real worlds’ that our scientists know.

Consider the table in your dining room; your conscious mind sees it as having familiar functions, forms, and purposes. A table is ‘a thing to put things on.’ However, our science tells us that this is only in the mind; the only thing that’s ‘really there’ is a society of countless molecules. That table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of force-fields that keep them from pursuing independent trajectories. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound––whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air––that is, of particles whose distances are so much less constrained.

And so––let’s face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!”

Tags:

In 1992, AI legend Marvin Minsky believed that by the year 2023 people would be able to download the contents of their brains and achieve “immortality.” That was probably too optimistic. He also thought such technology would only be possible for people who had great wealth. That was probably too pessimistic. From an interview that Otto Laske conducted with Minsky about his sci-fi novel, The Turing Option:

Otto Laske:

I hear you are writing a science fiction novel. Is that your first such work?

Marvin Minsky:

Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.

Otto Laske:

At what point in time is the novel situated?

Marvin Minsky:

It’s set in the year 2023.

Otto Laske: 

I may just be alive to experience it, then …

Marvin Minsky: 

Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…

Otto Laske: 

How wonderful …

Marvin Minsky:

 … because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…

Otto Laske: 

 That’s a very American footnote …

Marvin Minsky:

Well, it’s also a very Darwinian concept.

Otto Laske: 

Yes, of course.

Marvin Minsky:

There isn’t room for every possible being in this finite universe, so, we have to be selective …

Otto Laske: 

 And who selects, or what is the selective mechanism?

Marvin Minsky: 

Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …

Otto Laske:  

That’s worse than fighting, I think.”

Tags: ,

In the introduction to his 1960 paper, “Steps Toward Artificial Intelligence,” Marvin Minsky, who later served as a technical consultant for 2001: A Space Odyssey, succinctly described the present and future of computers:

“A VISITOR to our planet might be puzzled about the role of computers in our technology. On the one hand, he would read and hear all about wonderful ‘mechanical brains’ baffling their creators with prodigious intellectual performance. And he (or it) would be warned that these machines must be restrained, lest they overwhelm us by might, persuasion, or even by the revelation of truths too terrible to be borne. On the other hand, our visitor would find the machines being denounced on all sides for their slavish obedience, unimaginative literal interpretations, and incapacity for innovation or initiative; in short, for their inhuman dullness.

Our visitor might remain puzzled if he set out to find, and judge for himself, these monsters. For he would find only a few machines (mostly general-purpose computers, programmed for the moment to behave according to some specification) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of ‘artificial intelligence.'”

Tags:

The opening of “Will Robots Inherit the Earth?” Marvin Minsky’s 1994 Scientific American article about the end of carbon’s dominance:

“Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains — using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives–with the option of immortality — and choose among other, unimagined capabilities as well.

In such a future, attaining wealth will not be a problem; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible–particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.

Such a future cannot be realized through biology.”

Tags:

I posted a brief Jeremy Bernstein New Yorker piece about Stanley Kubrick that was penned in 1965 during the protracted production of 2001: A Space Odyssey. The following year, the same writer turned out a much longer profile for the same magazine about the director and his sci-fi masterpiece. Among many other interesting facts, it mentions that MIT AI legend Marvin Minsky, who’s appeared on this blog many times, was a technical consultant for the film. An excerpt from “How About a Little Game?” (subscription required):

By the time the film appears, early next year, Kubrick estimates that he and [Arthur C.] Clarke will have put in an average of four hours a day, six days a week, on the writing of the script. (This works out to about twenty-four hundred hours of writing for two hours and forty minutes of film.) Even during the actual shooting of the film, Kubrick spends every free moment reworking the scenario. He has an extra office set up in a blue trailer that was once Deborah Kerr’s dressing room, and when shooting is going on, he has it wheeled onto the set, to give him a certain amount of privacy for writing. He frequently gets ideas for dialogue from his actors, and when he likes an idea he puts it in. (Peter Sellers, he says, contributed some wonderful bits of humor for Dr. Strangelove.)

In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing incidental music. In making 2001, he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in 2001 the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in 2001 that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter–‘Maybe it ought to sound like Jackie Mason,’ Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty-five special effects people, and a staff of scientific advisers. By the time this picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.) 
Kubrick wanted to learn from him whether any of the things he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.•


“We are entranced with our emotions, which are so easily observed in others and ourselves.” (Image by Kantele.)

Times of great ignorance are petri dishes for all manner of ridiculous myths, but, as we’ve learned, so are times of great information. The more things can be explained, the more we want things beyond explanation. And maybe for some people, it’s a need rather than a want. The opening of “Music, Mind and Meaning,” Marvin Minsky’s 1981 Computer Music Journal essay:

“Why do we like music? Our culture immerses us in it for hours each day, and everyone knows how it touches our emotions, but few think of how music touches other kinds of thought. It is astonishing how little curiosity we have about so pervasive an ‘environmental’ influence. What might we discover if we were to study musical thinking?

Have we the tools for such work? Years ago, when science still feared meaning, the new field of research called ‘Artificial Intelligence’ started to supply new ideas about ‘representation of knowledge’ that I’ll use here. Are such ideas too alien for anything so subjective and irrational, aesthetic, and emotional as music? Not at all. I think the problems are the same and those distinctions wrongly drawn: only the surface of reason is rational. I don’t mean that understanding emotion is easy, only that understanding reason is probably harder. Our culture has a universal myth in which we see emotion as more complex and obscure than intellect. Indeed, emotion might be ‘deeper’ in some sense of prior evolution, but this need not make it harder to understand; in fact, I think today we actually know much more about emotion than about reason.

Certainly we know a bit about the obvious processes of reason–the ways we organize and represent ideas we get. But whence come those ideas that so conveniently fill these envelopes of order? A poverty of language shows how little this concerns us: we ‘get’ ideas; they ‘come’ to us; we are ‘re-minded of’ them. I think this shows that ideas come from processes obscured from us and with which our surface thoughts are almost uninvolved. Instead, we are entranced with our emotions, which are so easily observed in others and ourselves. Perhaps the myth persists because emotions, by their nature, draw attention, while the processes of reason (much more intricate and delicate) must be private and work best alone.

The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry.

Much of what we now know of the mind emerged in this century from other subjects once considered just as personal and inaccessible but which were explored, for example, by Freud in his work on adults’ dreams and jokes, and by Piaget in his work on children’s thought and play. Why did such work have to wait for modern times? Before that, children seemed too childish and humor much too humorous for science to take them seriously.

Why do we like music? We all are reluctant, with regard to music and art, to examine our sources of pleasure or strength. In part we fear success itself; we fear that understanding might spoil enjoyment. Rightly so: art often loses power when its psychological roots are exposed. No matter; when this happens we will go on, as always, to seek more robust illusions!”


I recently posted a classic article about telepresence by MIT’s Marvin Minsky. Here’s the opening of a 1982 AI Magazine piece by the cognitive scientist, which considers the possibility of computers being able to think:

“Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what’s happening.

Indeed, when computers first appeared, most of their designers intended them only to do huge, mindless computations. That’s why the things were called ‘computers.’ Yet even then, a few pioneers — especially Alan Turing — envisioned what’s now called ‘Artificial Intelligence,’ or ‘AI.’ They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

Today, with robots everywhere in industry and movie films, most people think AI has gone much further than it has. Yet still, ‘computer experts’ say machines will never really think. If so, how could they be so smart, and yet so dumb?”


The opening of “Telepresence,” Marvin Minsky’s 1980 Omni think-piece which suggested we should bet our future on a remote-controlled economy:

“You don a comfortable jacket lined with sensors and muscle-like motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dexterous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can ‘work’ in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.

The crude robotic machines of today can do little of this. By building new kinds of versatile, remote‑controlled mechanical hands, however, we might solve critical problems of energy, health, productivity, and environmental quality, and we would create new industries. It might take 10 to 20 years and might cost $1 billion—less than the cost of a single urban tunnel or nuclear power reactor or the development of a new model of automobile.”


AI pioneer Marvin Minsky at MIT in ’68 showing his robotic arm, which was strong enough to lift an adult, gentle enough to hold a child.

Minsky discussing smart machines on Edge: “Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, ‘Am I representing the problem in an unsuitable way,’ or ‘Am I trying to use an unsuitable method?’ However, another way is to ask, ‘How would I make a machine to solve that kind of problem?’

A century ago, there would have been no way even to start thinking about making smart machines. Today, though, there are lots of good ideas about this. The trouble is, almost no one has thought enough about how to put all those ideas together. That’s what I think about most of the time.”
