David Gelernter


Discussion of the ideas in David Gelernter’s new book, The Tides of Mind: Uncovering the Spectrum of Consciousness, which just landed in my mailbox, forms the crux of the latest episode of EconTalk with Russ Roberts. The computer scientist talks about the varieties of cognizance that make up our days, an idea he believes is lost in the unexamined acceptance of the binary labels “conscious” and “unconscious.” He thinks, for instance, that we operate at various levels of up- or down-spectrum consciousness, which permit us to function in different ways.

Clearly the hard problem is still just that, and the creativity that emerges from consciousness, often the development of new symbols or the successful comparison and combination of seemingly disparate thoughts, isn’t yet understood. Someday we’ll comprehend the chemical reactions that enable these mysterious and magnificent syntheses, but for now we can enjoy though not understand them. In one passage, the author wonderfully articulates the creative process, the parts that are knowable and those that remain inscrutable. The excerpt:

David Gelernter:

You also mention, which is important, the fact that you have a focused sense when you are working on lyrics or writing poetry, let’s say. And I’ve argued, on the other hand, that you need to be well down-spectrum in order to get creativity started. That is, you can’t be at your creative peak when you’ve just got up in the morning: your attention is focused and you are tapping your pencil; you want to get to work and start, you know, getting through the day’s business at a good clip. It’s not the mood in which one can make a lot of progress writing poetry. But that’s exactly why–that’s one of the important reasons why creativity is no picnic. It’s not easily achieved. I think it’s fair to say that everybody is creative in a certain way. In the sort of daily round of things we come up with new solutions to old problems routinely. But the kind of creativity that yields poetry that other people value, that yields original work in any area, is highly valued, is more highly valued than any other human project, because it’s rare. And it’s rare not because it requires a gigantic IQ (Intelligence Quotient), but because it requires a certain kind of balance, which is not something everybody can achieve. On the one hand–it’s not my observation; it’s a general observation–that creativity often hinges on inventing new analogies. When I think of a new resemblance and an analogy between a tree and a tent pole, which is a new analogy let’s say that nobody else has ever thought of before, I take the new analogy and can perhaps use it in a creative way. One of a million other, a billion, a trillion other possible analogies. Now, what makes me come up with a new analogy? What allows me to do that? Generally, it’s a lower-spectrum kind of thinking, a down-spectrum kind of thinking, in which I’m allowing my emotions to emerge. And, I’m allowing emotional similarity between two memories that are in other respects completely different. 
I’m maybe thinking as a graduate student in computing about an abstract problem involving communication in a network like the ARPANET (Advanced Research Projects Agency Network) or the Internet, in which bits get stuck. And I may suddenly find myself thinking about traffic on a late Friday afternoon in Grand Central Station in Manhattan. And the question is–and that leads to a new approach. And I write it up; and I prove a theorem, and I publish a paper. And there’s like a million other things in the sciences and in engineering technology. But the question is: Where does the analogy come from? And it turns out in many cases–not in every case–that there are emotional similarities. Emotion is a tremendously powerful summarizer, abstractor. We can look at a complex scene involving loads of people rushing back and forth because it’s Grand Central Station, and noisy announcements, hard to understand, over loudspeakers, and you’re hot and tired, and lots of advertisements, and colorful clothing, and a million other things; and smells, and sounds, and–we can take all that or any kind of complex scene or situation, the scene out your window, the scene on the TV (television) when you turn on the news, or a million other things. And take all those complexities and boil them down to a single emotion: it makes me feel some way. Maybe it makes me happy. It’s not very usual to have an emotion as simple as that. But it might be. I see my kids romping in the backyard, and I just feel happy. Usually the emotion to which a complex scene has boiled down is more complex than that–is more nuanced. Doesn’t have a name. It’s not just that I’m happy or sad or excited. It’s a more nuanced, subtler emotion which is cooked up out of many bits and pieces of various emotions.
But the distinctive emotion, the distinctive feeling that makes me feel a certain way, the feeling that I get when I look at some scene can be used as a memory cue when I am in the right frame of mind. And that particular feeling–let’s say, Happiness 147–a particular subtle kind of happiness which is faintly shaded by doubts about the coming week and by serious questions I have about what I’m supposed to do tomorrow morning but which is encouraged by the fact that my son is coming home tonight and I’m looking forward to seeing him–so that’s Happiness 147. And it may be that when I look out at some scene and feel Happiness 147, that some other radically different scene that also made me feel that way comes to mind–looking out at that complex thing and I think of some abstract problem in network communications, or I think of a mathematics problem, or I think of what color chair we should get for the living room, or one of a million other things. Any number of things can be boiled down in principle, can be reduced, can be summarized or abstracted by this same emotion. My emotions are so powerful because the phrase, ‘That makes me feel like x,’ can apply to so many situations. So many different things give us a particular feeling. And that feeling can drive in a new analogy. And a new analogy can drive creativity. But the question is: Where does the new analogy come from? And it seems to come often from these emotional overlaps, from a special kind of remembering. And I can only do that kind of remembering when I am paying attention to my emotions. We tend to do our best to suppress emotions when we’re up-spectrum. We’re up-spectrum: We have jobs to do, we have work to do, we have tasks to complete; our minds are moving briskly along; we’re energetic. We generally don’t like indulging in emotions when we are energetic and perky and happy and we want to get stuff done. Emotions tend to bring thought to a halt, or at any rate to slow us down. 
It tends to be the case that as we move lower on the spectrum, we pay more attention to emotions. Emotions get a firmer grip on us. And when we are all the way at the bottom of the spectrum–when we are asleep and dreaming–it’s interesting that although we often think of dreaming as emotionally neutral except in the rare case of a nightmare or a euphoria dream, and neither of those happens very often–we think of dreams as being sort of gray and neutral. But if you read the biological literature and the sleep-lab literature, you’ll find that most dreams are strongly colored emotionally. And that’s what we would expect. They occur at the bottom of the spectrum. Life becomes more emotional, just as when you are tired you are more likely to lose your temper; you are more likely to lose your self-control–to be cranky, to yell at your kids, or something like that. We are less self-controlled, we are less self-disciplined; we give freer rein to our emotions as we move down spectrum. And that has a good side. It’s not good to yell at your kids. But as you allow your emotions to emerge, you are more likely to remember things that yield new analogies. You are more likely to be reminded in a fresh way of things that you hadn’t thought of together before.•


When someone asks if machines can someday become conscious, my first thought is always this: Well, if that’s what they choose.

I intend that answer somewhat, though not entirely, glibly. Machines will achieve superintelligence long before consciousness, so who knows if we or they execute their awakening, if it should ever occur. Similarly, I believe bioengineering will allow humans to achieve a heretofore unapproached IQ level. Not soon, but someday.

AI, it is often said by very brilliant people, is our conqueror waiting on the horizon, but I think it lies a distance beyond the farther limits of our current perception. When we do get close enough to see these machines, they may resemble us. They will be another version of ourselves, a new “human” resulting from a souped-up evolution of our own design. We will be the end of us.

Of course, I could be completely wrong. We’re likely to find out the answer, however, since computers and science are probably too decentralized for curiosity to be checked. 

David Gelernter watched AlphaGo’s recent smashing triumph and now fears AI more than he does women who work outside the home. In his WSJ essay “Machines That Will Think and Feel,” he argues that “superhuman robots” will become reality once scientists appreciate that emotion is as vital to their creation as rational thought. An excerpt:

AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake.

Suppose you had a fleet of AI software apps with IQs of 150 (and eventually 500 or 5,000) to help you manage life. You download them like other apps, and they spread out into your phones and computers—and walls, clothes, office, car, luggage—traveling within the dense computer network of the near future that is laid in by the yard, like thin cloth, everywhere.

AI apps will read your email and write responses, awaiting your nod to send them. They will escort your tax return to the IRS, monitor what is done and report back. They will murmur (from your collar, maybe) that the sidewalk is icier than it looks, a friend is approaching across the street, your gait is slightly odd—have you hurt your back? They will log on for you to 19 different systems using 19 different ridiculous passwords, rescuing you from today’s infuriating security protocols. They will answer your phone and tactfully pass on messages, adding any reminders that might help.

In a million small ways, next-generation AI apps will lessen the friction of modern life. Living without them will seem, in retrospect, like driving with no springs or shocks.

But we don’t have the vaguest idea what an IQ of 5,000 would mean.•


Edge asked dozens of scientists, theorists and journalists this question: “What Scientific Idea Is Ready for Retirement?” There are responses from Gary Marcus, Rodney Brooks, Kevin Kelly, etc. In his answer, David Gelernter continues to take aim at Singularitarians:

The Grand Analogy

Today computationalists and cognitive scientists—those researchers who see digital computing as a model for human thought and the mind—are nearly unanimous in believing the Grand Analogy and teaching it to their students. And whether you accept it or not, the analogy is a milestone of modern intellectual history. It partly explains why a solid majority of contemporary computationalists and cognitive scientists believe that eventually, you will be able to give your laptop a (real not simulated) mind by downloading and executing the right software app. Whereupon if you tell the machine, ‘imagine a rose,’ it will conjure one up in its mind, just as you do. Tell it to ‘recall an embarrassing moment’ and it will recall something and feel embarrassed, just as you might. In this view, embarrassed computers are just around the corner.

But no such software will ever exist, and the analogy is false and has slowed our progress in grasping the actual phenomenology of mind. We have barely begun to understand the mind from inside. But what’s wrong with this suggestive, provocative analogy? My first reason is old; the other three are new.

1. The software-computer system relates to the world in a fundamentally different way from the mind-brain system. Software moves easily among digital computers, but each human mind is (so far) wedded permanently to one brain. The relationship between software and the world at large is arbitrary, determined by the programmer; the relationship between mind and world is an expression of personality and human nature, and no one can re-arrange it.

There are computers without software, but no brains without minds. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me. Computers can be erased; minds cannot. Computers can be made to operate precisely as we choose; minds cannot. And so on. Everywhere we look we see fundamental differences.

2. The Grand Analogy presupposes that minds are machines, or virtual machines—but a mind has two equally-important functions, doing and being; a machine is only for doing. We build machines to act for us. Minds are different: yours might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted—or you might merely be conscious.

Emotions in particular are not actions, they are ways to be. And emotions—states of being—play an important part in the mind’s cognitive work. They allow you, for instance, to feel your way to a cognitive goal. (‘He walked to the window to recollect himself, and feel how he ought to behave.’ Jane Austen, Persuasion.) Thoughts contain information, but feelings (mild wistfulness, say, on a warm summer morning) contain none. Wistfulness is merely a way to be.

Until we understand how to make digital computers feel (or experience phenomenal consciousness), we have no business talking up a supposed analogy between mind:brain and software:computer.

(Those who note that computers-that-can-feel are incredible are sometimes told: “You assert that many billions of tiny, meaningless computer instructions, each unable to feel, could never create a system that feels. Yet neurons are also tiny, ‘meaningless’ and feel nothing–but a hundred billion of those yields a brain that does feel.” Which is irrelevant: 100 billion neurons yield a brain that supports a mind, but a hundred billion sand grains or used tires yield nothing. You need billions of the right article arranged in the right way to get feeling.)

3. The process of growing up is innate to the idea of human being. Social interactions and body structure change over time, and the two sets of changes are intimately connected. A toddler who can walk is treated differently from an infant who can’t. No robot could acquire a human-like mind unless it could grow and change physically, interacting with society as it did.

But even if we focus on static, snapshot minds, a human mind requires a human body. Bodily sensations create mind-states that cause physical changes that create further mind-changes. A feedback loop. You are embarrassed; you blush; feeling yourself blush, your embarrassment increases. Your blush deepens.

We don’t think with our brains only. We think with our brains and bodies together. We might build simulated bodies out of software—but simulated bodies can’t interact in human ways with human beings. And we must interact with other people to become thinking persons.

4. Software is inherently recursive; recursive structure is innate to the idea of software. The mind is not and cannot be recursive.

A recursive structure incorporates smaller versions of itself: an electronic circuit made of smaller circuits, an algebraic expression built of smaller expressions.

Software is a digital computer realized by another digital computer. (You can find plenty of definitions of digital computer.) ‘Realized by’ means made-real-by or embodied-by. The software you build is capable of exactly the same computations as the hardware on which it executes. Hardware is a digital computer realized by electronics (or some equivalent medium).

Suppose you design a digital computer; you embody it using electronics. So you’ve got an ordinary computer, with no software. Now you design another digital computer: an operating system, like Unix. Unix has a distinctive interface—and, ultimately, the exact same computing power as the machine it runs on. You run your new computer (Unix) on your hardware computer. Now you build a word processor (yet another dressed up digital computer), to run on Unix. And so on, ad infinitum. The same structure (a digital computer) keeps recurring. Software is inherently recursive.
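The layering Gelernter describes can be sketched in a few lines of Python. The classes below are illustrative toys, not real system components: each layer is itself just a program executed by the layer beneath it, so the same structure (an interpreter) recurs at every level.

```python
# A "machine" is anything with a run(program) method. Each layer below is
# realized by the layer beneath it -- the recursive structure the essay
# describes. All names here are hypothetical, for illustration only.

class Hardware:
    """Bottom layer: executes primitive instructions directly."""
    def run(self, program):
        out = []
        for op, arg in program:
            if op == "PRINT":
                out.append(arg)
        return out

class OS:
    """A computer realized by another computer: it translates its own
    instruction set into the hardware's and runs the result there."""
    def __init__(self, machine):
        self.machine = machine
    def run(self, program):
        lowered = [("PRINT", text) for verb, text in program if verb == "say"]
        return self.machine.run(lowered)

class WordProcessor:
    """Yet another dressed-up digital computer, realized on the OS."""
    def __init__(self, machine):
        self.machine = machine
    def run(self, document):
        return self.machine.run([("say", line) for line in document])

# Stack the layers: a word processor on an OS on hardware.
stack = WordProcessor(OS(Hardware()))
result = stack.run(["hello", "world"])  # → ["hello", "world"]
```

Each constructor call re-realizes a digital computer on the one below, and the stacking could continue indefinitely; Gelernter's point is that no analogous stacking is possible with minds.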

The mind is not and cannot be. You cannot ‘run’ another mind on yours, and a third mind on that, and a fourth atop the third.

In conclusion: much has been gained by mind science’s obsession with computing. Computation has been a useful lens to focus scientific and philosophic thinking on the essence of mind. The last generation has seen, for example, a much clearer view of the nature of consciousness. But we have always known ourselves poorly. We still do. Your mind is a room with a view, and we still know the view (objective reality) a lot better than the room (subjective reality). Today subjectivism is re-emerging among those who see through the Grand Analogy. Computers are fine, but it’s time to return to the mind itself, and stop pretending we have computers for brains; we’d be unfeeling, unconscious zombies if we had.


Moore’s Law won’t apply to anything–even integrated circuits–forever. And it doesn’t apply to many things at all. Growth has its spurts, but other things get in the way: entropy, priorities, politics, etc. So I think the near-term questions regarding machines aren’t the lofty ones about transhumanism but more practical considerations. You know, like a highly automated society creating new jobs and 3-D printers making the manufacturing of firearms uncontrollable and undetectable. In a Commentary broadside, David Gelernter, that brilliant and perplexing thinker, takes aim at the approach of today’s technologists and what he sees as their lack of commitment to humanism. An excerpt about Ray Kurzweil:

“The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the ‘singularity.’ After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in ‘transhumanism,’ the merging of men and machines. He believes human immortality is just around the corner. He works for Google.

Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.

Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.” (Thanks Browser.)


David Gelernter, a computer genius with perplexing, reductive politics, believes the next wave of our online interconnectedness will see streams largely replace searches, something that has happened already to a certain extent. From his new Wired article, “The End of the Web, Search and Computer As We Know It”:

“Today’s operating systems and browsers — and search models — become obsolete, because people no longer want to be connected to computers or ‘sites’ (they probably never did).

What people really want is to tune in to information. Since many millions of separate lifestreams will exist in the cybersphere soon, our basic software will be the stream-browser: like today’s browsers, but designed to add, subtract, and navigate streams.

Searching content in a time stream is a matter of stream algebra, which is easier than the algebra of space-based structures like today’s web. Add two timestreams and get a third (simply merge the AP news feed and my friend Freeman’s blog streams into time-order); and content search is a matter of stream subtraction (simply subtract all entries that don’t mention ‘cranberries’ to yield all the entries that do). The simple, practical features of stream algebra have one huge benefit: giving us made-to-order information.
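Gelernter's stream algebra is concrete enough to sketch. A minimal Python version, assuming a stream is just a time-ordered list of (timestamp, text) entries (all names and feeds below are hypothetical; the article describes the algebra abstractly, not any real API):

```python
import heapq
from datetime import datetime

# A stream entry is (timestamp, text); streams are kept sorted by time.

def add_streams(*streams):
    """Stream 'addition': merge time-ordered streams into one, in time order."""
    return list(heapq.merge(*streams, key=lambda entry: entry[0]))

def subtract(stream, keyword):
    """Content search as 'subtraction': drop entries not mentioning keyword."""
    return [e for e in stream if keyword.lower() in e[1].lower()]

# Two hypothetical lifestreams, echoing the article's AP-feed example.
ap_feed = [(datetime(2013, 2, 1, 9), "AP: markets open higher"),
           (datetime(2013, 2, 1, 12), "AP: cranberries harvest report")]
blog = [(datetime(2013, 2, 1, 10), "Freeman: thoughts on cranberries"),
        (datetime(2013, 2, 1, 11), "Freeman: weekend plans")]

merged = add_streams(ap_feed, blog)     # one stream, still in time order
hits = subtract(merged, "cranberries")  # only the entries that mention it
```

The made-to-order quality falls out of composition: any expression built from these two operators yields another stream, so a custom view is just `subtract(add_streams(...), ...)`.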

Every news source is a lifestream. Stream-browsers will help us tune in to the information we want by implementing a type of custom-coffee blender: We’re offered thousands of different stream ‘flavors,’ we choose the flavors we want, and the blender mixes our streams to order.

Every site’s content is liberated from the confines of space. It becomes part of a universal timestream. Instead of relying on Amazon the site to notify me if there’s a new Cynthia Ozick book or new books on the city of Florence, I can blend together several booksellers’ lifestreams and then apply my search since stream algebra allows any streams to be added (new and used books) and content (Florence, Ozick) to be subtracted.

E-commerce changes drastically. We shouldn’t have to work to find what’s new, yet the way the web is currently architected it’s no different logically than having to visit a thousand separate physical shops. The time-based worldstream lets us sit back instead and watch a single, customized fashion show across sites.”


A genius computer scientist who long ago predicted cloud computing, social networks and the current connectivity, David Gelernter was famously sent an explosive by the Unabomber, though his life accomplishments should render that bold headline a footnote. The Economist has an excellent short profile of the technologist. An excerpt:

“More than two decades ago, Dr Gelernter foresaw how computers would be woven into the fabric of everyday life. In his book Mirror Worlds, published in 1991, he accurately described websites, blogging, virtual reality, streaming video, tablet computers, e-books, search engines and internet telephony. More importantly, he anticipated the consequences all this would have on the nature of social interaction, describing distributed online communities that work just as Facebook and Twitter do today.

‘Mirror Worlds aren’t mere information services. They are places you can ‘stroll around’, meeting and electronically conversing with friends or random passers-by. If you find something you don’t like, post a note; you’ll soon discover whether anyone agrees with you,’ he wrote. ‘I can’t be personal friends with all the people who run my local world any longer, but via Mirror Worlds we can be impersonal friends. There will be freer, easier, more improvisational communications, more like neighbourhood chatting and less like typical mail and phone calls. Where someone is or when he is available won’t matter. Mirror Worlds will rub your nose in the big picture and society may be subtly but deeply different as a result.'”



In his 1998 book, Machine Beauty: Elegance And The Heart Of Technology, David Gelernter implored technologists to study art history and create more pleasing products. It would seem his hopes have been realized, though not because of any academic intervention. It’s simply because of the iPod and subsequent Apple products, which offered great external aesthetics and software to match. Competitors were forced to try to keep pace. An excerpt from Gelernter’s book:

“Great technology is beautiful technology. If we care about technology excellence, we are foolish not to train our young scientists and engineers in aesthetics, elegance, and beauty. The idea of such a thing happening is so far-fetched it’s funny — but, yes, good technology is terribly important to our modern economy and living standards and comfort levels, the ‘software crisis’ is real, we do get from our fancy computers a tiny fraction of the value they are capable of delivering…. We ought to start teaching Velázquez, Degas, and Matisse to young technologists right now on an emergency basis. Every technologist ought to study drawing, design, and art history…. Art education is no magic wand. But I can guarantee that such a course of action would make things better: our technology would improve, our technologists would improve, and we would never regret it.”


Evan R. Goldstein has a really interesting article in The Chronicle of Higher Education about computer-science genius, conservative polemicist, Jewish scholar, Yale professor, artist and Unabomber target David Gelernter. In one passage, Gelernter addresses his odd-duck assortment of ideas and interests:

“‘I’m a misfit,’ he said. ‘Most people fit in a groove and focus on one thing, but I cut across the grain of different areas.’ In conversation, the eclecticism of Gelernter’s mind is immediately apparent. An opinionated raconteur, he seamlessly transitions from literary criticism (‘Deconstructionists destroy texts’), to trends in the art world (‘Modern museums are devoted to diversity as opposed to greatness’), gender roles (‘Women mainly work because of male greed’), contemporary politics (‘Anti-Semitism in Europe is so intense that, I think, Hitler would have an easier time today than he did in 1933’), and earthier topics (‘I am obsessed with sex and sexuality as much as anyone I have ever met’).”
