James Gleick

You are currently browsing articles tagged James Gleick.

“What we dream we become,” wrote Henry Miller, offering a curse as much as a promise, wary as he always was of science and technology and America.

Nobody in the U.S. has ever dreamed more than Hugo Gernsback, immigrant technological tinkerer and peddler of science fiction, and he was sure that the most outré visions would come to pass: instant newspapers printed in the home, TV eyeglasses, teleportation, etc. Some of these amazing stories proved to be true, and others…perhaps someday? In Gernsback’s view, what separated fiction and fact was merely time.

From James Gleick’s wonderful New York Review of Books piece about The Perversity of Things: Hugo Gernsback on Media, Tinkering, and Scientifiction:

Born Hugo Gernsbacher, the son of a wine merchant in a Luxembourg suburb before electrification, he started tinkering as a child with electric bell-ringers. When he emigrated to New York City at the age of nineteen, in 1904, he carried in his baggage a design for a new kind of electrolytic battery. A year later, styling himself in Yankee fashion “Huck Gernsback,” he published his first article in Scientific American, a design for a new kind of electric interrupter. That same year he started his first business venture, the Electro Importing Company, selling parts and gadgets and a “Telimco” radio set by mail order to a nascent market of hobbyists and soon claiming to be “the largest makers of experimental Wireless material in the world.”

His mail-order catalogue of novelties and vacuum tubes soon morphed into a magazine, printed on the same cheap paper but now titled Modern Electrics. It included articles and editorials, like “The Wireless Joker” (it seems pranksters had fun with the new communications channel) and “Signaling to Mars.” It was hugely successful, and Gernsback was soon a man about town, wearing a silk hat, dining at Delmonico’s and perusing its wine list with a monocle.

Public awareness of science and technology was new and in flux. “Technology” was barely a word and still not far removed from magic. “But wireless was magical to Gernsback’s readers,” writes Wythoff, “not because they didn’t understand how the trick worked but because they did.” Gernsback asked his readers to cast their minds back “but 100 years” to the time of Napoleon and consider how far the world has “progressed” in that mere century. “Our entire mode of living has changed with the present progress,” he wrote in the first issue of Amazing Stories, “and it is little wonder, therefore, that many fantastic situations—impossible 100 years ago—are brought about today.”

So for Gernsback it was completely natural to publish Science Wonder Stories alongside Electrical Experimenter. He returned again and again to the theme of fact versus fiction—a false dichotomy, as far as he was concerned. Leonardo da Vinci, Jules Verne, and H. G. Wells were inventors and prophets, their fantastic visions giving us our parachutes and submarines and spaceships.•


Time has been collapsing for centuries, with technology shortening seasons to seconds.

This insta-world has been willed into being relatively quickly. Telegraph lines didn’t dot the American landscape until 1861, so it was only 157 years ago that news was carried primarily by horse and ship and lip. The latest word about doings in Europe didn’t reach the U.S. for several days. Now news is, of course, instantaneous, as is fake news. The future has arrived, and it mostly sucks.

• • •

In a scientific sense, this assault on the senses caused by our latest tools is just a mirage, as is our notion of time itself.

From the physicist’s point of view (certain physicists, anyhow), our orderly sense of past, present and future doesn’t actually exist. This neat progression of events is a psychological trick, an illusory quirk that allows our consciousness to frame existence and make it manageable. It fools us into believing we’re in charge of what comes next when it’s already here.

James Gleick, author of Time Travel: A History, writes on the topic in “When They Came From Another World,” a New York Review of Books piece on Ted Chiang’s collection, Stories of Your Life and Others, and the film it inspired, Arrival. An excerpt:

There is a strain of physicist that likes to think of the world as settled, inevitable, its path fully determined by the grinding of the gears of natural law. Einstein and his heirs model the universe as a four-dimensional space-time continuum—the “block universe”—in which past and future are merely different places, like left and right. Even before Einstein, a deterministic view of physics goes all the way back to Newton. His laws operated like clockwork and gave astronomers the power of foresight. If scientists say the moon will totally eclipse the sun in New York on April 8, 2024, beginning at 12:38 PM, you can bank on it. If they can’t tell you whether the sun will be obscured by a rainstorm, a strict Newtonian would say that’s only because they don’t yet have enough data or enough computing power. And if they can’t tell you whether you’ll be alive to see the eclipse, well, maybe they haven’t discovered all the laws yet.

As Richard Feynman put it, “Physicists like to think that all you have to do is say, ‘These are the conditions, now what happens next?’” Meanwhile, other physicists have learned about chaos and quantum uncertainty, but in the determinist’s view chance does not take charge. What we call accidents are only artifacts of incomplete knowledge. And there’s no room for choice. Free will, the determinist will tell you, is only an illusion, if admittedly a persistent one.

Even without help from mathematical models, we have all learned to visualize history as a timeline, with the past stretching to the left, say, and the future to the right (if we have been conditioned Sapir-Whorf-style by a left-to-right written language). Our own lifespans occupy a short space in the middle. Now—the infinitesimal present—is just the point where our puny consciousnesses happen to be.

This troubled Einstein. He recognized that the present is special; it is, after all, where we live. (In Chiang’s story, Louise says to her infant daughter: “NOW is the only moment you’ll perceive; you’ll live in the present tense. In many ways, it’s an enviable state.”) But Einstein felt that this was fundamentally a psychological matter; that the question of now need not, or could not, be addressed within physics. The specialness of the present moment doesn’t show up in the equations; mathematically, all the moments look alike. Now seems to arise in our minds. It’s a product of consciousness, inextricably bound up with sensation and memory. And it’s fleeting, tumbling continually into the past.

Still, if the sense of the present is an illusion, it’s awfully powerful for us humans. I don’t know if it’s possible to live as if the physicists’ model is real, as if we never make choices, as if the very idea of purpose is imaginary. We may be able to visualize the time before our birth and the time after our death as mathematically equivalent; yet we can’t help but fret more about what effects we might have on the future in which we will not exist than about what might have happened in the past when we did not exist. Nor does it seem possible to tell a story or enjoy a narrative that is devoid of intention. Choice and purpose—that’s where the suspense comes from. “What is your purpose on Earth?”•



In his NYRB piece “What Libraries Can (Still) Do,” James Gleick takes a hopeful view of a foundering institution, seeing a 2.0 life for those formerly vaunted knowledge centers, once-liberating forces now chiefly collections of dusty rooms offering short blocks of free computer-terminal time to those lacking a wi-fi connection. I wish I could share his cautious optimism, but the inefficiency inherent in a library search seems a deal breaker in this age.

Gleick references John Palfrey’s manifesto BiblioTech (which he probably didn’t borrow from a library) in urging these erstwhile knowledge storehouses to become stewards and curators rather than trying to be exhaustive collections. That sounds right, but Palfrey’s assertion that “our attention cannot be bought and sold in a library” is a truth that ignores a bigger truth: That’s the very thing our transactional souls seem to want. We’ll sell our attention, our privacy even, if the return is something that flatters or conveniences us.

From Gleick:

Is the library, storehouse and lender of books, as anachronistic as the record store, the telephone booth, and the Playboy centerfold? Perversely, the most popular service at some libraries has become free Internet access. People wait in line for terminals that will let them play solitaire and Minecraft, and librarians provide coffee. Other patrons stay in their cars outside just to use the Wi-Fi. No one can be happy with a situation that reduces the library to a Starbucks wannabe.

Perhaps worst of all: the “bookless library” is now a thing. You can look it up in Wikipedia.

I’m an optimist. I think the pessimists and the worriers—and this includes some librarians—are taking their eyes off the ball. The library has no future as yet another Internet node, but neither will it relax into retirement as an antiquarian warehouse. Until our digital souls depart our bodies for good and float away into the cloud, we retain part citizenship in the physical world, where we still need books, microfilm, diaries and letters, maps and manuscripts, and the experts who know how to find, organize, and share them.

In the midst of an information explosion, librarians are still the most versatile information specialists we have. And the purest. In his new book, BiblioTech, a wise and passionate manifesto, John Palfrey reminds us that the library is the last free space for the gathering and sharing of knowledge: “Our attention cannot be bought and sold in a library.” As a tradition barely a century and a half old in the United States, it gives physical form to the principle that public access to knowledge is the foundation of democracy.•


In a NYRB blog post, James Gleick tries to identify the invaders among us, the social bots that cajole and troll on Twitter. Who are they? Just as importantly: Who are we if we’re not quite sure if we’re communicating with something not human, or if we know we are and yet still choose to interact? An excerpt:

A well-traveled branch of futuristic fiction explores worlds in which artificial creatures—the robots—live among us, sometimes even indistinguishable from us. This has been going on for almost a century now. Stepford wives. Androids dreaming of electric sheep. (Next, Ex Machina?)

Well, here they come. It’s understood now that, beside what we call the “real world,” we inhabit a variety of virtual worlds. Take Twitter. Or the Twitterverse. Twittersphere. You may think it’s a stretch to call this a “world,” but in many ways it has become a toy universe, populated by millions, most of whom resemble humans and may even, in their day jobs, be humans. But increasing numbers of Twitterers don’t even pretend to be human. Or worse, do pretend, when they are actually bots. “Bot” is of course short for robot. And bots are very, very tiny, skeletal, incapable robots—usually little more than a few crude lines of computer code. The scary thing is how easily we can be fooled.

Because the Twitterverse is made of text, rather than rocks and trees and bones and blood, it’s suddenly quite easy to make bots. Now there are millions, by Twitter’s own estimates—most of them short-lived and invisible nuisances. …

Most of these bots have all the complexity of a wind-up toy. Yet they have the potential to influence the stock market and distort political discourse. The surprising thing—disturbing, if your human ego is easily bruised—is how few bells and gears have to be added to make a chatbot sound convincing. How much computational complexity is powering our own chattering mouths? The grandmother of all chatbots is the famous Eliza, described by Joseph Weizenbaum at MIT in a 1966 paper (yes, children, Eliza is fifty years old). His clever stroke was to give his program the conversational method of a psychotherapist: passive, listening, feeding back key words and phrases, egging on her poor subjects. “Tell me more.” “Why do you feel that way?” “What makes you think [X]?” “I am sorry to hear you are depressed.” Oddly, Weizenbaum was a skeptic about “artificial intelligence,” trying to push back against more optimistic colleagues. His point was that Eliza knew nothing, understood nothing. Still the conversations could run on at impressive length. Eliza’s interlocutors felt her empathy radiating forth. It makes you wonder how often real shrinks get away with leaving their brains on autopilot.

Today Eliza has many progeny on Twitter, working away in several languages.
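
Eliza’s method is simple enough to sketch in a few lines. Here is a toy version in Python (the patterns and canned replies are invented for illustration; Weizenbaum’s actual script was far richer):

```python
import random
import re

# Toy Eliza: match a keyword pattern, reflect the fragment back as a question.
# Patterns and replies are invented for illustration only.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]
DEFAULTS = ["Tell me more.", "Why do you feel that way?", "Please go on."]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(replies).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)  # passive, listening, egging the subject on

print(respond("I feel depressed"))  # e.g. "Why do you feel depressed?"
print(respond("It's hopeless"))     # no keyword matches: "Tell me more." etc.
```

That is the whole trick: no knowledge, no understanding, just reflection, with the interlocutor supplying the empathy.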


Some scientific explanations are so beautiful that they just have to be true. Except maybe some of them are not. Confusing physics and poetry can be dangerous.

Time is an illusion, we’ve always been told, but perhaps it isn’t so. From James Gleick’s New York Review of Books piece about Lee Smolin’s just-published book on the topic:

“In an empty universe, would time exist?

No, it would not. Time is the measure of change; if nothing changes, time has no meaning.

Would space exist, in the absence of any matter or energy? Newton would have said yes: space would be empty.

For Smolin, the key to salvaging time turns out to be eliminating space. Whereas time is a fundamental property of nature, space, he believes, is an emergent property. It is like temperature: apparent, measurable, but actually a consequence of something deeper and invisible—in the case of temperature, the microscopic motion of ensembles of molecules. Temperature is an average of their energy. It is always an approximation, and therefore, in a way, an illusion. So it is with space for Smolin: ‘Space, at the quantum-mechanical level, is not fundamental at all but emergent from a deeper order’—an order, as we will see, of connections, relationships. He also believes that quantum mechanics itself, with all its puzzles and paradoxes (“cats that are both alive and dead, an infinitude of simultaneously existing universes”), will turn out to be an approximation of a deeper theory.

For space, the deeper reality is a network of relationships. Things are related to other things; they are connected, and it is the relationships that define space rather than the other way around. This is a venerable notion: Smolin traces the idea of a relational world back to Newton’s great rival, Gottfried Wilhelm Leibniz: ‘Space is nothing else, but That Order or Relation; and is nothing at all without Bodies, but the Possibility of placing them.’ Nothing useful came of that, while Newton’s contrary view—that space exists independently of the objects it contains—made a revolution in the ability of science to predict and control the world. But the relational theory has some enduring appeal; some scientists and philosophers such as Smolin have been trying to revive it.

Nowadays, the Internet—like the telegraph a century before—is commonly said to ‘annihilate’ space. It does this by making neighbors of the most distant nodes in a network that transcends physical dimension. Instead of six degrees of separation, we have billions of degrees of connectedness. As Smolin puts it:

We live in a world in which technology has trumped the limitations inherent in living in a low-dimensional space…. From a cell-phone perspective, we live in 2.5-billion-dimensional space, in which very nearly all our fellow humans are our nearest neighbors.

The Internet, of course, has done the same thing. The space separating us has been dissolved by a network of connections.

So maybe it’s easier now for us to see how things really are. This is what Smolin believes: that time is fundamental but space an illusion; ‘that the real relationships that form the world are a dynamical network’; and that the network itself, along with everything on it, can and must evolve over time.”
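
The relational idea is easy to play with: if relationships define space, then distance is just a count of links, and a single new connection can collapse it. A toy sketch in Python (the graph is invented for illustration):

```python
from collections import deque

# Distance in a relational "space": the fewest links between two nodes.
def hops(graph: dict, start: str, goal: str) -> int:
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == goal:
            return depth
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return -1  # unreachable

# Four nodes in a line, like points in a low-dimensional space.
chain = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(hops(chain, "a", "d"))  # 3

# Add one relation and the distance is "annihilated."
chain["a"].append("d")
chain["d"].append("a")
print(hops(chain, "a", "d"))  # 1
```

One new edge and the far end of the line becomes a nearest neighbor, which is roughly what the cell phone did to the planet.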


Kim Kardashian (@KimsThoughts_): “Do ants have dicks?”

• • •

From James Gleick’s New York Review of Books article about the Library of Congress collecting the whole of Twitter, no matter how stupid the tweets, a historical antecedent for such a massive information-collecting undertaking:

“For a brief time in the 1850s the telegraph companies of England and the United States thought that they could (and should) preserve every message that passed through their wires. Millions of telegrams—in fireproof safes. Imagine the possibilities for history!

‘Fancy some future Macaulay rummaging among such a store, and painting therefrom the salient features of the social and commercial life of England in the nineteenth century,’ wrote Andrew Wynter in 1854. (Wynter was what we would now call a popular-science writer; in his day job he practiced medicine, specializing in ‘lunatics.’) ‘What might not be gathered some day in the twenty-first century from a record of the correspondence of an entire people?’

Remind you of anything?

Here in the twenty-first century, the Library of Congress is now stockpiling the entire Twitterverse, or Tweetosphere, or whatever we’ll end up calling it—anyway, the corpus of all public tweets. There are a lot.”


“Bada-bing.”

From “Cyber-Neologoliferation,” James Gleick’s fun 2006 New York Times Magazine article about his visit to the offices of the Oxford English Dictionary, an explanation of how the word “bada-bing” came to be listed in the OED:

“Still, a new word as of September is bada-bing: American slang ‘suggesting something happening suddenly, emphatically, or easily and predictably.’ The Sopranos gets no credit. The historical citations begin with a 1965 audio recording of a comedy routine by Pat Cooper and continue with newspaper clippings, a television news transcript and a line of dialogue from the first Godfather movie: ‘You’ve gotta get up close like this and bada-bing! you blow their brains all over your nice Ivy League suit.’ The lexicographers also provide an etymology, a characteristically exquisite piece of guesswork: ‘Origin uncertain. Perh. imitative of the sound of a drum roll and cymbal clash…. Perh. cf. Italian bada bene mark well.’ But is bada-bing really an official part of the English language? What makes it a word? I can’t help wondering, when it comes down to it, isn’t bada-bing (also badda-bing, badda badda bing, badabing, badaboom) just a noise? ‘I dare say the thought occurs to editors from time to time,’ Simpson says. ‘But from a lexicographical point of view, we’re interested in the conventionalized representation of strings that carry meaning. Why, for example, do we say Wow! rather than some other string of letters? Or Zap! Researching these takes us into interesting areas of comic-magazine and radio-TV-film history and other related historical fields. And it often turns out that they became institutionalized far earlier than people nowadays may think.'”


“Kofee.” (Image by Ricardo Stuckert/ABr.)

This life is a fluid thing, as precise meaning is chased by algorithms, with no print books in sight. From a new NYT piece about the art of the auto-correct by information heavyweight James Gleick:

“When Autocorrect can reach out from the local device or computer to the cloud, the algorithms get much, much smarter. I consulted Mark Paskin, a longtime software engineer on Google’s search team. Where a mobile phone can check typing against a modest dictionary of words and corrections, Google uses no dictionary at all. ‘A dictionary can be more of a liability than you might expect,’ Mr. Paskin says. ‘Dictionaries have a lot of trouble keeping up with the real world, right?’ Instead Google has access to a decent subset of all the words people type — ‘a constantly evolving list of words and phrases,’ he says; ‘the parlance of our times.’

If you type ‘kofee’ into a search box, Google would like to save a few milliseconds by guessing whether you’ve misspelled the caffeinated beverage or the former United Nations secretary-general. It uses a probabilistic algorithm with roots in work done at AT&T Bell Laboratories in the early 1990s. The probabilities are based on a ‘noisy channel’ model, a fundamental concept of information theory. The model envisions a message source — an idealized user with clear intentions — passing through a noisy channel that introduces typos by omitting letters, reversing letters or inserting letters.

‘We’re trying to find the most likely intended word, given the word that we see,’ Mr. Paskin says. ‘Coffee’ is a fairly common word, so with the vast corpus of text the algorithm can assign it a far higher probability than ‘Kofi.’ On the other hand, the data show that spelling ‘coffee’ with a K is a relatively low-probability error. The algorithm combines these probabilities. It also learns from experience and gathers further clues from the context.”
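
The combination Paskin describes is ordinary Bayesian arithmetic: pick the intended word that maximizes the prior probability of the word times the probability of the observed typo given that word. A toy scorer in Python (the corpus counts and error probabilities are made up for illustration; Google learns both from web-scale data):

```python
# Toy noisy-channel speller: choose the intended word w maximizing
# P(w) * P(typed | w). All numbers below are invented for illustration.
CORPUS_FREQ = {"coffee": 900_000, "kofi": 4_000}  # language model P(w), unnormalized

ERROR_PROB = {  # channel model P(typed | w), assumed values
    ("kofee", "coffee"): 0.002,  # k-for-c substitution plus a dropped f: rare
    ("kofee", "kofi"): 0.010,    # a single inserted e: also rare
}

def best_guess(typed: str) -> str:
    scores = {w: freq * ERROR_PROB.get((typed, w), 0.0)
              for w, freq in CORPUS_FREQ.items()}
    return max(scores, key=scores.get)

print(best_guess("kofee"))  # "coffee": 900,000 * 0.002 beats 4,000 * 0.010
```

Even though typing “coffee” with a K is a low-probability error, the sheer commonness of the word outweighs the channel, just as the excerpt says.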


"Ideas cause ideas and help evolve new ideas." (Image by Hannes Grobe.)

From “What Defines a Meme?”, James Gleick’s great 2011 Smithsonian article, a section about French scientist Jacques Monod’s prescient, pre-PC ideas about the organic spread of information:

“Jacques Monod, the Parisian biologist who shared a Nobel Prize in 1965 for working out the role of messenger RNA in the transfer of genetic information, proposed an analogy: just as the biosphere stands above the world of nonliving matter, so an ‘abstract kingdom’ rises above the biosphere. The denizens of this kingdom? Ideas.

‘Ideas have retained some of the properties of organisms,’ he wrote. ‘Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.’

Ideas have ‘spreading power,’ he noted—’infectivity, as it were’—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are ‘just as real’ as the neurons they inhabit. Ideas have power, he said:

Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet.” (Thanks TETW.)
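
Monod’s “infectivity” can be taken literally and simulated. A toy contagion run in Python (every parameter is invented for illustration):

```python
import random

# Toy spread of an "infectious" idea: each round, every carrier has some
# chance of passing the idea to a randomly chosen member of the population.
def spread(population=1000, seeds=3, infectivity=0.4, rounds=12):
    random.seed(1)  # fixed seed so the run is repeatable
    carriers = set(range(seeds))
    history = []
    for _ in range(rounds):
        for carrier in list(carriers):
            if random.random() < infectivity:
                carriers.add(random.randrange(population))
        history.append(len(carriers))
    return history

print(spread())  # carrier counts per round: roughly exponential, then saturating
```

Crank the infectivity up and the idea takes over the population; turn it down and the idea dies out, which is Monod’s selection acting on the abstract kingdom.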


James Gleick, author of The Information: A History, a Theory, a Flood, explaining how the shift from oral communication to the written word affected humanity.


I don’t know why, but while I’d rather read newspapers and magazines online, I still prefer old-fashioned, non-virtual books. Maybe it has to do with the length of time we spend with an article as opposed to a longer work. I think it’s something I’ll get over soon, though I probably won’t have a choice. But I’m all in favor of the digitization of printed materials of value (and even of dubious value) and the democratization of scholarship that it allows. James Gleick speaks to this issue in a new Op-Ed piece in the New York Times. An excerpt:

“Where some see enrichment, others see impoverishment. Tristram Hunt, an English historian and member of Parliament, complained in The Observer this month that ‘techno-enthusiasm’ threatens to cheapen scholarship. ‘When everything is downloadable, the mystery of history can be lost,’ he wrote. ‘It is only with MS in hand that the real meaning of the text becomes apparent: its rhythms and cadences, the relationship of image to word, the passion of the argument or cold logic of the case.’

I’m not buying this. I think it’s sentimentalism, and even fetishization. It’s related to the fancy that what one loves about books is the grain of paper and the scent of glue.

Some of the qualms about digital research reflect a feeling that anything obtained too easily loses its value. What we work for, we better appreciate. If an amateur can be beamed to the top of Mount Everest, will the view be as magnificent as for someone who has accomplished the climb? Maybe not, because magnificence is subjective. But it’s the same view.”



In a 1993 Wired feature, “Seven Wired Wonders,” science writer James Gleick was right on target in identifying the telephone as the tool of the near future. An excerpt:

“After a century of fading into our bedside tables and kitchen walls, the telephone — both the instrument and its network — is on the march again. As a device shrinking to pocket size, the telephone is subsuming the rest of our technological baggage — the fax machine, the pager, the clock, the compass, the stock ticker, and the television. A sign of the telephone’s power: It is pressing the computer into service as its accessory, not the other way round.

We know now that the telephone is not just a device. It is a network — it is the network, copper or fiber or wireless — sprouting terminals that may just as well be workstations as headsets or Princesses. As the network spreads, it is fostering both the universality and the individuality of human discourse. The Net itself, the world’s fastest-spreading communications medium, is the telephone network in its most liberating, unruly, and fertile new guise.

Thus Bell’s child is freeing our understanding of the possibilities that lie in ancient words: neighborhood and meeting and information and news. It is global; it is democratic; it is the central agent of change in our sense of community. It is how, and why, we are wired.”
