Science/Tech



Even a non-Trekkie, non-TV-watching person like myself has fully absorbed the program’s ideas, so thoroughly have they permeated the culture. Beyond the sheer entertainment of the Enterprise lies, of course, a colorblind society that in Gene Roddenberry’s day could seem realistic only in space. Imperfect though we still are, we’ve moved closer to realizing this world ever since the original Star Trek iteration debuted in 1966.

Another less talked about aspect of the sci-fi show is that it exists in a post-scarcity world. There are still challenges and obstacles, but basic needs are universally met. Manu Saadia, author of the soon-to-be-published Trekonomics, argues in a Money article that for all the very real concerns about wealth inequality, we may be closer than we think to achieving such a system.

An excerpt:

In Star Trek’s hypothetical society — the Federation — poverty, greed and want no longer exist. Most goods are made for free by robots known as replicators. The obligation to work has been abolished. Work has become an exploration of one’s abilities. The people of Star Trek have solved what British economist John Maynard Keynes pithily called “the economic problem,” that is, the necessity for individuals and societies to allocate scarce goods and resources. They live secure in the knowledge that all needs will be fulfilled and free from the tyranny of base economic pursuits.

The replicator is the keystone of Star Trek’s cornucopia. It’s a Santa Claus machine that can produce anything upon request: foods, beverages, knick-knacks, and tools. Like Captain Picard of Star Trek: The Next Generation, you merely have to ask for “tea, Earl Grey, hot,” and the machine will make your beverage appear out of thin air with a satisfying, tingling visual effect.

The replicator is the perfect, and therefore last, machine. You cannot improve upon it. You ask and it makes. This signals that Star Trek speaks to us from the other side of the industrial revolution. The historical process by which machines enhance and replace human labor has reached its conclusion. …

The replicator is a public good, available to all for free. In the show’s universe, the decision was made to distribute the fruits of progress among all members of society. Abundance is a political choice as much as the end result of technological innovation. And to underscore that point, Star Trek goes so far as to feature alien societies where replicators’ services aren’t free.

To a 21st century audience, beset by growing inequality and a sense of dread in the face of coming automation, such a world seems entirely out of reach. We will probably never go where no one has gone before, nor will we ever meet alien Vulcans.

But some of Star Trek’s blissful vision of society has already come to pass.•


New England tinkerer Arthur Blanchard didn’t patent a machine in 1916 to remove the guesswork from the pre-Talkie screenwriting process but merely to relieve humans of the guessing. The so-called thinking machine was a handheld device that used a slot-machine mechanism to cough up plots. It was marketed as “The Movie Writer,” though it was said to be helpful in the creation of poems and novels as well. In 1921, the Brooklyn Daily Eagle ran an article celebrating this simple technology.



Richard Brautigan, 1968.

A miscast spokesperson for drugged-out hippies, the writer Richard Brautigan was enamored of neither narcotics nor the wide-eyed, bell-bottomed set. He wrote two things I love: the 1967 novel Trout Fishing in America and the 1968 poem “All Watched Over by Machines of Loving Grace.”

What follows is an excerpt from Lawrence Wright’s 1985 Rolling Stone article about Brautigan’s death and a German TV interview conducted a year before his passing. In the latter, he says this: “I think perception is one of the incredible qualities of human beings, and any way that we can expand or define or redefine or adventure into the future of perception, we should use whatever means to do so.”


From Wright:

His passions were basketball, the Civil War, Frank Lloyd Wright, Southern women writers, soap operas, the National Enquirer, chicken-fried steak and talking on the telephone. Wherever he was in the world, he would phone up his friends and talk for hours, sometimes reading them an entire book manuscript on a transpacific call. Time meant nothing to him, for he was a hopeless insomniac. Most of his friends dreaded it when Richard started reading his latest work to them, because he could not abide criticism of any sort. He had a dead ear for music. [His daughter] Ianthe remembered that he used to buy record albums because of the girls on the covers. He loved to take walks, but he loathed exercise in any other form.

The fact that Richard couldn’t drive allowed him to build up an entourage of chauffeurs wherever he went. For many of them, it was an honor, and they didn’t mind that it was calculated dependency on Richard’s part.

Richard had wild notions about money. Although he was absurdly parsimonious, sometimes demanding a receipt for a purchase of bubblegum, he was also a heavy tipper, handing out fifty-dollar tips for five-dollar cab fares. He liked to give the impression that money was meaningless to him. The floor of his apartment was littered with spare change, like the bottom of a wishing well, and he always kept his bills wadded up in his pants pockets, but he knew to the dime how much money he was carrying. He was famously openhanded, but when he had to borrow money from his friends, he was slow paying it back. He often tried to pay them in “trout money,” little scraps of paper on which he had scrawled an image of a fish. He had the idea that they would be wildly valuable, because they had been signed by Richard Brautigan. At least, that’s what he told his creditors.

Christmas was a special problem for him. His friends were horrified that Richard liked to spend his Christmases in porno theaters. They decided it must have something to do with his childhood. Richard was mum on the subject. Ron Loewinsohn remembered when Richard came to read at Harvard. Yes, Richard was famous, a spokesman for his generation, but he was also a kind of bumpkin, half-educated, untraveled, a true provincial. He had never been East. He wanted to be taken seriously, of course, but he was suspicious and a little afraid of academicians — including Ron, who was in graduate school at Harvard when Richard arrived. Life magazine came along, and there was even a parade down Massachusetts Avenue, with a giant papier-mâché trout in the lead. After the reading, Ron and Richard went to Walden Pond, and as they walked along the littered banks of Thoreau’s wilderness, the photographer walked backward in front of them, snapping away. It was strange to be linked in this media ceremony to the two American writers who had most influenced Richard — Thoreau, who was like Richard at least in his solitariness and his love of nature, and Hemingway, who had also received the star treatment from Life.

In 1970, when Richard was still tremendously popular, he confided to Margot Patterson Doss, the San Francisco Chronicle columnist, that he had never had a birthday party. She let him plan one for himself at her house. He decorated the house with fish drawings — “shoals of them,” Margot said — and when she asked whom he wanted to cater the affair, he picked Kentucky Fried Chicken. Everyone came — Gary Snyder, Robert Duncan, Ginsberg, Gregory Corso, Phil Whalen, many of the finest poets of the era — all honoring Richard. When it came time to blow out the candles on the cake, Richard refused. “This is the Age of Aquarius,” he said. “The candles will blow themselves out.” He was thirty-five.


The German TV interview.


For me Twitter is a place to put links to articles I’ve read that I love, to try to encourage others to also read them, so I pretty much ignore my feed. However, every now and then I’ll have a look and am sometimes happily surprised. An example:

I don’t know if I agree with Kirn on this argument–I’m pretty sure both requirements will need to be met before superintelligence will emerge–but it was great spending time thinking about the big idea he embedded in a few characters.

On the topic of recommending writing I love, Kirn has penned “The Improbability Party,” a beauty of a Harper’s essay about our algorithmic age running up against a tumultuous political season. Our new tools were supposed to measure the world better but haven’t, at least not so far during this mishegoss of an election. The writer sees gambling parlors and newsrooms as analogous, visiting a gaming professional in Las Vegas who explains that he “sells suspense.” You know, like CNN or pretty much anyone in the election-year horserace business.

Two comments:

1) One caveat to Kirn’s theory is that primary seasons have low turnout and fewer polls, making prognostication far tougher. If Nate Silver and company get the general wrong, that would be surer proof of the article’s assertions. Of course, if that happens, we’re talking about a Trump Administration, and many may be too busy speeding north on the 281 to Canada to curse FiveThirtyEight.

2) Kirn also sees the fringe-gone-mainstream campaigns of Trump and Sanders as two very separate wings of a new party of sorts, one that can’t be pinned down, which gives him the title of his piece. You could also wonder if moderate Republicans and Dems will synthesize into a nouveau voting bloc if the two traditional groups swing even further to the Right and Left in 2020.

The opening:

I like to think I’m unique. Don’t you? Complicated. Surprising. Unpredictable. I like to think that people who’ve only just met me or who know only the basic facts about me still have a lot to learn. I also like to think that people who’ve known me for a long time — family members, say — still don’t know the essential, the true me. What I really like to think, however, is that corporate statisticians, who track my consumer choices and feed them into algorithms to forecast my behavior for Google or Amazon, are capable, on occasion, of getting me wrong. No, just because I bought Tender Is the Night doesn’t mean I’ll like A Moveable Feast; the Fitzgerald book is fiction, see, and the Hemingway book that mentions Fitzgerald and happens also to be set in Paris is memoir. And I’m not a fan of memoir.

The slightly demeaning guesses are inescapable. I recently spoke to a young woman who accidentally left a streaming-video service running on her television while she went out. Show after show played with no one in the room. When the woman returned, the service’s suggestion engine had cued up a slate of grim reality programs about unwed teenage mothers. Such shows aren’t to her taste, she told me, and so the recommendations confused her. Then she solved the puzzle. If you averaged the subjects of all the programs she had supposedly watched, the result would be unwed teenage mothers.

This is similar to what Nate Silver does with polls, if I understand the super-pollster’s methods. On his vaunted website, FiveThirtyEight, Silver lumps together lots of polls and takes a weighted average to yield a single result. He then makes subtle adjustments to this result using a proprietary formula that I’ve seen referred to as his “secret sauce.” (Being more a language guy than a numbers guy — on the team that’s rarely favored to win — I can’t let it pass unremarked that “secret sauce” feels like a high-end rebranding of “bullshit.”) The word on the street is that if you bet with Silver, especially if you’re betting on elections, which were once thought to be incredibly complex events but turn out to be more like ball games without the balls, you can’t go wrong.

Except you can!•
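
A concrete illustration of the aggregation step Kirn describes, lumping together lots of polls and taking a weighted average, is sketched below in Python. The polls and weights are invented for illustration only; FiveThirtyEight’s actual model, including its “secret sauce” adjustments, is proprietary and far more elaborate.

```python
# Minimal sketch of poll aggregation by weighted average.
# The numbers below are invented for illustration; real aggregators
# weight by sample size, recency, and pollster track record, then
# apply further (proprietary) adjustments.

def weighted_average(polls):
    """Each poll is a (candidate_share_pct, weight) pair."""
    total_weight = sum(weight for _, weight in polls)
    return sum(share * weight for share, weight in polls) / total_weight

# Hypothetical polls for one candidate: (share of the vote, weight)
polls = [
    (48.0, 1.0),   # small online poll, modest weight
    (51.0, 2.5),   # large live-interview poll, higher weight
    (47.5, 1.5),   # older poll, discounted for age
]

print(f"Weighted polling average: {weighted_average(polls):.1f}%")
```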


Trending Topics at Afflictor: Monkeys, Astronauts, Monkey Astronauts. Oh, and President Trump’s tiny monkey hands. Take it or leave it.

Facebook got into trouble last week with the GOP because a report claimed the human editors choosing the social network’s Trending Topics were exercising bias against conservatives. It was widely thought these keywords rose and fell purely based on popularity, in an automated way.

What’s most amusing about this is that Republicans have been greatly aided in recent congressional races because of gerrymandering, in which humans draw up districts in a willfully biased way. Just suggest to them that prejudice-less algorithms decide the nation’s districts based purely on population statistics. The connection will go silent.

There’s no doubt that Facebook becoming our chief news source would be problematic, because the social network isn’t mainly in the journalism business and news will never be its main priority. But I wonder if what it delivers is really any more biased than what we get from traditional outlets.

In a Wall Street Journal column, Christopher Mims looks deeper at the issue and questions if narrowcasting in Facebook feeds is actually a problem. An excerpt:

Claiming that Facebook is contributing to our age of hyper-partisanship by only showing us things that fit our own personal slant is, ironically, an example of confirmation bias, because the evidence around it is mixed.

After an exhaustive search of the literature around filter bubbles, five co-authors and Frederik J. Zuiderveen Borgesius, a researcher at the Personalised Communication project at the University of Amsterdam, concluded concerns might be overblown. “In spite of the serious concerns voiced, at present there is no empirical evidence that warrants any strong worries about filter bubbles,” Mr. Zuiderveen Borgesius wrote in an email.

The authors examined not only Facebook but other online services, including Google search. Mr. Zuiderveen Borgesius’s conclusion: We don’t have enough data to say whether Facebook is biasing the news its readers see, or—and this is even more important—whether it affects their views and behavior.

Facebook’s opacity aside, where does the hand-wringing come from? Two places, I think: the first is that everyone in the media is terrified of Facebook’s power to determine whether individual stories and even entire news organizations succeed or fail. The second is an ancient fear that, by associating only with people like ourselves, and being selective in what we read, we are biasing ourselves unduly.

Before the filter bubble, there was the so-called echo chamber.•


Carl Tanzler (Von Cosel), c. 1940. From the Stetson Kennedy Collection.

Preserved body of Maria Elena Milagro de Hoyos.

From the August 14, 1952 Brooklyn Daily Eagle:



God knows we could use a few more Einsteins in the world, but should we be proactive about it?

When Freeman Dyson prophesied, in reference to genetic engineering, that the games could get messy and that life forms might be born in garage laboratories in addition to maternity wards, he was sometimes dismissed as a physicist crossing disciplines with a sci-fi vision. But he was bound, sooner or later, to be correct.

Whether now is that moment is not yet determined, but we should begin to ask ourselves the important ethical questions that attend such progress. In one country or another, these games will soon begin in earnest, which will likely trigger a new space race with everyone shooting for the moon.

From Andrew Pollack of the New York Times:

Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes.

The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents.

While the project is still in the idea phase, and also involves efforts to improve DNA synthesis in general, it was discussed at a closed-door meeting on Tuesday at Harvard Medical School in Boston. The nearly 150 attendees were told not to contact the news media or to post on Twitter during the meeting.

Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals.

But such an attempt would raise numerous ethical issues. Could scientists create humans with certain kinds of traits, perhaps people born and bred to be soldiers? Or might it be possible to make copies of specific people?

“Would it be O.K., for example, to sequence and then synthesize Einstein’s genome?” Drew Endy, a bioengineer at Stanford, and Laurie Zoloth, a bioethicist at Northwestern University, wrote in an essay criticizing the proposed project. “If so how many Einstein genomes should be made and installed in cells, and who would get to make them?”•


Walt Disney and Dr. Wernher von Braun, 1954.


A few years back, I blogged about how the dream factory in California had migrated North, shifting from Hollywood to Silicon Valley. While Jules Verne’s visions required more than a century to be born, The Truman Show needed only a decade, and Her essentially came into the world already talking. 

Science fiction is no longer the starting gun for the future but is regularly lapped by it. It’s not only because Moore’s Law has speeded up research while culture still slowly gestates but because of the great wealth concentrated in the technology sector and the game-changing ambitions of those who possess it. They’re largely more concerned with legacy than bank ledgers, and ego is a powerful tool if harnessed correctly. Whether this new normal leads us to a better tomorrow is TBD.

In “The Future Is Almost Now,” Elizabeth Alsop writes wisely of this altered reality and more. An excerpt:

Many new works of science fiction seem to represent a strain of pre-apocalyptic cinema, characterized by a willingness to dramatize disasters that are less hypothetical than poised to happen. Both Ex Machina and Her, for instance, unfold against backdrops whose production design suggests that viewers are witnessing only a lightly futurized version of 21st-century life. However technically fictional the gadgets on display, the advances the films imagine—an artificially intelligent OS, a Turing-test approved robot—strike audiences as not just possible, but highly probable. As Ex Machina’s partly mad scientist declares, “[t]he arrival of strong AI has been inevitable for decades. The variable was when, not if.” Spike Jonze’s Her similarly takes its paradigm shift—humans falling in love with machines—for granted. Unlike The Terminator and Matrix franchises, these films don’t predict an apocalyptic “rise” of machines so much as a gradual digital takeover, the next phase of a revolution already in progress.

As such, the worlds of newer sci-fi films can look and feel eerily familiar. The opening shots of Interstellar, which feature hardscrabble towns and actual Depression-era footage, initially lead viewers to suspect they’re witnessing, if anything, the recent past. As the critic A.O. Scott noted in The New York Times, “[the director Christopher] Nolan … drops us quietly into what looks like a fairly ordinary reality.” Or as NPR’s Amanda Fiegl put it, “it’s science fiction with an uncomfortable ring of truth.” It’s possible that such realistic settings—also seen in Ex Machina and Her—are meant to serve moralizing ends, reminding audiences that dystopia is nigh.•


If the only options many workers have are to either accept a less-than-living wage or be replaced by machines, we’re in trouble. Fast-food places used to offer starter jobs, but there’s no place for many burger-and-fries folks to advance to, so these are careers now, for lack of a better word. A $15 minimum wage is necessary yet may drive corporations to automate food ordering and even preparation. At the very least, some of the jobs will vanish.

From Jed Graham at Investor’s Business Daily:

Wendy’s said that self-service ordering kiosks will be made available across its 6,000-plus restaurants in the second half of the year as minimum wage hikes and a tight labor market push up wages.

It will be up to franchisees whether to deploy the labor-saving technology, but Wendy’s President Todd Penegor did note that some franchise locations have been raising prices to offset wage hikes.

McDonald’s has been testing self-service kiosks. But Wendy’s, which has been vocal about embracing labor-saving technology, is launching the biggest potential expansion. …

In addition to self-order kiosks, the company is also getting ready to move beyond the testing phase with labor-saving mobile ordering and mobile payment available systemwide by the end of the year. Yum Brands and McDonald’s already have mobile ordering apps.•


Thanks to Nicholas Carr’s great blog, Rough Type, for pointing me to Justin O’Beirne’s 2015 essay “The Universal Map,” which I missed last year. It addresses the quietly seismic changes occurring in cartography. The romance of the profession, formerly an often solitary and painstaking thing, has been replaced by (almost) real-time, computerized efficiency, or so it would seem.

So much of the next wave of AI and automation will demand insta-maps communicated from gadget to gadget and constantly updated (think driverless cars). That means maps will become increasingly universal as smartphones continue to spread. Mistakes will most likely still sneak through, and I suppose they’ll become universally accepted as well. Even if those flaws are corrected relatively quickly, they might cause problems for a brief spell. That at least is the best-case scenario.

As Carr notes, the map itself is being disappeared as tiny bits of information and directions obliterate the larger picture. O’Beirne himself is less sanguine on the topic a year later, writing that the work of Google Maps has surprisingly deteriorated. Is it more troubling to have one mediocre map for all of us than plenty of different ones of varying quality?

O’Beirne’s 2015 opening:

Just thirty years ago — and for most of human history — a cartographer would make a map, print and distribute it, and hope that maybe a few thousand or so people would ever use it before it went out of date. Apart from a handful of atlases and classroom maps, most maps had small, local audiences, went out of date quickly, and were often difficult to read and understand — let alone share.

Fast forward to today, and cartography has since undergone a number of profound changes:

  • An unprecedented level of detail is now available to the average person, for little or no cost. The same map literally shows every human settlement in the world at every scale, from the world’s largest cities to its tiniest neighborhoods and hamlets. Every country. Every city. Every road. All mapped in exquisite detail. Moreover, maps increasingly show every business open today — an interactive, visual yellow pages for the whole world. And add to that imagery, street view, and live transit and traffic. No one has ever had access to this much detail, for so cheaply, until now.
     
  • Maps are now always up to date. Errors are corrected in hours and minutes, instead of months and years — and new roads and businesses are added instantly. Unlike the paper maps of thirty years ago, today’s maps never expire.
     
  • Maps fit us, regardless of who or where we are. Foreign lands are presented in our own language, and we can easily and endlessly adjust scales, orientations, dimensions, and even time. We have day mode, night mode, and even basic personalization. And every corner of the globe is presented in the same style, and every map feature is made to be so intuitive, that there’s never a need for a map key. (Google and Apple Maps don’t even have one.) Thirty years ago, we adjusted ourselves to maps; now, maps adjust to us.
     
  • Maps are integrated with robust search & routing. No more looking up the coordinates of an obscure town or street in a map index. No more sitting down and painstakingly planning routes before you leave. Find any place in the world in milliseconds. Calculate any route — be it by walking, driving, or even flying — with unprecedented ease.
     
  • Advanced sensors keep us apprised of our current location, 24 hours a day. Now, we’re never lost.

These are all profound technical changes, 10x improvements that are hugely impactful in their own right. But there’s an even deeper, more profound cultural change seemingly on the horizon:

FOR THE FIRST TIME IN HUMAN HISTORY, THE MAJORITY OF THE WORLD MIGHT SOON BE USING THE SAME MAP.

Think of how deeply profound this is.•


There’s an agency that enables you to hire your very own image-enhancing squad, so it’s no shock other companies are paid to pair off product pushers with social-media “stars,” celebrities shrunk down to smartphone size. Those who’ve mastered Vine or Instagram or YouTube, usually teens, are paid hundreds–occasionally hundreds of thousands–to post pictures of brands or do stunts involving them. If corporations feared wasting money on yesterday’s unquantifiable print ads, they’re really no more sure, even with all the new statistics, they aren’t burning currency on Gladwellian “influencers.” It doesn’t seem like sound business, even if it speaks to the further Warholization of fame.

From Shareen Pathak’s Digiday interview with an anonymous social-media executive about the new industry:

Question:

How do you find them?

Answer:

Social team is a bunch of millennials, so we’ll often find someone we like and we’ll throw it into a database with keywords. But usually it’s a CEO or CMO or whoever saying, “Oh, my kid likes this guy.” At this major car brand I worked for, we paid $300,000 for a few photographs because the CEO’s kid liked someone.

Question:

What about the influencer agencies?

Answer:

They’re huge now. Like the big media networks that say they work with 2,200 followers. They’re helpful. The big problem is, they don’t operate much like a traditional talent management company. They don’t provide insurance in case their talent doesn’t deliver or anything. Agencies can’t really hire them through them. They sort of just expect the brands to approach them. They don’t pitch them or anything. It’s silly.

Question:

Tell me about the process.

Answer:

We’ll do a meet and greet. Tell them what we’re thinking and ask them for concepts. You can tell right away who is serious: The good ones come back within a day with ideas. Some send us decks or presentations that are pretty but not tailored to the brand. They’re all nuts. “I want to take a car and pick it up in London and drive it around Europe, so give me $100,000,” they say. Nope, let’s totally never do it that ever. These people don’t understand budgets.•


When my brother died last year, he remained in the ether. A devoted Facebook user, he left his likes and friends gathered conveniently in one space, and the updates and comments keep coming even though he’s no longer here to read, or respond to, them. This Digital Age seance bothers me. It seems an odd and unsatisfactory afterlife of sorts.

That’s my own problem, however. He would have loved receiving messages on birthdays and holidays, even after he was gone. The question is this: If I knew my own end was approaching and I was the last relative alive who knew him, would I delete his account or let it “live on,” whatever that may come to mean? I’m pretty sure I would choose the latter.

In a BBC Future piece, Brandon Ambrosino writes about this social-media phenomenon, in which Facebook serves as not only a virtual city-state but also a necropolis, impacting the way we mourn, remember and forget. Someday, if the company lasts long enough–and not even that long, really–the dead will outnumber the living. An excerpt about the author’s late aunt:

Observing that phenomenon is a strange thing. There she is, the person you love – you’re talking to her, squeezing her hand, thanking her for being there for you, watching the green zigzag move slower and slower – and then she’s not there anymore.

Another machine, meanwhile, was keeping her alive: some distant computer server that holds her thoughts, memories and relationships.

While it’s obvious that people don’t outlive their bodies on digital technology, they do endure in one sense. People’s experience of you as a seemingly living person can and does continue online.

How is our continuing presence in digital space changing the way we die? And what does it mean for those who would mourn us after we are gone?•


If driverless cars were to emerge only after all infrastructure had been uniformly upgraded and every possible hazard anticipated, it might be a long wait. A workaround for these problems is autonomous vehicles being connected to a network–and each other–and constantly being “educated.” In Steve Ranger’s ZDNet interview with Jim McBride of Ford’s driverless division, the latter addresses this issue, promising that the shift from driver to driverless will be “not terribly dissimilar from [the shift from] horses and carriages going to cars.” An excerpt:

Question:

What are the big technical challenges you are facing?

Jim McBride:

When you do a program like this, which is specifically aimed at what people like to call ‘level four’ or fully autonomous, there are a large number of scenarios that you have to be able to test for. Part of the challenge is to understand what we don’t know. Think through your entire lifetime of driving experiences and I’m sure there are a few bizarre things that have happened. They don’t happen very frequently but they do.

Question:

How do you build that kind of intelligence in?

Jim McBride:

It’s a difficult question because you can’t sit down and write a list of everything you might imagine, because you are going to forget something. You need to make the vehicle generically robust to all sorts of scenarios, but the scenarios that you do anticipate happening a lot, for example people violating red lights at traffic intersections, we can, under controlled conditions, test those very repeatedly. We have a facility near us called Mcity, and it’s basically a mock-urban environment where we control the infrastructure. While you and I may only see someone run a red light a few times a year, we can go out there and do it dozens of times just in the morning.

So for that category of things we can do the testing in a controlled environment, pre-planned. We can also do simulation work on data and, aside from that, it’s basically getting out on the roads and aggregating a lot of experiences.•


Huawei executive Kevin Ho acknowledges science fiction has influenced his belief that the absence of disease, poverty and death could occur within 20 years. I’ll (sadly) bet the over on that one–way over.

In time, bioengineering will help ease disease and 3D printers will do the same for the want of material goods, though I think immortality through uploading isn’t arriving anytime soon. If it does become a reality at some point, such a system still won’t duplicate humans: Trading skin for a new casing alters identity in obvious and subtle ways.

From Bloomberg:

Chinese technology giant Huawei is preparing for a world where people live forever, dead relatives linger on in computers and robots try to kill humans.

Huawei is best known as one of the world’s largest producers of broadband network equipment and smartphones. But Kevin Ho, president of its handset product line, told the CES Asia conference in Shanghai on Wednesday the company used science fiction movies like The Matrix to envision future trends and new business ideas.

“Hunger, poverty, disease or even death may not be a problem by 2035, or 25 years from now,” he said. “In the future you may be able to purchase computing capacity to serve as a surrogate, to pass the baton from the physical world to the digital world.”

He described a future where children could use apps like WeChat to interact with dead grandparents, thanks to the ability to download human consciousness into computers. All of these technologies would require huge amounts of data storage, which in turn could generate business for Huawei, he added.


1971 Soviet stamp of Alexander A. Bogomolets, Hero of Socialist Labour (after Anatoly Yar-Kravchenko).

From the June 16, 1946 Brooklyn Daily Eagle:


Speaking of the End of Days, utter societal collapse in the United States doesn’t seem likely to me, even if we’re apparently dumber and more racist than feared. Many of my fellow Americans disagree, however, thinking things will soon fall apart. In advance of the November elections, the panic-room business is booming, as some among us are counting their gold coins and covering their asses.

The opening of “Prepping for Doomsday,” Clare Trapasso’s Realtor.com article:

The apocalypse has become big business. And it’s getting bigger every day.

In the ’50s, homeowners fearing Communist attacks built bunkers in their backyards and basements, hung up a few “God Bless Our Bomb Shelter” signs and called it a Cold War.

But today, Americans en masse are again preparing for the worst—and Communists are just about the only thing not on their list. What is? Terrorist attacks, a total economic collapse, perhaps even zombie invasions. Or maybe just a complete societal breakdown after this November’s scorched-Earth presidential election.

But this is not your Uncle Travis’ guns-and-canned-foods-militia vision of Armageddon preparedness. While the fears of survivalists and so-called preppers are modernizing, so too are their ideas and methods of refuge.

The business of disaster readiness is getting higher tech, higher priced, and way more geographically diverse, with state-of-the-art underground shelters tricked out with greenhouses, gyms, and decontamination units in the boondocks and the latest in plush panic rooms in city penthouses.

Welcome to the brave (and for some, highly profitable) new world of paranoia. 

“There’s a lot of uneasiness in society. You see it in politics. You see it in the economy. The world is changing really, really quickly and not always for the better,” says Richard Duarte, author of “Surviving Doomsday: A Guide for Surviving an Urban Disaster.”

Prepping “gives them a certain comfort that at least they’ve got some sort of preparations to … take care of their family if things start falling apart all around them,” he says.•


Donald Trump announced his candidacy for President in June 2015 to a crowd of paid actors, which was sort of quaint in this Digital Age, as if he were Frank Sinatra in the ’40s crooning before “fainting” bobby soxers who’d been slipped a few dollars in advance to encourage their dizzy spells. You would think the practice of poseur appreciators and persuaders would be passé in our time, when there are bots and algorithms to goad the gormless, but there are things about human flesh that still cannot be replicated by machines. In some cases, all the world’s a stage and we’re all merely players–or at least some of us who are being compensated for pretending to be paparazzi or protesters or proponents.

In “Crowd Source,” Davy Rothbart’s smart California Sunday Magazine article, the writer profiles a company that can make any carpet red and anyone a Kardashian, selling the aura of popularity in this Reality TV era. They offer a little extra–they offer extras, with titles like “Selfie Guy.” The opening:

The text message says to show up at the Los Angeles Airport Marriott Hotel at 11 a.m. on a Monday. But through some combination of traffic and my own chronic lateness, I find myself rushing into the lobby at 12 minutes after, aware that it’s not a good look to be late for work, my first day on a new job.

I’ve been hired by a company called Crowds on Demand. If you need a crowd of people — for nearly any reason — Crowds on Demand can make it happen. Now it has taken me on as one of its crowd members, although the specifics remain a mystery. It’s an odd sensation to be headed into a gig with no idea what task I’m expected to perform. All I know is that I’ll be making 15 bucks an hour.

In the hotel lobby, Adam Swart, the company’s 24-year-old CEO, is greeting a dozen other recruits. Handsome, fit, sporting slacks and a button-down shirt, Adam bears an uncanny resemblance to House Speaker Paul Ryan, though he’s more than 20 years younger. He circles around us with manic energy, as though jacked up on six cups of coffee. While he gently reprimands me for my lateness, I take his tone to mean, You’re off the hook this time, but don’t do it again. He leads us downstairs to a ballroom in the basement and gives us the lowdown.

The Marriott, Adam explains, is hosting a conference for life coaches from around the country. As these folks arrive in the ballroom to register and pick up their badges, lanyards, and gift bags, our job is to treat them like mega-celebrities, to behave like a wild throng of fans desperate for their love. As it turns out, this is one of Crowds on Demand’s most popular services.•


There’s nothing theoretically impossible, I think, about superintelligent machines, and if humans go on long enough, they and even far stranger things will come to pass. But the Singularity is not near, nowhere near near. There are plenty of machine-related issues to worry about in the meantime: Weak AI may decimate employment, the Internet of Things will place us inside of a machine with no OFF switch and automation could lead to a cascading disaster. Machines needn’t be conscious to help or hurt. 

In a smart Aeon essay, Luciano Floridi analyzes the increasingly popular idea that AI is our biggest existential threat, even more so than climate change. An excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine.•
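
To make concrete Floridi’s point that our machines “are all versions of a Turing Machine,” here is a rough sketch of a minimal Turing machine simulator in Python. The example machine, which increments a binary number written on its tape, is invented for illustration rather than drawn from the essay; the point is only that any computation reduces to a finite table of state transitions over a tape, and it is against that abstract model that results like undecidability are proved.

```python
# Minimal Turing machine simulator. The example machine increments a
# binary number on the tape; it is invented for illustration only.

def run(transitions, tape, state="right", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Transition table: (state, symbol read) -> (next state, symbol to write, head move)
increment = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),  # passed the last digit; start carrying
    ("carry", "1"): ("carry", "0", -1),  # 1 plus carry becomes 0, keep carrying left
    ("carry", "0"): ("halt",  "1",  0),  # absorb the carry and stop
    ("carry", "_"): ("halt",  "1",  0),  # carry overflowed past the left end
}

print(run(increment, "1011"))  # prints 1100: binary 11 + 1 = 12
```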


Mathew Brady; President Lincoln; Ulysses S. Grant; Robert E. Lee.


The Civil War would have a name without Mathew Brady but not a face.

Other notable photographers worked in that tumultuous, internecine period, but it was Brady and his pioneering photojournalism that truly captured the visages burdened by the fate of a nation. While Brady was rich in life experience, his relentless attempt to record the Civil War with the expensive wet-plate process essentially bankrupted him. He expected the U.S. government to eagerly purchase his trove in the post-war period and restore his financial standing, but the money never materialized. Brady died penniless in the charity ward of New York’s Presbyterian Hospital in 1896. Two years before his death, a Brooklyn Daily Eagle article misspelled his first name while chronicling how money troubles cost him his gallery in Washington D.C.


Speaking of mind-altering substances: as a teenager, the French Surrealist writer René Daumal blasted his brain with the carbon tetrachloride he normally used to kill beetles for his insect collection. Not a good idea. By the time he was 36, he’d joined the bugs in the great beyond, no doubt in part because of his amateur chemistry experiments.

Known primarily today for the novel Mount Analogue: A Tale of Non-Euclidean and Symbolically Authentic Mountaineering Adventures, which Alejandro Jodorowsky used as the basis for his crazy-as-fuck 1973 film, The Holy Mountain, Daumal’s recollection of his auto-dosing, “A Fundamental Experiment,” was reprinted in a 1965 issue of the Psychedelic Review. The opening:

The simple fact of the matter is beyond telling.  In the 18 years since it happened, I have often tried to put it into words.  Now, once and for all, I should like to employ every resource of language I know in giving an account of at least the outward and inward circumstances. This ‘fact’ consists in a certainty I acquired by accident at the age of sixteen or seventeen; ever since then, the memory of it has directed the best part of me toward seeking a means of finding it again, and for good.

My memories of childhood and adolescence are deeply marked by a series of attempts to experience the beyond, and those random attempts brought me to the ultimate experiment, the fundamental experience of which I speak.

At about the age of six, having been taught no kind of religious belief whatsoever, I struck up against the stark problem of death.

I passed some atrocious nights, feeling my stomach clawed to shreds and my breathing half throttled by the anguish of nothingness, the ‘no more of anything’.

One night when I was about eleven, relaxing my entire body, I calmed the terror and revulsion of my organism before the unknown, and a new feeling came alive in me; hope, and a foretaste of the imperishable. But I wanted more, I wanted a certainty. At fifteen or sixteen I began my experiments, a search without direction or system.

Finding no way to experiment directly on death – on my death – I tried to study my sleep, assuming an analogy between the two.

By various devices I attempted to enter sleep in a waking state. The undertaking is not so utterly absurd as it sounds, but in certain respects it is perilous. I could not go very far with it; my own organism gave me some serious warnings of the risks I was running. One day, however, I decided to tackle the problem of death itself.

I would put my body into a state approaching as close as possible that of physiological death, and still concentrate all my attention on remaining conscious and registering everything that might take place.

I had in my possession some carbon tetrachloride, which I used to kill beetles for my collection. Knowing this substance belongs to the same chemical family as chloroform (it is even more toxic), I thought I could regulate its action very simply and easily: the moment I began to lose consciousness, my hand would fall from my nostrils carrying with it the handkerchief moistened with the volatile fluid. Later on I repeated the experiment –in the presence of friends, who could have given me help had I needed it.

The result was always exactly the same; that is, it exceeded and even overwhelmed my expectations by bursting the limits of the possible and by projecting me brutally into another world.

First came the ordinary phenomena of asphyxiation: arterial palpitation, buzzings, sounds of heavy pumping in the temples, painful repercussions from the tiniest exterior noises, flickering lights. Then, the distinct feeling: ‘This is getting serious. The game is up,’ followed by a swift recapitulation of my life up to that moment. If I felt any slight anxiety, it remained indistinguishable from a bodily discomfort that did not affect my mind.

And my mind kept repeating to itself: ‘Careful, don’t doze off. This is just the time to keep your eyes open.’

The luminous spots that danced in front of my eyes soon filled the whole of space, which echoed with the beat of my blood – sound and light overflowing space and fusing in a single rhythm. By this time I was no longer capable of speech, even of interior speech; my mind travelled too rapidly to carry any words along with it.

I realized, in a sudden illumination, that I still had control of the hand which held the handkerchief, that I still accurately perceived the position of my body, and that I could hear and understand words uttered nearby–but that objects, words, and meanings of words had lost any significance whatsoever. It was a little like having repeated a word over and over until it shrivels and dies in your mouth: you still know what the word ‘table’ means, for instance, you could use it correctly, but it no longer truly evokes its object.

In the same way everything that made up ‘the world’ for me in my ordinary state was still there, but I felt as if it had been drained of its substance. It was nothing more than a phantasmagoria – empty, absurd, clearly outlined, and necessary all at once.

This ‘world’ lost all reality because I had abruptly entered another world, infinitely more real, an instantaneous and intense world of eternity, a concentrated flame of reality and evidence into which I had cast myself like a butterfly drawn to a lighted candle.

Then, at that moment, comes the certainty; speech must now be content to wheel in circles around the bare fact.

Certainty of what?

Words are heavy and slow, words are too shapeless or too rigid. With these wretched words I can put together only approximate statements, whereas my certainty is for me the archetype of precision. In my ordinary state of mind, all that remains thinkable and formulable of this experiment reduces to one affirmation on which I would stake my life: I feel the certainty of the existence of something else, a beyond, another world, or another form of knowledge.

In the moment just described, I knew directly, I experienced that beyond in its very reality.

It is important to repeat that in that new state I perceived and perfectly comprehended the ordinary state of being, the latter being contained within the former, as waking consciousness contains our unconscious dreams, and not the reverse. This last irreversible relation proves the superiority (in the scale of reality or consciousness) of the first state over the second.

I told myself clearly: in a little while I shall return to the so-called ‘normal state’, and perhaps the memory of this fearful revelation will cloud over; but it is in this moment that I see the truth.

All this came to me without words; meanwhile I was pierced by an even more commanding thought. With a swiftness approaching the instantaneous, it thought itself so to speak in my very substance: for all eternity I was trapped, hurled faster and faster toward ever imminent annihilation through the terrible mechanism of the Law that rejected me.

‘That’s what it is. So that’s what it is.’

My mind found no other reaction. Under the threat of something worse, I had to follow the movement.

It took a tremendous effort, which became more and more difficult, but I was obliged to make that effort, until the moment when, letting go, I doubtless fell into a brief spell of unconsciousness. My hand dropped the handkerchief, I breathed air, and for the rest of the day I remained dazed and stupefied, with a violent headache.•


“Nothing in your education or experience can have prepared you for this film.”


John McAfee, who’s never been charged with murder, is a Philip K. Dick character of his own making, speeded-up and paranoid. The erstwhile anti-virus emperor says he’s returning to the field of security software, but who the fuck knows. McAfee’s apparently found financial backing, but he seems better suited to manning a gunboat in the proximity of a banana republic. From Richard Waters in the Financial Times:

John McAfee, the controversial former software boss, has made a move to win back a leading role in the security software industry that he helped to pioneer, taking the helm of a tiny public investment vehicle and declaring his aim of turning it into “a successful and major force in the space”.

Mr McAfee, creator of the widely used antivirus software that bears his name, sold his first company to Intel for $7.6bn six years ago, in one of the biggest software transactions ever. But he made international headlines four years ago when he went on the run after becoming the focus of a manhunt in Belize following the murder of his neighbour there. He fled over the border into Guatemala, before being deported back to the US at his request. He was never arrested or charged in the murder.

Mr McAfee’s erratic behaviour and claims that he was afraid for his safety if he was arrested by the local police prompted the Belize prime minister to suggest he was “bonkers.” He has since maintained an outspoken public stance on tech policy issues, including putting himself forward as an independent candidate in this year’s US presidential elections and denouncing the FBI’s attempt to force Apple to grant access to one of its iPhones this year as “the beginning of the end of the US as a world power.”•


Donald Trump, the dunce cap on America’s pointy head, has been enabled by traditional media, new media and a besieged American middle class, as he’s attempted to become our first Twitter President. Mostly, though, I think he’s been abetted by the large minority of racist citizens who want someone to blame, especially in the wake of our first African-American President and recent myriad examples of social progress.

Trump is no mastermind. He seems to have gotten into the race impetuously to burnish his idiotic brand–you know, Mussolini as an insult comic. His main asset in this campaign season has been an utter shamelessness, a willingness to stoop as low as he needs to go. Whether that’s a prescription for general-election victory, we’ll soon see.

It’s true that in a more centralized media and political climate, the hideous hotelier would have likely been squeezed from the process by gatekeepers, but the more unfettered new normal only gave him opportunity, not the nomination. I don’t think dumb tweets and smartphones made the troll a realistic contender for king. It was we the people.

In a pair of pieces, Nick Bilton of Vanity Fair and Rory Cellan-Jones of the BBC see technology as the main cause for the rise of Trump, if in different ways. Excerpts from each follow.


From Bilton:

I’ve heard people say that if it wasn’t for CNN, FOX, and a dozen other television outlets that have “handed Trump the microphone,” there would be no Trump. But with all due respect to the television media, they’re just not that important anymore. Perhaps his popularity is a result of a broken political system, others suggest. But let’s be realistic, people have always believed the system is broken. (It’s that same broken system, it should be noted, that has helped create many of the disruptive unicorns in Silicon Valley.)

The only thing that’s really changed between Trump’s other attempts to run for office and now is the advent of social media. And Trump, who has spent his life offending people, knows exactly how to bend it to his will. Just look at what happens if someone says something even remotely politically incorrect today: the online immune system, known famously as a Twitter mob, sets in to hold that person accountable. These mobs demand results, like seeing someone fired, making them shamefully apologize, or even seeing their life torn to shreds.

Yet someone like Donald Trump doesn’t get fired, or apologize, which only makes the mobs grow more fervent and voluble. And the louder they get, the more the news media covers the backlash. The more the TV shows talk about him, the more we all talk about him. If you want to truly comprehend why Trump is so popular, you just have to behold what people are saying in 140 characters or less. It’s the same thing Kim Kardashian and Kanye West, and anyone else who wants attention, understand. If we’re talking about them, they’re winning the war for attention. No one knows this better than Trump. Prod the social-media tiger, you get attention: say Mexicans are rapists, make fun of the disabled, pick a fight with the Pope, attack women, call the media dumb, and social media shines a big, bright spotlight on Donald.

Arianna Huffington may have once famously decided to cover Trump in the entertainment section of the Huffington Post, but the reality is we now live in a world where there is no line between entertainment, politics, and media. And I know Silicon Valley knows this, because they are the ones that helped eviscerate it.•


From Cellan-Jones:

Over the past year we have seen plenty of warnings about the potential impact of robots and artificial intelligence on jobs.

Now one of the leading prophets of this robot revolution has told the BBC he is already seeing another side-effect of automation – the rise of politicians such as Donald Trump and the Democratic presidential hopeful Bernie Sanders.

Martin Ford’s Rise of the Robots won all sorts of awards for its compelling account of a wave of automation sweeping through every area of our lives, posing a serious threat to our economic well-being. But there has also been plenty of pushback from economists who reckon his conclusion is wrong and that, as in previous industrial revolutions, the overall impact on jobs will be positive.

In London to speak at a conference on robots held by the Bank of America, he told me that he didn’t think this latest technology upheaval would be as benign as in the past: “The thing is that this time machines are now in some sense beginning to think. And what that means is we’re seeing machines encroach on the kind of capabilities that set humans apart.”

He sees the robots moving up the value chain, threatening any jobs which involve humans sitting in front of screens dealing with information – the kind of work which we used to think offered security to middle-class people with average skills.•


Whether we’re talking about baseball umpires or long-haul truckers, I’m not so concerned about machines ruining the “romance” of traditional human endeavor, but I am very worried about technological unemployment destabilizing Labor. Perhaps history will repeat itself and more and better jobs will replace the ones likely to be disappeared in the coming decades, but even just the perfection of driverless cars will create a huge pothole in society. The Gig Economy is a diminishing of the workforce, and even those positions are vulnerable to automation. Maybe things will work themselves out, but it would be far better if we’re prepared for a worst-case scenario.

Excerpts from two articles follow: 1) Mark Karlin’s Truthout interview with Robert McChesney, co-author of People Get Ready, and 2) a Manu Saadia Tech Insider piece, “Robots Could Be a Big Problem for the Third World.”


From Truthout:

Question:

Let me start with the grand question raised by your book written with John Nichols. I think it is safe to say that the conventional thinking of the “wisdom class” for decades has been that the more advanced technology becomes (including robots and automated means of production, service and communication), the more beneficial it will be for humans. What is the basic challenge to that concept at the center of the new book by you and John?

Robert W. McChesney:

The conventional wisdom, embraced and propagated by many economists, has been that while new technologies would disrupt and eliminate many jobs and entire industries, they would also create new industries, which would eventually have as many or more new jobs, and that these jobs would generally be much better than the jobs that had been lost to technology.

And that has been more or less true for much of the history of industrial capitalism. Vastly fewer people were needed to work on farms by the 20th century and many ended up in factories; fewer are now needed in factories and they end up in offices. The new jobs tended to be better than the old jobs.

But we argue the idea that technology will create a new job to replace the one it has destroyed is no longer operative. Nor is the idea that the new job will be better than the old job, in terms of compensation and benefits. Capitalism is in a period of prolonged and arguably indefinite stagnation.•


From Tech Insider:

The danger lies in the transition to an economy where the cost of making stuff—industry—has become more or less like agriculture today (with very few people employed and a very low share of GDP). With appropriate policies in place, developed countries can probably manage that transition. They have in the past, and therefore it is safe to assume they most likely will in the future. It does not mean that we will not experience dislocations and conflicts, but we do have old and established institutions—government, the press, the public sphere—that allow us to resolve such conflicts over time for the greater benefit of all.

The real challenge will be beyond our comfortable borders, in the developing world. In both nineteenth-century Europe and twentieth-century Asia, national development has followed a similar pattern. People moved from the countryside to urban centers to take advantage of higher-paying jobs in factories and services. Again, South Korea offers a startling, fast-forward example of that: it underwent a complete transformation from a poor, rural country to a postindustrial, hyperurban powerhouse in less than fifty years. It was so rapid that most visible traces of the past have been erased and forgotten. The national museum in Seoul has a life-size reconstruction of a Seoul street in the 1950s, just like we have over here, but for the colonial era. And imagine this: China went down that very same path at an even faster clip. Half a billion impoverished people turned into middle-class consumers in three decades.

However, this may not happen again if manufacturing is reduced to the status of agriculture, a highly rationalized activity (read: employing very few people). The historically proven path to economic growth and prosperity taken by Korea and China might no longer be available to the next countries.•

Tags: , ,

Babe Ruth Slides Home

Count me among those wholeheartedly ready for robots to replace home-plate baseball umpires. Ball-and-strike calls are wrong about 10% of the time even with the best of umpires, and that leaves an awful lot of wiggle room for not only honest fallibility but even chicanery. To err is human, I know, but perhaps so is coming up with solutions to reduce incompetence? Experiments with robot umps, which began as far back as 1950, should be resumed today in the minor leagues. Then the buckets of bolts should be promoted.

Jason Gay, a talented writer for the Wall Street Journal, isn’t so sure. He believes something will be lost as something’s gained in the transfer of duties from carbon to silicon, not only because machines also malfunction (though less often, most likely), but also because of bigger-picture issues. An excerpt that pivots off of David Ortiz’s disputed strikeout at Yankee Stadium this weekend:

Disputed calls like that invariably provoke chatter about a surprisingly doable proposal: robot umps. Precise camera tech to pinpoint balls and strikes has existed for years. Even if the pitch tech at Yankee Stadium showed the calls against Ortiz were not so egregious, the suggestion is clear: Had a “robo-ump” been on ball-and-strikes duty, Big Papi may have marched to first base and tied a game the Red Sox instead wound up losing.

Seems reasonable, right? Whenever possible, shouldn’t tech be used to make the proper call? There are loads of examples of technology improving accuracy in sports—Hawk-Eye line-calling in tennis, for one, is crisp, quick and enjoyably theatrical (fans clap in anticipation!). The NFL, meanwhile, uses an oddball system in which an official crawls under Dracula’s cape to review replays. It mostly works, even if it often takes longer than a bus trip to Maine, and no one on earth seems to know what a catch is in the NFL anymore.

That’s a good reminder that technology isn’t a guaranteed savior. Not every play is reviewable. Machines falter. Software glitches. Some inevitabilities in life are utterly resistant to modernization, like making the bed, or LaGuardia Airport.•

Tags: ,

leary2 (1)

Although lysergic acid diethylamide was, early in its discovery phase, considered a possible treatment for serious mental-health issues, it came to be seen during the ’60s, through the urging of Richard Alpert and Dr. Timothy Leary and others, as a societal powerwash of sorts, a tonic to radically remove the corrupting, conforming influences of gods and governments, a way to awaken the soporific, a means of cleansing the doors of perception.

Revolutions are messy, however, and freakouts and flying teenagers did not stamp a smiley face on the “medicine.” It was just plain dangerous to unloose such unregulated experimentation into the world. Even Leary himself, who proselytized at campuses and correctional facilities alike, thought all along that the drug was a short-term panacea with diminishing returns, that soon something else would have to wake up the “beloved robots”–perhaps it would be computer software. Serious academic interest in the drug unsurprisingly idled.

Decades later, there are fewer flashbacks of the dosing and overdosing, and LSD is gaining currency again as a legitimate means of medical treatment. But will it ever shake off its bad reputation? And can its very real dangers be sufficiently neutralized?

From Jon Kelly at the BBC:

Mention LSD and you might think of the 1960s counterculture – kaftanned hippies in San Francisco, or the more adventurous end of the Beatles’ back catalogue, or the tragedy of Pink Floyd singer Syd Barrett losing his grip on reality.

But for the first time, researchers say they have visualised how LSD alters the way the brain works.

A team at Imperial College London says they found it broke down barriers between areas that control functions like vision, hearing and movement. The study was with a small group – 20 subjects – but the researchers say it could lead to a revolution in the way addiction, anxiety and depression are treated.

For the past decade and a half, academics around the world have been studying whether psychedelic substances that cause hallucinations, changes in perception and mind-altering states could have medical benefits.

But this isn’t the first time we’ve been here. Back in the 1960s there were high hopes for the therapeutic potential of psychedelics, too. Four major scientific conferences were held on the subject. Thousands of papers were published.

But soon enough fears over the recreational use of LSD – or lysergic acid diethylamide, to give its full title – ensured research all but ground to a halt.•

Tags:
