Science/Tech


The Man From Mars.

It wasn’t a commercial triumph like his namesake organ, but Laurens Hammond’s “Teleview” projection system for early 3-D films was critically acclaimed. The set-up was installed in Manhattan’s Selwyn Theater in the early 1920s, and moviegoers were treated to screenings of The Man From Mars, a stereoscopic film made especially for Teleview, which was shown on a large screen and on individual viewing devices attached at each seat. It apparently looked pretty great. Alas, the equipment and installation were costly, and no other cinemas adopted the technology. From the December 17, 1922 Brooklyn Daily Eagle:

In a great, wide-ranging Edge piece, Martin Rees meditates on everything from the Big Bang to a potential post-human age in space, when genetic modification and cyborgism could make for a comfortable life in what are currently severely inclement conditions. We will live beyond Earth, but it won’t really be us.

“I’m optimistic that within ten years or so, we will have an understanding of how life began on the Earth,” he writes, an understanding that will enable us to gauge how likely life is on the billions of planets in our galaxy. The astronomer argues that any life in the inhospitable environs of outer space has probably already successfully transitioned into conscious machines, and that earthlings will have to master something similar to get anyone beyond the “crazy pioneers” to purchase a one-way ticket to Mars.

The shrinking of technological hardware makes it most sensible for us, certainly for now and likely in the long run, to send space probes with smart machines rather than half-mad humans, though I don’t doubt some of the latter will make their way out there.

From Rees:

Even though the rate of progress is uncertain, the direction of travel is pretty well agreed. It’s almost certainly going to be towards a posthuman world, where our intelligences would be surpassed by something genetically engineered from us or, more likely, it will be some sort of artificial electronic device that has robotic abilities and intelligence.

Some people say that will happen within a century, others say it will happen within a few hundred years. Even if it takes a few hundred years, that is a tiny instant compared to the past history of the Earth. More importantly, it’s a tiny instant compared to a long-range future. There are billions of years ahead for our solar system, and maybe even more for the universe.

If you imagine a time chart for what’s happened on the Earth, there’s been 4 billion years where there’s been no manifestation of any technology. Then, a few millennia of gradually expanding technology generated by human beings. After that, maybe there will be billions of years more when the dominant technology, the dominant non-natural things, will be entirely inorganic. That means the following: If we were to detect some other planet on which life had taken a course similar to what happened here on Earth, it’s unlikely that its development there would be sufficiently synchronized with development here that we would catch it in those few millennia in which we’ve got technology that is controlled by organic beings like us. If it’s lagging behind what’s happened on Earth, then we’ll see no evidence for anything artificial.

On the other hand, if it’s ahead, then what we will detect—if we detect any evidence that that civilization existed—will be something mechanical, machines. Those machines maybe will not be on the planet because they may not want gravity, they may not want water, et cetera. They may be in space. If the Yuri Milner program detects anything, then it’s likely to be some artifact created by some long-dead civilization. It’s unlikely that there would be any coded message intended for us, but it might be something we could clearly see was not something that emerged naturally. That in itself would be very exciting.

To expand on what’s going to happen here on Earth that might lead to this takeover by posthumans in some form leads to another fascinating topic: the future of manned spaceflight. …

I don’t think Elon Musk is realistic when he imagines sending people a hundred at a time for normal life because Mars is going to be far less clement than living at the South Pole, and not many people want to do that. I don’t think there will be many ordinary people who want to go, but there will be some crazy pioneers who will want to go, even if they have one-way tickets.

The reason that’s important is the following: Here on Earth, I suspect that we are going to want to regulate the application of genetic modification and cyborg techniques on grounds of ethics and prudence. This links with another topic I want to come to later about the risks of new technology. If we imagine these people living as pioneers on Mars, they are out of range of any terrestrial regulation. Moreover, they’ve got a far higher incentive to modify themselves or their descendants to adapt to this very alien and hostile environment.

They will use all the techniques of genetic modification, cyborg techniques, maybe even linking or downloading themselves into machines, which, fifty years from now, will be far more powerful than they are today. The posthuman era is probably not going to start here on Earth; it will be spearheaded by these communities on Mars.•


I’m given pause when someone compares the Internet to the printing press because the difference of degree between the inventions is astounding. For all the liberty Gutenberg’s contraption brought to the printed word, it was a process that overwhelmingly put power into the hands of disparate professionals. Sure, eventually with Xeroxes, anyone could print anything, but the vast majority of reading material produced was still overseen by professional gatekeepers (publishers, editors, etc.) who, on average, did the bidding of enlightenment.

By 1969, Glenn Gould believed the new technologies would allow for the sampling, remixing and democratization of creativity, that erstwhile members of the audience would ultimately ascend and become creators themselves. He hated the hierarchy of live performance and was sure its dominance would end. “The audiences [will] become the performer to a large extent,” he predicted. He couldn’t have known how right he was.

The Web has indeed brought us a greater degree of egalitarianism than we’ve ever possessed, as the centralization of media dissipated and the “fans” rushed the stage to put on a show of their own. Now here we all are, crowded into the spotlight, a turn of events that’s been both blessing and curse. The utter democratization and the filter bubbles that have attended this phenomenon of endless channels have proven, paradoxically (thus far), a threat to democracy. It’s acknowledged even by those who’ve been made billionaires by these new tools that “the Internet is the largest experiment involving anarchy in history,” though they never mention when some semblance of order might return.

In Stephen Fry’s excellent recent Hay Festival lecture “The Way Ahead” (h/t The Browser), the writer and actor spoke on these same topics and other aspects of the Digital Age that are approaching with scary velocity. Like a lot of us, he was an instant convert to Web 1.0, charmed by what it delivered and awed by its staggering potential. Older, wiser and sadder for his knowledge of what’s come to pass, Fry tries to foresee what is next in a world in which 140 characters can not only help topple tyrants but can create them as well, knowing that the Internet of Things will only further complicate matters. Odds are life may be greater and graver. He offers one word of advice: Prepare.

An excerpt: 

Gutenberg’s printing revolution, by way of Das Kapital and Mein Kampf, by way of smashed samizdat presses in pre-Revolutionary Russia, by way of The Origin of Species and the Protocols of the Elders of Zion, by way of the rolling offset lithos of Fleet Street, Dickens, Joyce, J. K. Rowling, Mao’s Little Red Book and Hallmark greetings cards brought us to the world into which all of us were born, it brought us, amongst other things – quite literally – here to Hay-on-Wye. I started coming to this great festival before the word Kindle had a technological meaning, when an “e-book” might be a survey of 90s Rave drug Culture, or possibly an Ian McMillan glossary of Yorkshire Dialect.

Printed books haven’t gone away, indeed, we are, most of us I suspect, pleased to learn how much they have come roaring back, in parallel with vinyl records and other instances of analogue refusal to die. But the difference between an ebook and a printed book is as nothing when set beside the influence of digital technology as a whole on the public weal, international polity and the destiny of our species. It has embedded itself in our lives with enormous speed. If you are not at the very least anxious about that, then perhaps you have not quite understood how dependent we are in every aspect of our lives – personal, professional, health, wealth, transport, nutrition, prosperity, mind, body and spirit.

The great Canadian Marshall McLuhan – philosopher should one call him? – whose prophetic soul seems more and more amazing with each passing year, gave us the phrase the ‘Global Village’ to describe the post-printing age that he already saw coming back in the 1950s. Where the Printing Age had ‘fragmented the psyche’ as he put it, the Global Village – whose internal tensions exist in the paradoxical nature of the phrase itself: both Global and a village – this would tribalise us, he thought, and actually regress us to a second oral age. Writing in 1962, before even ARPANET, the ancestor of the internet, existed, this is how he forecasts the electronic age which he thinks will change human cognition and behaviour:

“Instead of tending towards a vast Alexandrian library the world will become a computer, an electronic brain, exactly as in an infantile piece of science fiction. And as our senses go outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. […] Terror is the normal state of any oral society, for in it everything affects everything all the time. […] In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture.”

Like much of McLuhan’s writing, densely packed with complex ideas as they are, this repays far more study and unpicking than would be appropriate here, but I think we might all agree that we have arrived at that “phase of panic terrors” he foresaw.•


America desperately needs to win the race in AI, robotics, driverless cars, supercomputers, solar and other next-level sectors if the nation is to maintain its place in the world. If a powerful and wealthy democracy were to invest wisely and boldly, it would have a great advantage in such competitions with an autocracy like China. Unfortunately, we’ve never had a government less equipped or less willing to pull off this feat. Trump wants to make coal great again, and Mnuchin can’t see AI on his radar.

If the U.S. and the European states lose in these areas to China, infamous only a decade ago for its knockoff Apple Stores, the latter nation’s technological might and soft power will increase, further imperiling liberty.

The opening of a New York Times piece by Paul Mozur and John Markoff:

HONG KONG — Soren Schwertfeger finished his postdoctoral research on autonomous robots in Germany, and seemed set to go to Europe or the US, where artificial intelligence was pioneered and established.

Instead, he went to China.

“You couldn’t have started a lab like mine elsewhere,” Schwertfeger said.

The balance of power in technology is shifting. China, which for years watched enviously as the west invented the software and the chips powering today’s digital age, has become a major player in artificial intelligence, what some think may be the most important technology of the future. Experts widely believe China is only a step behind the US.

China’s ambitions mingle the most far-out sci-fi ideas with the needs of an authoritarian state: Philip K Dick meets George Orwell. There are plans to use it to predict crimes, lend money, track people on the country’s ubiquitous closed-circuit cameras, alleviate traffic jams, create self-guided missiles and censor the internet.

Beijing is backing its artificial intelligence push with vast sums of money. Having already spent billions on research programs, China is readying a new multibillion-dollar initiative to fund moonshot projects, start-ups and academic research, all with the aim of growing China’s A.I. capabilities, according to two professors who consulted with the government on the plan.•



During the heyday of the Magazine Age, when Playboy was still based in Chicago, Hugh Hefner thought most people would soon be enjoying his lifestyle. Well, not exactly his lifestyle.

The mansion, grotto and Bunnies were to remain largely unattainable, but he believed technology would help us remove ourselves from the larger world so that we each could create our own “little planet.” The gadgets he used five decades ago to extend his adolescence and recuse himself are now much more powerful and affordable. Hefner believed our new, personalized islands would be our homes, not our phones, but he was right in thinking that tools would make life more remote in some fundamental way.

In 1966, Oriana Fallaci interviewed Hefner for her book, The Egotists. Her sharp introduction and the first exchange follow.

_________________________

First of all, the House. He stays in it as a Pharaoh in a grave, and so he doesn’t notice that the night has ended, the day has begun, a winter passed, and a spring, and a summer–it’s autumn now. Last time he emerged from the grave was last winter, they say, but he did not like what he saw and returned with great relief three days later. The sky was then extinguished behind the electronic gate, and he sat down again in his grave: 1349 North State Parkway, Chicago. But what a grave, boys! Ask those who live in the building next to it, with their windows opening onto the terrace on which the bunnies sunbathe, in monokinis or notkinis. (The monokini consists of panties only, the notkini consists of nothing.) Tom Wolfe has called the house the final rebellion against old Europe and its custom of wearing shoes and hats, its need of going to restaurants or swimming pools. Others have called it Disneyland for adults. Forty-eight rooms, thirty-six servants always at your call. Are you hungry? The kitchen offers any exotic food at any hour. Do you want to rest? Try the Gold Room, with a secret door you open by touching the petal of a flower, in which the naked girls are being photographed. Do you want to swim? The heated swimming pool is downstairs. Bathing suits of any size or color are here, but you can swim without, if you prefer. And if you go into the Underwater Bar, you will see the Bunnies swim as naked as little fishes. The House hosts thirty Bunnies, who may go everywhere, like members of the family. The pool also has a cascade. Going under the cascade, you arrive at the grotto, rather comfortable if you like to flirt; tropical plants, stereophonic music, drinks, erotic opportunities, and discreet people. Recently, a guest was imprisoned in the steam room. He screamed, but nobody came to help him. Finally, he was able to free himself by breaking down the door, and when he asked in anger why nobody came to his help–hadn’t they heard his screams?–they answered, “Obviously. But we thought you were not alone.”

At the center of the grave, as at the center of a pyramid, is the monarch’s sarcophagus: his bed. It’s large and round, and here he sleeps, he thinks, he makes love, he controls the little cosmos that he has created, using all the wonders that are controlled by electronic technology. You press a button and the bed turns through half a circle, the room becomes many rooms, the statue near the fireplace becomes many statues. The statue portrays a woman, obviously. Naked, obviously. And on the wall there are TV sets on which he can see the programs he missed while he slept or thought or made love. In the room next to the bedroom there is a laboratory with the Ampex video-tape machine that catches the sounds and images of all the channels; the technician who takes care of it was sent to the Ampex center in San Francisco. And then? Then there is another bedroom that is his office, because he does not feel at ease far from a bed. Here the bed is rectangular and covered with papers and photos and documentation on Prostitution, Heterosexuality, Sodomy. Other papers are on the floor, the chairs, the tables, along with tape recorders, typewriters, dictaphones. When he works, he always uses the electric light, never opening a window, never noticing the night has ended, the day begun. He wears pajamas only. In his pajamas, he works thirty-six hours, forty-eight hours nonstop, until he falls exhausted on the round bed, and the House whispers the news: He sleeps. Keep silent in the kitchen, in the swimming pool, in the lounge, everywhere: He sleeps.

He is Hugh Hefner, emperor of an empire of sex, absolute king of seven hundred Bunnies, founder and editor of Playboy: forty million dollars in 1966, bosoms, navels, behinds as mammy made them, seen from afar, close up, white, suntanned, large, small, mixed with exquisite cartoons, excellent articles, much humor, some culture, and, finally, his philosophy. This philosophy’s name is “Playboyism,” and, synthesized, it says that “we must not be afraid or ashamed of sex, sex is not necessarily limited to marriage, sex is oxygen, mental health. Enough of virginity, hypocrisy, censorship, restrictions. Pleasure is to be preferred to sorrow.” It is now discussed even by theologians. Without being ironic, a magazine published a story entitled “…The Gospel According to Hugh Hefner.” Without causing a scandal, a teacher at the School of Theology at Claremont, California, writes that Playboyism is, in some ways, a religious movement: “That which the church has been too timid to try, Hugh Hefner…is attempting.”

We Europeans laugh. We learned to discuss sex some thousands of years ago, before even the Indians landed in America. The mammoths and the dinosaurs still pastured around New York, San Francisco, Chicago, when we built on sex the idea of beauty, the understanding of tragedy, that is our culture. We were born among the naked statues. And we never covered the source of life with panties. At the most, we put on it a few mischievous fig leaves. We learned in high school about a certain Epicurus, a certain Petronius, a certain Ovid. We studied at the university about a certain Aretino. What Hugh Hefner says does not make us hot or cold. And now we have Sweden. We are all going to become Swedish, and we do not understand these Americans, who, like adolescents, all of a sudden, have discovered that sex is good not only for procreating. But then why are half a million of the four million copies of the monthly Playboy sold in Europe? In Italy, Playboy can be received through the mail if the mail is not censored. And we must also consider all the good Italian husbands who drive to the Swiss border just to buy Playboy. And why are the Playboy Clubs so famous in Europe, why are the Bunnies so internationally desired? The first question you hear when you get back is: “Tell me, did you see the Bunnies? How are they? Do they…I mean…do they?!?” And the most severe satirical magazine in the U.S.S.R., Krokodil, shows much indulgence toward Hugh Hefner: “[His] imagination is indeed inexhaustible…The old problem of sex is treated freshly and originally…”

Then let us listen with amusement to this sex lawmaker of the Space Age. He’s now in his early forties. Just short of six feet, he weighs one hundred and fifty pounds. He eats once a day. He gets his nourishment essentially from soft drinks. He does not drink coffee. He is not married. He was briefly, and he has a daughter and a son, both teen-agers. He also has a father, a mother, a brother. He is a tender relative, a nepotist: his father works for him, his brother, too. Both are serious people, I am informed.

And then I am informed that the Pharaoh has awakened, the Pharaoh is getting dressed, is going to arrive, has arrived: Hallelujah! Where is he? He is there: that young man, so slim, so pale, so consumed by the lack of light and the excess of love, with eyes so bright, so smart, so vaguely demoniac. In his right hand he holds a pipe: in his left hand he holds a girl, Mary, the special one. After him comes his brother, who resembles Hefner. He also holds a girl, who resembles Mary. I do not know if the pipe he owns resembles Hugh’s pipe because he is not holding one right now. It’s a Sunday afternoon, and, as on every Sunday afternoon, there is a movie in the grave. The Pharaoh lies down on the sofa with Mary, the light goes down, the movie starts. The Bunnies go to sleep and the four lovers kiss absentminded kisses. God knows what Hugh Hefner thinks about men, women, love, morals–will he be sincere in his nonconformity? What fun, boys, if I discover that he is a good, proper moral father of Family whose destiny is paradise. Keep silent, Bunnies. He speaks. The movie is over, and he speaks, with a soft voice that breaks. And, I am sure, without lying.

Oriana Fallaci:

A year without leaving the House, without seeing the sun, the snow, the rain, the trees, the sea, without breathing the air, do you not go crazy? Don’t you die with unhappiness?

Hugh Hefner:

Here I have all the air I need. I never liked to travel: the landscape never stimulated me. I am more interested in people and ideas. I find more ideas here than outside. I’m happy, totally happy. I go to bed when I like. I get up when I like: in the afternoon, at dawn, in the middle of the night. I am in the center of the world, and I don’t need to go out looking for the world. The rational use that I make of progress and technology brings me the world at home. What distinguishes men from other animals? Is it not perhaps their capacity to control the environment and to change it according to their necessities and tastes? Many people will soon live as I do. Soon, the house will be a little planet that does not prohibit but helps our relationships with the others. Is it not more logical to live as I do instead of going out of a little house to enter another little house, the car, then into another little house, the office, then another little house, the restaurant or the theater? Living as I do, I enjoy at the same time company and solitude, isolation from society and immediate access to society. Naturally, in order to afford such luxury, one must have money. But I have it. And it’s delightful.•

That Mark Zuckerberg’s self-described religious conversion and his 50-state “listening tour” have been carefully managed, documented and publicized for public consumption is undeniable, but let’s not suppose that something so staged will be unsuccessful. After all, there’s never been a more obvious con man than Donald Trump, so let us never, ever again underestimate the propensity of Americans to be impressed by fabulously wealthy celebrities going through the motions. Enough of us assume they have to be brilliant and special.

Maybe the founder of Facebook, the platform of choice for Alt-Reich enthusiasts, is really prepping for a 2020 Presidential run that will be aided by his media holdings–like Berlusconi minus all the fascinating bunga bunga?–or perhaps he’s just trying on a new style, like when he was killing the animals he ate, being a proud Atheist or saying idiotic pseudo-philosophical things about dying Africans. Sure, it’s possible he’s truly changed and grown, but real personal development is not usually connected to the end of a selfie stick.

Regardless, there are many Americans who’d be far better in the Oval Office and at least one who’s way worse.

From Mike Isaac of the New York Times:

CAMBRIDGE, Mass. — In March, Mark Zuckerberg visited the Emanuel African Methodist Episcopal Church in Charleston, S.C., the site of a mass murder by a white supremacist.

Last month, he went to Dayton, Ohio, to sit down with recovering opioid addicts at a rehabilitation center.

And he spent an afternoon in Blanchardville, Wis., with Jed Gant, whose family has owned a dairy and beef cattle farm for six generations.

These were all stops along a road trip by Mr. Zuckerberg, Facebook’s chief executive, across the United States this year. His goal: to visit every state in the union and learn more about a sliver of the nearly two billion people who regularly use the social network.

On Thursday, in a commencement speech at Harvard, from which he dropped out in 2005, Mr. Zuckerberg discussed how his views on how people live and work with one another had broadened, partly as a result of what he has seen on the tour. He said he had come to realize that churches, civic centers and other organized meeting places are integral to building and maintaining a strong sense of community.

“As I’ve traveled around, I’ve sat with children in juvenile detention and opioid addicts, who told me their lives could have turned out differently if they just had something to do, an after-school program or somewhere to go,” said Mr. Zuckerberg, who also received an honorary doctoral degree at the ceremony. “I’ve met factory workers who know their old jobs aren’t coming back and are trying to find their place.”

To his critics, Mr. Zuckerberg’s road trip is a stunt and has taken on the trappings of a political campaign. His every pit stop — eating with a farming family in Ohio; feeding a baby calf at a farm in Wisconsin — has been artfully photographed and managed, and then posted to Mr. Zuckerberg’s Facebook page.

“He has all of the mechanics needed for a massive, well-staged media operation,” said Angelo Carusone, president of Media Matters for America, a nonprofit media watchdog group. “Photographers, handlers, its size, scope and scale — all the ingredients are there. And he’s appearing in an environment where there’s no sole Democratic leader or counterbalance to Trump, who’s consuming all the oxygen in media.”

Mr. Zuckerberg has publicly denied that he is using the visits as a platform to run for public office.•


Social mobility as it relates to geography, gender, integration, education and other factors is at the heart of much of the research conducted by Stanford economist Raj Chetty. An erstwhile wunderkind who’s still very young at 37, the academic, an immigrant from New Delhi whose family relocated to Milwaukee when he was a child, has often wondered what enabled his success. Certainly native genius was a key component, and having a father who was an economist and a mother who was a pulmonologist didn’t hurt, but how much did physical location and primary and secondary schools matter?

It’s a topic I consider often, not only because the American Dream has been dragging for many for decades, but because I grew up in a lower-income, blue-collar neighborhood that didn’t have a bookstore. It was hard to get from here to there, and part of the problem went beyond money, location and access, though those factors undoubtedly loomed large. The problem was also cultural, as scholarly achievements–even a mere love of reading–were viewed as a “sellout” of sorts. Don’t know if that’s still the situation where I’m from, but I bet it stubbornly persists in other quarters of the country.

Certainly the nativism and scapegoating of the most recent Presidential election was so shockingly acceptable to so many citizens in part because of our ever-widening economic segregation. The terrible outcome of that race will likely only exacerbate the issue.

Tyler Cowen just interviewed Chetty. Three excerpts follow.


Tyler Cowen:

It’s a common view, derived from William Baumol and Bowen, that education is subject to a kind of cost disease, that it’s harder and harder to augment productivity, wages rise in other sectors of the economy, education takes a rising share of GDP but doesn’t really get much better. Do you accept that story, or, if not, how would you modify it? Are we doomed to low productivity growth in K–12 education?

Raj Chetty:

I don’t think so because, while in some limited case that might end up being true, at the moment I see so many opportunities within the US K–12 education system to potentially have significantly higher productivity without dramatically higher cost. Let me give you an example. Coming back to the case of teachers, my sense is, if we were to try to keep the most effective teachers in the classroom and either retrain or dismiss the teachers who are less effective, we could substantially increase productivity without significantly increasing cost.

Tyler Cowen:

But say we do that. What do we do next?

Raj Chetty:

I think eventually it’s conceivable that you move up the quality ladder, and you’ve got everybody getting a very good primary school education. Then you need to work on secondary education and so forth. But there again, I would say there are lots of bargains to be found.

In our most recent work looking at colleges and upward mobility, we see that there are a number of colleges where kids seem to be doing extremely well that are not all that expensive. Also, I think, here a macroeconomic perspective is useful. If you look at countries that have some of the best educational outcomes, like Scandinavian countries, they’re not actually spending dramatically more than the United States.

At some abstract level, I think that logic has to be right, that eventually, in order to raise the level of education beyond some point, we’re going to have to spend more and more on that, but I don’t think we’re close enough empirically to such a point that that is really a critical consideration at the moment.


Tyler Cowen:

If you told the story about molecules impinging on your body and impelling you to action, what’s the best story you can come up with for Iowa, say, or Utah?

Raj Chetty:

Yeah, a few different things. Iowa is known for having very good public schools for a long time.

Tyler Cowen:

But that too is arguably just part of the package.

Raj Chetty:

Yes. Where did that come from? Why does Iowa have good public schools?

Tyler Cowen:

Right.

Raj Chetty:

One of the strong correlates we find is that places that are more integrated across socioeconomic groups, that have lower segregation, tend to have better outcomes for kids. And that kind of thing in a rural area — you can see why that occurs and why it might lead to better outcomes.

If you live in a big city, it’s very easy to self-segregate in various ways. You live in a gated community, you send your kids to a private school. You essentially don’t interact with people from different socioeconomic classes. If you live in a small town in Iowa, pretty much there’s one place your kids are going to go to school. There’s one set of activities that you can all participate in. And that is likely to lead to more integration.


Tyler Cowen:

As I’m sure you know, since the 1990s, segregation by income has been rising in this country. And here, Silicon Valley is one of the most extreme cases of that. So seeing that, are you on net a segregation optimist or pessimist? If I may ask.

Raj Chetty:

I think current trends suggest that segregation will continue to grow in the US. Take the case of driverless cars, for example. One way that could go is, if you have access to driverless cars, it makes it all the more easy to go live further away in a secluded place, further reduce interaction, right?

So I think it’s very important to think about social policy in the context of that type of technology. How do you set cities up? How do you do urban planning and architecture in a way such that you don’t actually just facilitate more segregation? Such that you make it attractive to live in a more mixed-income community? That’s a key challenge, I think.•

Tags: ,

Overall I enjoyed Garry Kasparov’s Deep Thinking. I have philosophical disagreements with it, for sure, and there is some revisionism regarding his personal history, but the author’s take on his career developing parallel to the rise of the machines, and his Waterloo versus IBM, is fascinating. It’s clear that if there had been a different World Chess Champion during Kasparov’s reign, one who lacked his significant understanding of the meaning of computers and his maverick mindset, the game would have been impoverished for it. I’ll try to make time this weekend to write a long review.

The 20-year retrospective on Deep Blue’s 1997 victory would be incomplete without reflection by Steven Levy, who penned the famous Newsweek cover story “The Brain’s Last Stand” as a preface to the titanic match in which humanity sank. (It turns out Levy himself composed that perfectly provocative cover line that no EIC could refuse.)

The writer focuses in part on the psychological games that Deep Blue was programmed to play, an essential point to remember as computers are integrated into every aspect of life–when nearly every object becomes “smart.” Levy points out that no such manipulations were required for DeepMind to conquer Go, but those machinations might be revisited when states and corporations desire to nudge our behaviors.

An excerpt:

The turning point of the match came in Game Two. Kasparov had won the first game and was feeling pretty good. In the second, the match was close and hard fought. But on the 36th move, the computer did something that shook Kasparov to his bones. In a situation where virtually every top-level chess program would have attacked Kasparov’s exposed queen, Deep Blue made a much subtler and ultimately more effective move that shattered Kasparov’s image of what a computer was capable of doing. It seemed to Kasparov — and frankly, to a lot of observers as well — that Deep Blue had suddenly stopped playing like a computer (by resisting the catnip of the queen attack) and instead adopted a strategy that only the wisest human master might attempt. By underplaying Deep Blue’s capabilities to Kasparov, IBM had tricked the human into underestimating it. A few days later, he described it this way: “Suddenly [Deep Blue] played like a god for one moment.” From that moment Kasparov had no idea what — or who — he was playing against. In what he described as “a fatalistic depression,” he played on, and wound up resigning the game.

After Game Two, Kasparov was not only agitated by his loss but also suspicious at how the computer had made a move that was so…un-computer like. “It made me question everything,” he now writes. Getting the printouts that explained what the computer did — and proving that there was no human intervention — became an obsession for him. Before Game Five, in fact, he implied that he would not show up to play unless IBM submitted printouts, at least to a neutral party who could check that everything was kosher. IBM gave a small piece to a third party, but never shared the complete file.

Kasparov was not the same player after Game Two.•


“It was very easy, all the machines are only cables and bulbs.”

Tags: ,

The day of the ransomware WannaCry attack, I wrote that a “world in which everything is a computer–even our brains–is a fraught one.” We live in a time when we hold what are essentially supercomputers in our hands, but more and more we’re in their grip. When the Internet of Things becomes the thing, linking all items and enabling them to incessantly collect information, pretty much everything from refrigerators to roads will be hackable. A permanent cat and (computer) mouse game will begin in earnest, and this time we’ll be inside the machine.

As Bruce Schneier writes in his wise and wary Washington Post essay on the subject: “Solutions aren’t easy and they’re not pretty.” An excerpt:

Everything is becoming a computer. Your microwave is a computer that makes things hot. Your refrigerator is a computer that keeps things cold. Your car and television, the traffic lights and signals in your city and our national power grid are all computers. This is the much-hyped Internet of Things (IoT). It’s coming, and it’s coming faster than you might think. And as these devices connect to the Internet, they become vulnerable to ransomware and other computer threats.

It’s only a matter of time before people get messages on their car screens saying that the engine has been disabled and it will cost $200 in bitcoin to turn it back on. Or a similar message on their phones about their Internet-enabled door lock: Pay $100 if you want to get into your house tonight. Or pay far more if they want their embedded heart defibrillator to keep working.

This isn’t just theoretical. Researchers have already demonstrated a ransomware attack against smart thermostats, which may sound like a nuisance at first but can cause serious property damage if it’s cold enough outside. If the device under attack has no screen, you’ll get the message on the smartphone app you control it from.•

Tags:

In a 1979 Omni interview, Dr. Christopher Evans spoke with chess player, businessman and AI enthusiast David Levy, who defeated a computer-chess competitor that year but was unnerved by his hard-fought victory. Just six years earlier, he had confidently said: “I am tempted to speculate that a computer program will not gain the title of International Master before the turn of the century and that the idea of an electronic world champion belongs only in the pages of a science fiction book.” Levy knew before the matches at the end of the ’70s were over that our time of dominance was nearing its end.

An excerpt:

Omni:

When did you first begin to feel that computer chess programs were really getting somewhere?

David Levy:

I think it was at the tournament in Stockholm in 1974. One of the things that struck me was a game in which one of the American programs made the sacrifice of a piece, in return for which it got a very good positional advantage. Now, programs don’t normally give up pieces unless they can see something absolutely concrete, but in this case the advantages that it got were not concrete but rather in the structure or nature of the position. It wasn’t a difficult sacrifice for a human player to see, but it was something I hadn’t expected from a computer program. I was giving a running commentary on the game, and I remember saying to the audience that I would be very surprised indeed if the program made this sacrifice, whereupon it went and made it. I was very, very impressed, because this was the first really significant jump that I’d seen in computer chess.

Omni:

So somewhere around that time things began to stir. To what do you attribute this?

David Levy:

Interest in computer chess generally was growing at a very fast rate, for a number of reasons. First of all, there were the annual tournaments in the United States at the ACM conferences, and these grew in popularity. They inspired interest partly because there was now a competitive medium in which the programs could take part. Also, there was my bet, which had created a certain amount of publicity and, I suppose, made people wish that they could write the program that would beat me.

Omni:

How much of this has gone hand in hand with the gradually greater availability of computers and the fact that it no longer costs the earth to get access to one?

David Levy:

Quite a lot. As recently as 1972, in San Antonio, I met some people who were actually writing a clandestine computer program to play chess. They hadn’t dared tell their university department about it because they would have been accused of wasting computer time. They were even unable to enter their program in the tournament, because if they had they would have lost their positions at the university. Today the situation is dramatically changed, because it is so much easier to get machine time. Now, with the advent of home computers, I think it’s only a matter of time before everyone interested in computer chess will have the opportunity to write a personal chess program.

Omni:

Times have changed, haven’t they? Not very long ago you’d see articles by science journalists saying that computers could never be compared with brains, because they couldn’t play a decent game of chess. There was even some jocular correspondence about what would happen if two computers played each other, and it was argued that if white opened with pawn to king four, black would immediately resign.

David Levy:

This presupposes that chess is, in practical terms, a finite game. In theoretical terms it is, because there is a limit to the number of moves you can make in any position, and the rules of the game also put an upper limit on the total number of moves that any game can involve. But the number of possible different chess games is stupendous — greater than the number of atoms in the universe, in fact. Even if each atom in the universe were a very, very fast computer and they were all working together, they still would not be able to play the perfect game of chess. So the idea that pawn to king four as an opening move could be proved to be a win for white by force is nonsense. One reason you hear these kinds of things is that most people do not understand either the nature of computer programs or the nature of chess. The man in the street tends to think that because chess grand masters are geniuses, their play is beyond the comprehension of a computer. What they don’t understand is that when a computer plays chess, it is just performing a large number of arithmetic operations. Okay, the end result is typed out and constitutes a move in a game of chess. But the program isn’t thinking. It is just carrying out a series of instructions.
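Levy’s scale argument holds up to simple arithmetic. As a rough sketch (the figures below are commonly cited order-of-magnitude estimates, not numbers from the interview), even turning every atom in the universe into a blisteringly fast computer barely dents the game tree:

```python
# Rough, commonly cited estimates -- assumptions, not figures from the interview:
shannon_games = 10 ** 120       # Shannon's estimate of possible chess games
atoms_in_universe = 10 ** 80    # order-of-magnitude estimate

# Suppose every atom were a computer checking a billion games per second
# for the entire age of the universe (~13.8 billion years, in seconds):
age_of_universe_s = 4.4e17
games_checked = atoms_in_universe * age_of_universe_s * 1e9

# Even then, only a vanishing fraction of games is ever examined.
ratio = shannon_games / games_checked
print(f"Fraction of games examined: 1 in ~{ratio:.0e}")
```

On these assumptions the universe-as-computer still examines roughly one game in every ten trillion, which is why exhaustively “solving” chess from pawn to king four is, as Levy says, nonsense.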

Omni:

One sees some very peculiar, almost spooky moves made by computers, involving extraordinary sacrifices and almost dashing wins. Could they be just chance?

David Levy:

No. Wins like that are not chance. They are pure calculation. The best way to describe the situation is to divide the game of chess into two spheres, strategy and tactics. When I talk about tactics I mean things such as sacrifices with captures, checks, and threats on the queen or to force mate. When I talk about strategy I mean subtle maneuvering to try and gradually improve position. In the area of tactics, programs are really very powerful because of their ability to calculate deeply and accurately. Thus, where a program makes a spectacular move and forces mate two moves later, it is quite possible that the program has calculated the whole of that variation. These spectacular moves look marvelous, of course, to the spectator and to the reader of chess magazines, because they are things one only expects from strong players. In fact, they’re the easiest things for a program to do.

What is very difficult for a program is to make a really good, subtle, strategic move, because that involves long-range planning and a kind of undefinable sixth sense for what is ‘right in the position.’ This sixth sense, or instinct, is really one of the things that sorts out the men from the boys on the chessboard. The top chess programs may look at as many as two million positions every time they make a move. Chess masters, on the other hand, look at maybe fifty, so it’s evident that the nature of their thought processes, so to speak, is completely different. Perhaps the best way to put it is that the human knows what he’s doing and the computer doesn’t.

I can explain this with an example from master chess. The Russian ex-world champion Mikhail Tal was explaining after one game his reasons behind particular moves. In one position his king was in check on king’s knight one, and he had a choice between moving it to the corner or moving it nearer to the center of the board. Most players, without very much hesitation, would immediately put the king in the corner, because it’s safer there. But he rejected this move, and somebody in the audience said, ‘Please, Grand Master, can you tell us, why did you move the king to the middle of the board when everybody knows that it is safer in the corner?’ And he said, ‘Well, I thought that when we reached the sort of end game which I anticipated, it would be very important to have my king near the center of the board.’ When they reached the end game, he won it by one move, because his king was one square nearer the vital part of the board than his opponent’s. Now this was something that he couldn’t have seen through blockbusting analysis and by looking ten or even twenty moves ahead. It was just feel.

Omni:

This brings us up against the question of whether or not a computer will ever play a really great game of chess. How do you feel about I. J. Good’s suggestion that a computer could one day be world champion?

David Levy:

Well, ten years ago I would have said, ‘Nonsense.’ Now I am absolutely sure that in due course a computer will be a really outstanding and terrifyingly good world champion. It’s almost inevitable that within a decade computers will be maybe a hundred thousand or a million times faster than they are now. And with many, many computers working in parallel, one could place enormous computer resources at the disposal of chess programs. This will mean that the best players in the world will be wiped out by sheer force of computer power. Actually, from an aesthetic and also an emotional point of view, it would be very unfortunate if the program won the world championship by brute force. I would be much happier to see a world-champion program that looked at very small combinations of moves but looked at them intelligently. This would be far more meaningful, because it would mean that the programmer had mastered the technique of making computer programs ‘think’ in rather the same way that human beings do, which would be a significant advance in artificial intelligence.

Omni:

Which brings us around to the tactics you adopt when playing computers. When did you play your first game against a chess program?

David Levy:

The first one that I remember was against an early version of the Northwestern University program, and it presented no problems at all. These early programs were rather dull opponents, actually.

The latest ones, of course, are much more intelligent, particularly as they exhibit what you might also describe as psychological characteristics or even personal traits.

Omni:

Could you give an example?

David Levy:

Well, there is this thing called the horizon effect. Say a program is threatened with the loss of a knight which it does not want to lose. No matter what it does, it cannot see a way to avoid losing the knight within the horizon that it is looking at — say, four moves deep. Suddenly it spots a variation where by sacrificing a pawn it is not losing the knight anymore. It will go into this variation and sacrifice the pawn, but what it does not realize is that after it has lost the pawn, the loss of the knight is still inevitable. The pawn was merely a temporary decoy. But the program is thinking only four moves ahead and the loss of the knight has been pushed beyond its horizon of search, so it is content. Later on, when the pawn has been lost, it will see once again that the knight is threatened and it will once again try to avoid losing the knight and give up something else. By the time it finally does lose the knight, it has lost so many other things as well that it wishes it had really given up the piece at the beginning. This often brings about a feeling in the program that can best be described as ‘apathy.’ If a program gets into a position that is extremely difficult because it is absolutely bound to lose something, it starts to make moves of an apparently reckless kind. It appears to be saying, ‘Oh, damn you! You’re smashing me off the board. I don’t care anymore. I’m just going to sacrifice all my pieces.’ Actually, the program is fighting as hard as it can to avoid the inevitable.
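The horizon effect Levy describes can be reproduced with a toy depth-limited search. The tree and material values below are invented for illustration (they are not Levy’s actual position): a shallow search prefers the pawn sacrifice because the knight’s loss has been pushed past its horizon, while a deeper search sees through the decoy.

```python
# Toy game tree illustrating the horizon effect.
# Values are material from the program's point of view (negative = losing).
# "hold": the knight is simply captured within the search horizon (-3).
# "sac":  a pawn sacrifice delays the capture past the horizon, but the
#         knight falls anyway and the eventual outcome is worse (-4).
TREE = {
    "hold": {"eval": -3, "children": {}},
    "sac": {
        "eval": -1,  # at the horizon, only the pawn appears lost...
        "children": {
            "later": {"eval": -4, "children": {}},  # ...knight falls anyway
        },
    },
}

def search(node, depth):
    """Depth-limited lookahead: past the horizon, trust the static eval."""
    if depth == 0 or not node["children"]:
        return node["eval"]
    # The opponent steers toward the continuation worst for the program.
    return min(search(child, depth - 1) for child in node["children"].values())

def best_move(tree, depth):
    return max(tree, key=lambda move: search(tree[move], depth - 1))

print(best_move(TREE, depth=1))  # shallow search is fooled by the decoy
print(best_move(TREE, depth=3))  # deeper search sees the knight falls anyway
```

The shallow search picks `sac` and the deeper one picks `hold`; real engines mitigate this pathology with techniques such as quiescence search rather than deeper fixed horizons alone.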

Omni:

That sounds very much like the way beginners get obsessed with defending pieces. But it also sounds as though you’re saying that you feel the program has a mood.

David Levy:

Almost. One tends to come to regard these things as being almost human, particularly when you can see that they have understood what you are doing or you can see they are trying to do something clever. In fact, as with human beings, certain tendencies repeat themselves time and again. For example, there are definite idiosyncrasies of the Northwestern University program that one soon comes to recognize. In a particular variation of the Sicilian defense, white often has a knight on his queen four square and black often has a knight on black’s queen bishop three square. Now, it’s quite well known among stronger players that white does not exchange knights, because black can launch a counterattack along the queen-knight file. Now, I noticed quite often that when playing against the Sicilian defense, the Northwestern University program would exchange knights. Its main reason was that this maneuver leads to black having what we call an isolated pawn, which, as a general principle, is a ‘bad thing.’ So the Northwestern University program, when in doubt, used to say, ‘I’ll take his knight. And when he recaptures with the knight’s pawn, he has got an isolated rook’s pawn. Goody.’ What it didn’t realize is that in the Sicilian defense, the isolated rook’s pawn doesn’t actually matter, but having the majority of pawns in the center for black does. So when I played my first match against CHESS 4.5 in Pittsburgh, on April 1, 1977, I deliberately made an inferior move in the opening, so that the program would no longer be following its opening book and wouldn’t know what to do. I was confident that after I made this inferior move the program would exchange knights, which it did, and this presented me with the sort of position that I wanted.•

A really intelligent, though perhaps not tech-sector savvy, friend recently insisted that Google is just a company that sells ads. People who work there, I was told, shouldn’t think they’re doing anything important.

Well, no.

The Larry Page-Sergey Brin Silicon Valley megapower was born as an Artificial Intelligence company, one that just so happens to collect information online that serves its dual goals of, yes, making money from ads today, but also building the smart tools of tomorrow that can make an impact exponentially beyond savvy search results. To that end, the X division is an attempt at a latter-day Bell Labs, a highly ambitious operation dedicated to moonshots, though one that isn’t working in concert with Washington D.C. as its predecessor did.

As I’ve said in the past, if Google is mainly a search engine in the future, the company has failed and will decline into its dotage, though likely a still highly profitable one. It’s also fair to say that if the company succeeds, it will probably be a mixed blessing for society, yielding improvements that come at a cost that may be dear. That’s because Google’s far-flung ambitions are similar to its more-mundane ones in that they rely on surveilling us and pulling information from our brains. Eventually, you’ll have the implant.

Next-level research is also being earnestly conducted by Musk, Bezos, Zuckerberg and other titans of the Information Age. Absent from that list is the U.S. federal government, that lumbering giant which now during the Trump Administration is more inept and dysfunctional than at any point in modern history–maybe in our entire history. That failing of the public sphere, which isn’t adequately investing in AI research, leaves us in a prone position before tech behemoths that will have to increase their profits while building our future.

The opening of Farhad Manjoo’s perceptive New York Times column on the government ceding AI to Silicon Valley:

One persistent criticism of Silicon Valley is that it no longer works on big, world-changing ideas. Every few months, a dumb start-up will make the news — most recently the one selling a $700 juicer — and folks outside the tech industry will begin singing I-told-you-sos.

But don’t be fooled by expensive juice. The idea that Silicon Valley no longer funds big things isn’t just wrong, but also obtuse and fairly dangerous. Look at the cars, the rockets, the internet-beaming balloons and gliders, the voice assistants, drones, augmented and virtual reality devices, and every permutation of artificial intelligence you’ve ever encountered in sci-fi. Technology companies aren’t just funding big things — they are funding the biggest, most world-changing things. They are spending on ideas that, years from now, we may come to see as having altered life for much of the planet.

At the same time, the American government’s appetite for funding big things — for scientific research and out-of-this-world technology and infrastructure programs — keeps falling, and it may decline further under President Trump.

This sets up a looming complication: Technology giants, not the government, are building the artificially intelligent future. And unless the government vastly increases how much it spends on research into such technologies, it is the corporations that will decide how to deploy them.•

Tags:

From the June 5, 1888 Brooklyn Daily Eagle:

In his latest Medium piece, Matt Chessen writes about a near-term scenario in which machine-driven communications (MADCOMS), essentially indistinguishable from human communications, will dominate social media with the aid of AI, influencing the thoughts of all those carbon beings who come into contact with it. Extrapolating this brave new world a little further, he envisions different political and cultural factions waging wars for hearts and minds via an onslaught of machine-based messaging. It will be, perhaps, like the elections of 2016 to the nth degree.

God help us all. 

Unlike other technological innovations, which usually are a mix of boon and bane, it’s hard to see much good being delivered by such a framework. The downside, of course, is enormous.

Chessen knows his vision of tomorrow is incredibly fraught, asking: “Will this be the new Renaissance, or the next Inquisition?” Almost definitely, a realization of his prediction will provoke the latter.

The opening:

We’re on the verge of a revolution — very soon, computers are going to start programming us, through ideas, culture, and eventually, our DNA.

We may have no idea this is happening to us.

To understand this, you really should start with the article “Artificial intelligence chatbots will overwhelm human speech online; the rise of MADCOMs.” There, I explain how emerging AI technologies will enable machine-driven communication tools (MADCOMs) that dynamically generate content for marketing, influence, politics, and manipulation. These MADCOMs will be running influence campaigns 24/7/365 all across the social web. But since the MADCOMs won’t be able to differentiate the human accounts from the machine-driven accounts, MADCOMs will run information ops on machines and people. The machines will talk back and run their own influence campaigns. The end result is the Internet being swamped by machines talking to other machines.

Much of this content will be dynamically generated. Sure, humans will configure the AI tools and give them objectives, but their content will evolve based on machine learning. And as they communicate and influence other machine-driven accounts, the MADCOMs behind them will evolve their content as well.

The end result could be machines becoming the driving force in our culture.

AIs are already creating news articles, novels, music and screenplays. Soon they will create memes, write jokes, drive political conversations, and promote celebrities. They will probably be jabbering away on Reddit and 4Chan, trying to convince humans that Coke is the real thing or that 9/11 was a coverup. They will be spinning all sorts of wild tales.

And in doing so, our creations will be programming us, through culture.•

Tags:

Despite the robot apocalypse we’ve been promised, statistics don’t show an increase in productivity or decrease in employment. Many of the jobs recently created have been lesser ones, but even wages have shown some rise at times over the last year. Perhaps the decline of the American middle class over the last 50 years has been largely a political result rather than a technological one? It would be tough to convince people living in former manufacturing strongholds, but it may be so.

Three possible reasons the numbers don’t reveal a coming widespread technological unemployment:

  1. The numbers aren’t able to accurately capture the new automated economy. Doubtful.
  2. Automation may be overhyped for the moment the way computers or the Internet or smartphones originally were, but soon enough it will make a dent on society that will be felt deeply. Possible.
  3. The impact of automation will be gradual and manageable, improving society while not creating what Yuval Harari indelicately describes as a “useless class.” Possible.

In a Rough Type post, Nicholas Carr thinks machines may be depressing wages but have otherwise been overstated. An excerpt:

I’m convinced that computer automation is changing the way people work, often in profound ways, and I think it’s likely that automation is playing an important role in restraining wage growth by, among other things, deskilling certain occupations and reducing the bargaining power of workers. But the argument that computers are going to bring extreme unemployment in coming decades — an argument that was also popular in both the 1950s and the 1990s, it’s worth remembering — sounds increasingly dubious. It runs counter to the facts. Anyone making the argument today needs to provide a lucid and rational explanation of why, despite years of rapid advances in robotics, computer power, network connectivity, and artificial intelligence techniques, we have yet to see any sign of a broad loss of jobs in the economy.•

Tags:

The Singularitarians’ time frames are largely risible, but attention should be paid to their goals. What they suggest is often only a furthering of what we already have. Looking at their predictions for tomorrow can tell us something about today.

For better or worse, humans are more united by technology than they ever have been before, and for some this is merely prelude. A new Futurism article looks at Peter Diamandis’ dream of “meta-intelligence,” which would require far more radical person-to-person connectedness as well as humans being tethered brain to cloud. His overly ambitious ETA may prove false, but paramount concerns about such an arrangement go far beyond hacking and privacy. Marshall McLuhan dreaded the Global Village he predicted, believing it could be our downfall.

Gary Wolf wrote in Wired in 1996:

McLuhan did not want to live in the global village. The prospect frightened him. Print culture had produced rational man, in whom vision was the dominant sense. Print man lived in a world that was secular rather than sacred, specialized rather than holistic.

But when information travels at electronic speeds, the linear clarity of the print age is replaced by a feeling of “all-at-onceness.” Everything everywhere happens simultaneously. There is no clear order or sequence. This sudden collapse of space into a single unified field ‘dethrones the visual sense.’ This is what the global village means: we are all within reach of a single voice or the sound of tribal drums. For McLuhan, this future held a profound risk of mass terror and sudden panic.•

Print has certainly been eclipsed, and the Internet and its social media have presented specific outsize problems even a visionary could never have seen coming. These tools can help topple regimes, and all the closeness has allowed those with good or evil intentions to pool their resources and mobilize.

Garry Kasparov is relatively hopeful about what this new normal means for us, but would his arch-nemesis Vladimir Putin have been able to affect the U.S. election without the wires that now run through us all? We have to accept at least the possibility that a highly technological society will be an endlessly chaotic one.

From Futurism:

CHANGE IS COMING

Diamandis outlines the next stages of humanity’s evolution in four steps, each a parallel to his four evolutionary stages of life on Earth. There are four driving forces behind this evolution: our interconnected or wired world, the emergence of brain-computer interface (BCI), the emergence of artificial intelligence (AI), and man reaching for the final frontier of space.

In the next 30 years, humanity will move from the first stage—where we are today—to the fourth stage. From simple humans dependent on one another, humanity will incorporate technology into our bodies to allow for more efficient use of information and energy. This is already happening today.

The third stage is a crucial point.

Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.•

Tags: ,

From the December 13, 1936 Brooklyn Daily Eagle:

Tags: ,

As we build a society that resembles a machine, we can’t assume it will be one of loving grace.

I don’t subscribe to John Markoff’s idea that we can coolly decide the path forward. These decisions will be made in the heat of battle–state versus state, corporation versus corporation. Nor am I completely deterministic about the outcome. Miracles will intermingle with malice, and constant attention and intervention will be required to mitigate the latter. 

As today’s widespread, pernicious ransomware attack on countries across Europe and Asia reminds us, a world in which everything is a computer–even our brains–is a fraught one.

The opening of a New York Times article by Dan Bilefsky and Nicole Perlroth:

LONDON — An extensive cyberattack struck computers across a wide swath of Europe and Asia on Friday, and strained the public health system in Britain, where doctors were blocked from patient files and emergency rooms were forced to divert patients.

The attack involved ransomware, a kind of malware that encrypts data and locks out the user. According to security experts, it exploited a vulnerability that was discovered and developed by the National Security Agency.

The hacking tool was leaked by a group calling itself the Shadow Brokers, which has been dumping stolen N.S.A. hacking tools online beginning last year. Microsoft rolled out a patch for the vulnerability last March, but hackers took advantage of the fact that vulnerable targets — particularly hospitals — had yet to update their systems.

The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets.

By then, it was already too late. As the disruptions rippled through hospitals, doctors’ offices and ambulance companies across Britain on Friday, the health service declared the attack a “major incident,” a warning that local health services could be overwhelmed by patients.•

Tags: ,

Intelligent doesn’t necessarily mean good, in humans or machines.

I doubt I’ve come across any public figure who’s read more books than Tyler Cowen, yet in the country’s darkest hour, he’s pulled his punches with his fellow Libertarian Peter Thiel, who’s behaved abysmally, dangerously, in his ardent Trump support. The Administration, a gutter-level racist group, has apparently allowed Russian espionage to snake its way into the U.S. and is working in earnest to undo American democracy, to put itself beyond the reach of the law. Those who’ve gone easy on its enablers are complicit.

Maybe the machines will behave more morally than us when they’ve turned away from our lessons to teach themselves? Maybe less so?

· · · 

The pro-seasteading economist just interviewed Garry Kasparov, whose new book, Deep Thinking, I’m currently reading. Likely history’s greatest chess player, the Russian was turned deep blue by IBM during the interval between Cold Wars, when he could conjure no defense for the brute force of his algorithmically advantaged opponent.

Initially, Kasparov was too skeptical, too weighed down by human ego, to fully appreciate the powers of computers, but sometimes those who’ve most fiercely resisted religion become the most ardent believers, redirecting their fervent denial into a passionate embrace. That’s where Kasparov seems to be now in his unbridled appreciation for what machines will soon do for us, though I can comment more once I’ve completed his book.

He’s certainly right that much of what will happen with AI over the course of this century is inevitable given the way technologies evolve and the nature of human psychology. With those developments, we’ll enjoy many benefits, but with all progress comes regress, a situation heightened as the tools become more powerful. It’s clear to me that we’re not merely building machines to aid us but permanently placing ourselves inside of one with no OFF switch.

An excerpt:

Tyler Cowen:

A lot of humans don’t play chess, but we’re looking at a future where AI will make decisions about who gets a monetary loan, who is diagnosed as being schizophrenic or bipolar. How cars drive on the road increasingly is controlled by software.

The fact that the decisions of the software are not so transparent — and you see this also in computer chess — how will ordinary human beings respond to the fact that more and more of their lives will be “controlled” by these nontransparent processes that are too smart for them to understand? Because in your book, you have emotional conflict with Deep Blue, right?

Garry Kasparov:

Exactly. I’m telling you that it’s inevitable. There are certain things that are happening, and it’s called progress. This is the history of human civilization. The whole history is a steady process of replacing all forms of labor by machines. It started with machines replacing farm animals and then manual laborers, and it kept growing and growing and growing.

There was a time I mentioned in the book, people didn’t trust elevators without operators. They thought it would be too dangerous. It took a major strike in the city of New York that was equal to a major disaster. You had to climb the Empire State Building with paralyzed elevators.

I understand that today, people are concerned about self-driving cars, absolutely. But now let us imagine that there was a time, I’m sure, people were really concerned, they were scared stiff of autopilots. Now, I think if you tell them that autopilot’s not working in the plane, they will not fly because they understand that, in the big numbers, these decisions are still more qualitative.

While I understand also the fear of people who might be losing jobs, and they could see that machines are threatening their traditional livelihood, but at the same time, even these people whose jobs are on chopping block of automation, they also depend on the new wave of technology to generate economic growth and to create sustainable new jobs.

This is a cycle. The only difference with what we have been seeing throughout human history is that now, machines are coming after people with college degrees, political influence, and Twitter accounts.•


Smarter, stronger and healthier are just a few of the advantages bioengineering and informatic machines will deliver to us, likely sometime this century. By then, our hands will have taken control of evolution, and our heads will be in the cloud. These miracle tools will also be attended by a raft of ethical issues and unintended consequences.

In an excellent Vox Q&A conducted by Sean Illing, Michael Bess, author of Our Grandchildren Redesigned, says the ETA for this brave new world is 2050 or so. He fears the possibility of a whole new level of wealth inequality, but he doesn’t think we should be overly deterministic about the effects of these technologies, arguing we can consciously direct their course despite not really having the time to get ahead of the onrushing problems.

In a perfectly flat world, sure. In a globe filled with competing states and corporations and groups and individuals, however, there will be no consensus. Some actors will push the envelope, hoping for an edge, and others may react in kind. This dynamic will be especially true since the machinery and materials won’t be rare, expensive and closely held, like in the case of nuclear weaponry. As Freeman Dyson has written: “These games will be messy and possibly dangerous.”

An excerpt:

Sean Illing:

And this revolution in biotechnology, in the ability to tinker with the human genome and alter our own biology, is coming whether we want it to or not, right?

Michael Bess:

It is, but I’m always careful about saying that, because I don’t want to fall into technological determinism. Some of the writers like Ray Kurzweil, the American inventor and futurist, have tended to do that. They say it’s coming whether we like it or not, and we need to adapt ourselves to it.

But I don’t see technology that way, and I think most historians of technology don’t see it that way either. They see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

And I think that is important to emphasize — that we still have agency. We may not be able to stop the river from flowing, but we can channel it down pathways that are more or less aligned with our values. I think that’s a very important point to make when we talk about this.

What’s happening is bigger than any one of us, but as we communicate with each other, we can assert our values and shape it as it unfolds over time, and channel it on a course that we’d prefer.

Sean Illing:

Whatever shape it does take, we’re not talking about some distant future here — we’re talking about the middle years of this century, right?

Michael Bess:

Absolutely.

Sean Illing:

How will human life improve as a result of this revolution?

Michael Bess:

I think it’s going to improve in countless ways. These are going to be technologies that are hard to resist because they’re going to be so awesome. They’re going to make us live longer, healthier lives, and they’re going to make us feel younger.

So some of the scientists and doctors are talking about rejuvenation technologies so that people can live — have a longer, not only life span, but health span — which would mean that you could be 100 years old but feel like a 45-year-old, and your mind and body would still be young and vigorous and clear. So one aspect has to do with just quality of basic health and having that for a longer period of time.

Some of these chemicals — maybe some of the new bioelectronic devices — will allow us to improve our cognitive capacities. So we’ll be able to have probably augmented memory, maybe greater insight, maybe we’ll be able to boost some of the analytical functions that we have with our minds. And, in other words, sort of in a broad-spectrum way, make ourselves smarter than we have tended to be.

There will also be a tendency for us to merge our daily lives, our daily activities, ever more seamlessly with informatic machines. It’s science fiction now to talk about Google being accessible by thought, but that’s not as farfetched as many people think. In 30 or 40 years, it’s possible to envision brain-machine interfaces that you can wear, maybe fitted to the outside of your skull in a sort of nonintrusive way, that’ll allow you to connect directly with all kinds of machines and control them at a distance, so your sphere of power over the world around you could be greatly expanded.

And then there’s genetic technologies. I imagine that some of them will be a resistance to cancer — or perhaps to certain forms of cancer — that could be engineered into our DNA at the time of conception. What’s more exciting to me is going beyond the whole concept of designer babies and this whole new field of epigenetics that is coming out.

What I see there as a possibility is that you’ll be able to tinker with the genetic component of what makes us who we are at any point in your life. One of the most awful aspects of designer babies is somebody’s shaping you before you’re born — there’s a loss of autonomy that’s deeply morally troubling to many people. But if you’re 21 years old and you decide, okay, now I’m going to inform myself and make these choices very thoughtfully, and I’m going to shape the genetic component of my being in precise, targeted ways.

The way it’s looking with epigenetics is we’re going to have tools that allow us to modify our character, the way our body works, the way our mental processes work, in very profound ways at any point in our lives, so we become a genetic work in progress.

Sean Illing:

What you’re describing is utterly transformative, and in many ways terrifying.•


Epochs pass, cultures rise and fall, but if they do so between a telephone call and the reply, they can cause a shock to the systems of individuals and societies that is difficult to withstand.

Despite the racist scapegoating of the recent Presidential election, most of the jobs that have disappeared from Middle America’s manufacturing sector vanished into the zeros and ones of automation rather than through offshoring. Many have puzzled over why this transition hasn’t resulted in a productivity spike. Is there not enough demand because of the decline of wages? Is there another inscrutable reason?

Tough to say, but while economists are working out the fine points, more jobs, and even industries, will be placed in robotic hands, and the pace of the changeover will quicken as the tools become more powerful. If the process happens too rapidly, however, the driverless cars will handle smoothly but our ride will be bumpy. In an Atlantic article by Alana Semuels about the regions of America most likely to be upended by algorithms in the near term, there’s this harrowing passage:

Previously, automation had hurt middle-class jobs such as those in manufacturing. Now, it’s coming for the lower-income jobs. When those jobs disappear, an entire group of less-educated workers who already weren’t making very much money will be out of work. [Johannes] Moenius worries about the possibility of entire regions in which low earners are competing for increasingly scarce jobs. “I wasn’t in L.A. when the riots happened, but are we worried about this from a social perspective?” he said. “Not for tomorrow, but for 10 years from now? It’s quite frankly frightening.”•

That’s a particularly dystopic view, and maybe technological progress will be slower than expected, but sooner or later, we’ll be forced to change our focus as we’re relieved of our traditional duties. As Kevin Kelly says: “We’re constantly redefining what humans are here for.”

In a clever Guardian essay, Yuval Noah Harari wonders about the future of the post-work “useless class.” In the piece, the historian tries to divine what we’ll be using our wetware for should intelligent machines permanently displace a wide swath of the citizenry. He believes we’ll subsist on Universal Basic Income and occupy ourselves playing video games enhanced by VR and AR. An endless, mass-participation version of Pokémon Go would be, god forbid, the new religion, though Harari is contrarian in believing it won’t be much different from the life we already know.

Hundreds of millions already spend countless, unpaid hours creating free content for Facebook, so I suppose his vision is possible if not plausible. Either way, let’s hope tomorrow will involve more than Taylor Swift and an Oculus Rift.

The opening:

Most jobs that exist today might disappear within decades. As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs. Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if the ex-insurance agent somehow makes the transition into a virtual-world designer, the pace of progress is such that within another decade he might have to reinvent himself yet again.

The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.

The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income. The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?

One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions.”

What is a religion if not a big virtual reality game played by millions of people together?•


Exactly a century ago, people were treated to an early glimpse of what would eventually become the changeover from the Industrial Era to the Information Age when Marcel Duchamp took a crude if useful manufactured fixture of the age (the urinal) and reinvented its meaning simply by presentation. All he added was an idea, pure information. Nothing had changed but perspective, which, of course, can be everything. It was artful, and it was art.

Now that we exist in a data-rich world and are constantly lowering ourselves deeper and deeper into the machine, our emotions, a key component of the artistic experience, are increasingly being played by social networks and search engines. In a Bloomberg View essay, historian Yuval Noah Harari considers a time when data, rather than human inspiration, will inform art. He believes biometrics and algorithms will combine to read our moods and feed us music, which will eventually be composed by computers.

An excerpt:

If art is defined by human emotions, what might happen once external algorithms are able to understand and manipulate human emotions better than Shakespeare, Picasso or Lennon? After all, emotions are not some mystical phenomenon — they are a biochemical process. Hence, given enough biometric data and enough computing power, it might be possible to hack love, hate, boredom and joy.

In the not-too-distant future, a machine-learning algorithm could analyze the biometric data streaming from sensors on and inside your body, determine your personality type and your changing moods, and calculate the emotional impact that a particular song — or even a particular musical key — is likely to have on you.

Of all forms of art, music is probably the most susceptible to Big Data analysis, because both inputs and outputs lend themselves to mathematical depiction. The inputs are the mathematical patterns of soundwaves, and the outputs are the electrochemical patterns of neural storms. Allow a learning machine to go over millions of musical experiences, and it will learn how particular inputs result in particular outputs.  

Suppose you just had a nasty fight with your boyfriend. The algorithm in charge of your sound system will immediately discern your inner emotional turmoil, and based on what it knows about you personally and about human psychology in general, it will play songs tailored to resonate with your gloom and echo your distress. These particular songs might not work well with other people, but are just perfect for your personality type. After helping you get in touch with the depths of your sadness, the algorithm would then play the one song in the world that is likely to cheer you up — perhaps because your subconscious connects it with a happy childhood memory that even you are not aware of. No human DJ could ever hope to match the skills of such an AI.•


Like Steve Jobs during his walkabout between stints as Apple’s visionary, Google’s Larry Page grew as a businessperson in the years he spent in the shadow of Eric Schmidt, the CEO whom investors forced him to hire as “adult supervision.” Although Page still has none of the late Apple co-founder’s charisma and communication skills, that social shortcoming might be a blessing in some ways, since his vision of a future automated enough to satisfy Italo Balbo might give many pause, despite Page’s seemingly good intentions.

In 2013, he expressed his desire to partition some land to be used for potentially dangerous experiments that would otherwise be illegal. Like Burning Man with robots or flying cars or something. The Verge reported the technologist as saying:

There are many exciting things you could do that are illegal or not allowed by regulation. And that’s good, we don’t want to change the world. But maybe we can set aside a part of the world…some safe places where we can try things and not have to deploy to the entire world.•

His vision falls in line with H.G. Wells’ definition of Utopia as a place that would separate pristine living spaces from the despoiled, industrialized areas that would be exploited to support them. 

It’s not a particularly honest or self-aware argument, however, because Google and other Silicon Valley superpowers conduct experiments every day on the general public in regard to widespread surveillance, psychological manipulation and communications, all of which may be antithetical to stable democracies, the Internet being what Schmidt himself termed, that same year, the “largest experiment involving anarchy in history.” Indeed.

According to a StateScoop article by Jake Williams, Google is moving forward with its plans to build Experiment City. Whatever explosions may occur in Page’s testropolis, they will likely be less dangerous than the eruptions the company is enabling every day in our “pristine” world.

An excerpt:

Sidewalk Labs, which Google kickstarted almost two years ago, may soon develop a “large-scale district” to serve as a living laboratory for urban innovation technologies, Dan Doctoroff, founder and CEO of the company, said at the Smart Cities NYC conference Thursday.

The company is having conversations now with city leaders across the country, Doctoroff said. While nothing is final, Sidewalk Labs could hold a competition — similar to the one held by the U.S. Department of Transportation last year — to spur excitement from leaders who want to make their cities smarter, while also providing a national model for what the cities of tomorrow look like.

“The future of cities lies in the way these urban experiences fit together and improve quality of life for everyone living, working and growing up in cities across the world,” Doctoroff said. “Yet there is not a single city today that can stand as a model — or even close — for our urban future.”

This city would be “built from the internet up,” Doctoroff said, and would test the theories and models that the company has asserted since its creation. …

While all these ideas include technology, the concept is about more than just connecting the physical space with sensors, internet and data, he said — it’s about making an impact on society.

“I’m sure many of you are thinking this is a crazy idea: building a city new — the most innovative, urban district in the world, something at scale that can actually have the catalytic impact among cities around the world,” Doctoroff said. “We don’t think it’s crazy at all. People thought it was crazy when Google decided to connect all the world’s information, people thought it was crazy to think about the concept of a self-driving car.”•


In 1989, six years before her murder, Madalyn Murray O’Hair, the Carrie Nation of holy water, was profiled by Lawrence Wright, then of Texas Monthly. The outrageously quotable, oft-jailed atheist activist was no doubt a welcome assignment for a budding journalistic talent like Wright, who visited her Austin offices a quarter century after her strident efforts had removed compulsory prayer from American public schools.

In the twilight of the Reagan years, O’Hair thought the country was headed toward a Neo-fascism enabled by a confluence of plutocracy, technology and religion. In retrospect, not a bad prediction.

An excerpt from the Texas Monthly piece is followed by some other articles and videos about her.


From Wright:

As with most Americans my age, my life already had been given a good shaking by Madalyn Murray O’Hair. For the first ten years of my schooling, I listened to prayers and Scripture every morning following the announcements on the P.A. system. I don’t recall ever questioning the propriety of such action or wondering what my Jewish classmates, for instance, might think about hearing Christian prayers in public school. But in the fateful fall of 1963 we began classes amid the enormous hubbub that followed the Supreme Court decision. The absence of morning prayers was widely seen as a prelude to the fall of the West. And the woman who had toppled civilization as we knew it was some loudmouthed Baltimore housewife—that was my impression—who then proceeded to wage another legal campaign to tax church property. She was the first person I had ever heard called a heretic. She jumped out of the front pages with one outrageous statement after another; indeed, the era of dissent in the sixties really began with Madalyn Murray, who styled herself as the “most hated woman in America.”

Certainly she was the most provocative. Soon after the school-prayer decision, Mrs. Murray, as she called herself then, was charged with assaulting 10 Baltimore policemen (she has inflated the number of policemen to 14, then 22, and then 26). She fled first to Hawaii, where she took refuge in a Unitarian church. Then she went to Mexico, which summarily deported her to Texas in 1965. Her odyssey ended in Austin, where she successfully fought extradition to Maryland, married an ex-FBI informer named Richard O’Hair, and remained long after the Maryland charges were dropped.

Over the years I followed Madalyn O’Hair in the way one keeps tabs on celebrities, as she bantered with Johnny Carson, sued the pope, or burst into a church and turned over bingo tables. When I was in college, she came to speak. By then she had achieved a kind of sainthood status with the undergraduate intelligentsia. True to her billing, she raked over capitalism and Christianity and especially Catholicism, unsettling if not actually insulting every person in the auditorium. Afterward she repaired to the student center and held forth in the lobby, giving an explicit and highly titillating seminar on the variations of sexual intercourse. I had never seen anyone with such a breathtaking willingness to endure public hatred. “I love a good fight,” she boasted to the press. “I guess fighting God and God’s spokesmen is sort of the ultimate, isn’t it?”

Neutrality is never present around Madalyn O’Hair; she polarizes everyone. …

“I do think we’re in a steady retreat. There’s an absolute steady retreat into what I call a neofascism—but it’s really old-time fascism—into a robber-baron society and a religiously dominated society, and that’s not cyclical, because they have new weapons at hand now, mainly communications technology with which they can rapidly disperse ideas…”•


The atheist crusader was right that children should not be forced to pray in public school, but that doesn’t mean she was an ideal parent. O’Hair had dissent in her family that she would not brook: Her eldest son, William, became a religious and social conservative in 1980. His mother, showing characteristic outrage, labeled him a “postnatal abortion” and cut off all communication. From a 1980 People article about the familial rift:

He traces her atheism to that self-absorption and hubris and to an aggressive antiestablishment streak that led her (with her two sons) into a variety of left-wing causes—even, he claims, to the Soviet embassy in Paris in search of exile. Rejected by Moscow, she retreated angrily back home to Baltimore where, as he puts it, “The rebel found a cause in prayer at school.”

As the pawn of her crusade, Bill was excoriated by fellow students, given extra homework by his teachers and baited into schoolyard fights; once, he remembers, some classmates tried to push him in front of a bus. “While Madalyn was busy with her rhetoric, newsletters, fund raising and publicity,” he says, “I was fighting for my life.” At 17, Murray ran afoul of the law. He eloped with a girl despite an injunction won by her parents that prohibited him from seeing her. Police intervened, and both Bill and his mother were charged with assaulting them. (The young woman left Bill and their infant daughter two years later.) 

Throughout Bill’s life his mother’s reputation has been a millstone. Drafted a year after his marriage broke up, he was subjected to grueling Army interrogation about Madalyn’s activist causes—and asked to sign a statement repudiating her left-wing politics (he did). After discharge he took a series of jobs in airline management and remembers living in fear that his employers would find out who his mother was and fire him. He complains she even threatened to expose him herself when he balked at giving her discounted airplane tickets that were due him as an employee. 

In 1969 he asked Madalyn for his daughter, whom she had kept while he was in the Army. She refused, they fought a custody suit and Madalyn won. Still, in 1974, when her second husband was ailing and the AAC foundering, Bill agreed to come to Austin and help out. He did so with great success—and increasing doubts. He multiplied the AAC’s annual income, which underwrote a flurry of new lawsuits—over church tax exemptions, the words “under God” in the Pledge of Allegiance and “In God We Trust” on coins. But Bill says he began to wonder: “Why couldn’t we buy a new X-ray machine for a hospital? Why did we have to buy a new Cadillac and mobile home for Madalyn, or sue somebody to prevent prayer in outer space? I started to think it was because my mother was basically negative and destructive.” He began to drink too much—“diving into the bottle to forget,” as he describes it. Six months after he came to Austin, Madalyn turned her animus on him once too often. “I told her to get f——-,” he recalls, “and got the hell out.”

By that time Bill was an alcoholic. He had a new marriage and a new job as an airline management consultant, but felt his life was falling apart.”•


From the 1965 Playboy interview with the “most hated woman in America”:

Playboy:

What led you to become an atheist?

Madalyn Murray O’Hair:

Well, it started when I was very young. People attain the age of intellectual discretion at different times in their lives — sometimes a little early and sometimes a little late. I was about 12 or 13 years old when I reached this period. It was then that I was introduced to the Bible. We were living in Akron and I wasn’t able to get to the library, so I had two things to read at home: a dictionary and a Bible. Well, I picked up the Bible and read it from cover to cover one weekend — just as if it were a novel — very rapidly, and I’ve never gotten over the shock of it. The miracles, the inconsistencies, the improbabilities, the impossibilities, the wretched history, the sordid sex, the sadism in it — the whole thing shocked me profoundly. I remember I looked in the kitchen at my mother and father and I thought: Can they really believe in all that? Of course, this was a superficial survey by a very young girl, but it left a traumatic impression. Later, when I started going to church, my first memories are of the minister getting up and accusing us of being full of sin, though he didn’t say why; then they would pass the collection plate, and I got it in my mind that this had to do with purification of the soul, that we were being invited to buy expiation from our sins. So I gave it all up. It was too nonsensical.•


A 30-minute documentary about O’Hair, and a 1970 Donahue episode in which she debated Rev. Bob Harrington (voice and picture not properly synced).


The moon landing was supposed to be our greatest triumph, Homo sapiens having made the giant leap from living in cave systems to conquering the solar system, but as Norman Mailer wrote presciently at the time: “Space travel proposed a future world of brains attached to wires.” The macho author knew machine intelligence had won, and boxing matches, bullfights and other human struggles were crude pantomimes compared to a space odyssey. Even Mailer’s ample intelligence and elephantine ego, however, couldn’t have known how right he was.

He further wrote:

He had no intimations of what was to come, and that was conceivably worse than any sentiment of dread, for a sense of the future, no matter how melancholy, was preferable to none–it spoke of some sense of the continuation in the projects of one’s life. He was adrift. If he tried to conceive of a likely perspective in the decade before him, he saw not one structure to society but two: if the social world did not break down into revolutions and counterrevolutions, into police and military rules of order with sabotage, guerrilla war and enclaves of resistance, if none of this occurred, then there certainly would be a society of reason, but its reason would be the logic of the computer. In that society, legally accepted drugs would become necessary for accelerated cerebration, there would be inchings toward nuclear installation, a monotony of architectures, a pollution of nature which would arouse technologies of decontamination odious as deodorants, and transplanted hearts monitored like spaceships–the patients might be obliged to live in a compound reminiscent of a Mission Control Center where technicians could monitor on consoles the beatings of a thousand transplanted hearts. But in the society of computer-logic, the atmosphere would obviously be plastic, air-conditioned, sealed in bubble-domes below the smog, a prelude to living on space stations. People would die in such societies like fish expiring on a vinyl floor.•

Okay, fish on a vinyl floor may be melodramatic, but Elon Musk and others want to go much further than accelerating cerebration via pills, aiming, with Neuralink, to implant electrodes in our brains in order to link us directly to the cloud. Musk thinks “we need brain-computers to avoid becoming ‘house cats’ to artificial intelligence.”

Hmm, that’s an odd way to add it all up. Becoming a computer (to a good degree) in order to avert the dominance of computers is sort of like killing yourself to prevent death.

It’s very possible that tomorrow’s challenges may require such drastic measures for our species, but let’s not pretend we’re maintaining humanity when we’re drastically altering it.

From Christopher Markou at The Conversation:

Depending on who you ask, the human story generally goes like this. First, we discovered fire and developed oral language. We turned oral language into writing, and eventually we found a way to turn it into mechanised printing. After a few centuries, we happened upon this thing called electricity, which gave rise to telephones, radios, TVs and eventually personal computers, smart phones – and ultimately the Juicero.

Over time, phones lost their cords, computers shrunk in size and we figured out ways to make them exponentially more powerful and portable enough to fit in pockets. Eventually, we created virtual realities, and melded our sensate reality with an augmented one.

But if Neuralink were to achieve its goal, it’s hard to predict how this story plays out. The result would be a “whole-brain interface” so complete, frictionless, bio-compatible and powerful that it would feel to users like just another part of their cerebral cortex, limbic and central nervous systems.

A whole-brain interface would give your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone who has a similar interface in their head. This flow of information between your brain and the outside world would be so easy it would feel the same as your thoughts do right now.

But if that sounds extraordinary, so are the potential problems.•


Nations that embraced the Industrial Age became far wealthier, but there were considerable hidden costs. The environmental damage has been profound, and we’ve been unable thus far to wean ourselves from the substances that could mark our doom. It may be that what we’re experiencing is a slowly unfolding Pyrrhic victory.

The Digital Age is even more challenging since the tools are more powerful. Robots will make us richer financially, but distribution won’t be easy since industries will rise and fall rapidly. For instance, compact discs were the most profitable medium in music history until, suddenly, they were valueless. It may not quite be Freeman Dyson’s more long-term outlook that “whole epochs will pass, cultures rise and fall, between a telephone call and the reply,” but it will be increasingly jarring nonetheless. And that’s not even considering the other thorny aspects of a more algorithmic age, including endless surveillance with no opt-out button.

In a really insightful NYT piece by Daisuke Wakabayashi, a quintet of American workers who are training AI to complement them–replace them?–discuss the process. They seem mostly in denial, as Garry Kasparov was in 1989 when he said that he couldn’t conceive of a time when a “computer is stronger than the human mind.” Of course, AI doesn’t have to play by the rules of our gray matter to win, and in many non-dangerous fields, it doesn’t even have to be as good as humans to consume jobs. If it’s almost there and far cheaper, the transition will happen.

Embracing intelligent machines will, as the Industrial Revolution did, make us wealthier in the aggregate, but the path to a just society under this new normal will be daunting.

An excerpt:

‘It made me feel competitive’

Rachel Neasham, travel agent

Ms. Neasham, one of 20 (human) agents at the Boston-based travel booking app Lola, knew that the company’s artificial intelligence computer system — its name is Harrison — would eventually take over parts of her job. Still, there was soul-searching when it was decided that Harrison would actually start recommending and booking hotels.
At an employee meeting late last year, the agents debated what it meant to be human, and what a human travel agent could do that a machine couldn’t. While Harrison could comb through dozens of hotel options in a blink, it couldn’t match the expertise of, for example, a human agent with years of experience booking family vacations to Disney World. The human can be more nimble — knowing, for instance, to advise a family that hopes to score an unobstructed photo with the children in front of the Cinderella Castle that they should book a breakfast reservation inside the park, before the gates open.

Ms. Neasham, 30, saw it as a race: Can human agents find new ways to be valuable as quickly as the A.I. improves at handling parts of their job? “It made me feel competitive, that I need to keep up and stay ahead of the A.I.,” Ms. Neasham said. On the other hand, she said, using Harrison to do some things “frees me up to do something creative.” …

Lola was set up so that agents like Ms. Neasham didn’t interact with the A.I. much, but it was watching and learning from every customer interaction. Over time, Lola discovered that Harrison wasn’t quite ready to take over communication with customers, but it had a knack for making lightning-fast hotel recommendations.

At first, Harrison would recommend hotels based on obvious customer preferences, like brands associated with loyalty programs. But then it started to find preferences that even the customers didn’t realize they had. Some people, for example, preferred a hotel on the corner of a street versus midblock.

And in a coming software change, Lola will ask lifestyle questions like “Do you use Snapchat?” to glean clues about hotel preferences. Snapchat users tend to be younger and may prefer modern but inexpensive hotels over more established brands like the Ritz-Carlton.

While Harrison may make the reservations, the human agents support customers during the trip. Once the room is booked, the humans, for example, can call the hotel to try to get room upgrades or recommend how to get the most out of a vacation.

“That’s something A.I. can’t do,” Ms. Neasham said.•

