Science/Tech

At his Financial Times blog, Andrew McAfee talks about the plunging “red line” of labor’s share of earnings, which keeps accelerating in the wrong direction as automation permeates the workplace. Robotics, algorithms and AI will make companies more profitable and shareholders richer, but job seekers (and holders) will suffer. And going Luddite will help no one in the long run. An excerpt:

“I expect the red line to continue to fall as robots, artificial intelligence, 3D printing, autonomous vehicles, and the many other technologies that until recently were the stuff of science fiction, permeate industry after industry. Policies intended to keep these advances out of a country might halt the decline of the labour share for a while, but they’d also halt competitiveness pretty quickly, thus leaving both capital and labour worse off.

I think the continued decline of the labour share, brought on by tech progress, will be a central dynamic, if not the central dynamic, of the world’s economies and societies in the 21st century. It’s a story much more about what will happen to the livelihood of the 50th percentile worker than to the wealth of the 1 per cent. And a much more important story.”

In a Reddit AMA, philosopher A.C. Grayling has an interesting take on the Transhumanist Age, believing we might already be living in its embryonic stage. An excerpt:

Question:

Was wondering your thoughts on Transhumanism? At what point do we cease being human when we start giving ourselves synthetic upgrades?

AC Grayling:

Interesting question! – but perhaps we are already “trans” with respect to our forebears, given the way we modify ourselves, survive and flourish as a result of surgery, antibiotics, the survival and reproduction of people who in earlier times would have died in childhood, medical prostheses, airplanes and electronic communications. Of course you mean (say) brain implants and intelligence-enhancing drugs, in vitro selection of superior genetic endowments, electronic replacements for organs and muscles…well, at a certain point we will have crossed a grey area between human beings as we now know them, and something more electronic or genetically modified than that: and perhaps those future beings will have a clearer grasp (because they will be far smarter!) than we do about where the line lay.•

Novelist William Gibson has always seemed to exist in two moments at once, ours and the one about to occur. He comes by the duality naturally, having been raised with a foot in two temporal realities. A couple of quick passages follow from a new Gibson Q&A conducted by David Kushner of Rolling Stone.

____________________________

Question: 

You also lost your father when you were a kid. How did that affect your development as a writer?

William Gibson:

Well, in the first place, I think there’s simply the mechanism of trauma in early life, which as an adult having watched other people go through that now, I can understand as being profoundly destabilizing. But the other thing it did was it caused my mother to return to the small town in Virginia from which both she and my father were originally from. So my earliest childhood memories were of living in a 1950s universe of Fifties stuff, as the construction company my father worked for built infrastructure projects across the South. . .lots of Levittown-style subdivisions. After my father’s death we returned to this little place in the mountains where you look out the window and in one direction, you might see tailfins and you’d know you were in the early Sixties. In the other, you’d see a guy with a straw hat using a mule to plow a field — and it could have been like 1890 or 1915. It felt to me like being exiled in the past; I was taken away from this sort of modern world, and partially emerged in this strange old place that, perhaps because of the traumatic circumstances of my arrival, I never entirely came to feel a part of. I observed the people around me as though I was something else. I didn’t feel that I was what they were. I can see that as the beginning of the novelistic mind.

____________________________

Question: 

At the time you coined “cyberspace,” you’d supposedly barely spent any time on a computer. That’s hard to believe.

William Gibson:

Oh no, I had scarcely seen one. Personal computers were not common objects at all, and I had been writing short fiction on the kind of manual portable that hipsters are starting to pay really good money for now. And then a friend of mine called from Texas and said, “My dad just gave me this machine called an Apple IIc, and, like, it automates the writing of fiction — you’ve gotta get one.” So I went down to a department store, which was the only Apple dealership in town. I bought the IIc and the printer and the bits you needed to make it work and took it all home in a box, and never looked back. It was a godsend for me because I can’t type, and having this endlessly correctable, effortlessly correctable way to write was fantastic.

Question: 

In fact, you came up with the idea of cyberspace after seeing some kids playing video games in an arcade. What was it about them that inspired you?

William Gibson:

It was their body language, this physical manifestation of some kind of intense yearning. And it seemed to me that had they been able to, they would have reached through the screen — like, reached through the glass — and directly manipulated the pixels to get the result they wanted. It was the combination of that seeing these gamers and those ads for early laptops. I made the imaginative leap that behind the screen of each personal computer, there was a notional space. And what if the notional space behind the screen of each computer was a shared notional space? And that was all it took to have the cyberspace idea. I had some vague, vague sense of what the Internet then consisted of, because I knew a few people in Seattle who worked for very, very early iterations of the Seattle digital tech scene. They talked about DARPA, they talked about the Internet. The idea that there was an Internet was less a part of what I was doing than my sense that there could be a shared notional space and that it would be extra-geographical. The space behind the screen was the same space behind the screen in Vancouver or Nairobi.•

E.O. Wilson has suggested we set aside half the Earth for non-human species, to protect against their extinction and our own, but the “land grant” will probably be far less generous, and architecturally progressive zoos that replicate natural habitats may ultimately be a remembrance of things past not just for elephants and otters and their environs but perhaps for us and ours as well. From “The Dark Side of Zootopia,” Charles Siebert’s New York Times essay about a Danish zoo being redesigned to represent a vanishing reality:

“Zootopia, as it will be known, is still some five years from completion. A 300-acre reconfiguration of the Givskud Zoo in southern Denmark, it is among the latest visions of the Danish architectural wunderkind Bjarke Ingels, who is a crafter of low-sweeping, undulant structures that hew so closely to the contours of their natural surroundings that they at times appear subsumed by them. Eschewing the anthropocentric architecture of traditional zoos — the pagodalike pavilions of native Asian animal exhibits or the thatched jungle-lodge verandas of the African — Zootopia will secrete visitors in those airborne pods or in unseen quarters within the habitats: cratered lodges for viewing the savanna; subterranean bunkers and huts for watching the tigers or lions; cabins concealed by bamboo or stacks of lumber, allowing viewers to all but nuzzle up to pandas and grizzlies. The design enfolds the buildings and us humans into the landscape as a means of sparing the animals from our obtrusive gawking, if not fully freeing them. …

Ultimately Zootopia is not a reinvention of the zoo as much as a prefigurement of its inhabitants’ only possible future, at least on our relatively brief watch. That is, a wilderness with us lurking at its very heart, seated at open-air cafe tables, before we venture back out toward a dimly remembered past and steal our glimpses of it from discreet encampments designed to hide us not from the animals but from our own irrepressible need to spy on them. By the time its gates open circa 2020, Zootopia could well be one of the singular achievements of the anthropocene, a time when human representations of the wild threaten to become the wild’s reality.”

For John Wanamaker, being America’s first great merchant wasn’t merely about ringing cash registers. It was also about innovation in a number of ways, many of which weren’t directly reflected in the bottom line.

The owner and operator of a pair of humongous department stores, one opened in 1876 in Philadelphia and the other 20 years later in Manhattan, Wanamaker believed that rather than looking at your customer as a short-term mark, you should cultivate a long-standing relationship based on trust and satisfaction–not the conventional wisdom at the time–and he introduced the price tag and allowed money-back guarantees. He was the first to wisely exploit the power of print advertising, but he sold you what he’d promised.

He also turned his emporiums into experiments in communications and technology, putting telephones in his stores as early as 1879, allowing his roofs to be used as launching pads for balloonists in aviation’s pioneering days and installing wireless radio stations in his sprawling shops (customers listened to live reports of the sinking of the Titanic). Having the world’s largest playable pipe organ in his on-site theater and a working train car suspended from the ceiling to carry children around the toy department were nice flourishes as well. Wanamaker didn’t spoil his customers by starving his employees: He paid them holiday bonuses and gave them medical care and athletic facilities and other benefits. His passing was reported in an article in the December 12, 1922 Brooklyn Daily Eagle, an excerpt from which follows.

The Sharing Economy is still a very small portion of GDP, but it will likely grow, whether or not Uber, one of its biggest current stars, survives. Credit Peter Thiel with saying early and loudly that the dominant rideshare company might Napster itself out of existence, so flagrant is it in flouting laws and even common sense. Travis Kalanick’s outfit might survive these early bumps, but they’re getting to be frequent and embarrassing. None of these dubious methods are new to Silicon Valley, of course, Microsoft itself having been just as aggressive in its heyday. But that company was already a giant when its behavior came to light, and it was ultimately punished by the government for its actions. Uber is in a much more vulnerable position. From Scott Austin and Douglas MacMillan at WSJ:

“In his bid to upend the taxicab industry, Uber CEO Travis Kalanick has simultaneously declared war on taxi drivers, governments around the world and smaller rivals like Lyft.

But on its war path to a $30 billion valuation, Uber continues to battle itself with questionable comments and tactics that are in danger of harming the company’s reputation and becoming a liability.

The latest controversy came Monday night when Buzzfeed reported that an Uber executive suggested the company should invest $1 million in an opposition research team to dig up dirt on media critics’ personal lives and families. The comments from Emil Michael, Uber’s senior vice president of business, were made at a dinner in New York that included Kalanick, celebrities and some journalists.

Michael was directing his words in particular at PandoDaily editor Sarah Lacy, who he believes has written harsh words about Uber including accusing the company of ‘sexism and misogyny.’ As Buzzfeed reported:

At the dinner, Michael expressed outrage at Lacy’s column and said that women are far more likely to get assaulted by taxi drivers than Uber drivers. He said that he thought Lacy should be held ‘personally responsible’ for any woman who followed her lead in deleting Uber and was then sexually assaulted.

Then he returned to the opposition research plan. Uber’s dirt-diggers, Michael said, could expose Lacy. They could, in particular, prove a particular and very specific claim about her personal life.”

AI has traditionally been limited in its pattern recognition, capable of identifying single objects but unable to decipher the meaning of actions or interactions. An advance on that front would help make driverless cars and other oh-so-close wonders a reality, and Stanford and Google have just announced breakthroughs. From John Markoff at the New York Times:

“During the past 15 years, video cameras have been placed in a vast number of public and private spaces. In the future, the software operating the cameras will not only be able to identify particular humans via facial recognition, experts say, but also identify certain types of behavior, perhaps even automatically alerting authorities.

Two years ago Google researchers created image-recognition software and presented it with 10 million images taken from YouTube videos. Without human guidance, the program trained itself to recognize cats — a testament to the number of cat videos on YouTube.

Current artificial intelligence programs in new cars already can identify pedestrians and bicyclists from cameras positioned atop the windshield and can stop the car automatically if the driver does not take action to avoid a collision.

But ‘just single object recognition is not very beneficial,’ said Ali Farhadi, a computer scientist at the University of Washington who has published research on software that generates sentences from digital pictures. ‘We’ve focused on objects, and we’ve ignored verbs,’ he said, adding that these programs do not grasp what is going on in an image.

Both the Google and Stanford groups tackled the problem by refining software programs known as neural networks, inspired by our understanding of how the brain works. Neural networks can ‘train’ themselves to discover similarities and patterns in data, even when their human creators do not know the patterns exist.”
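
Neither Markoff’s piece nor the excerpt above includes any code, but the core claim (a network teaching itself structure from unlabeled data) can be sketched in a few lines. Below is a toy autoencoder in Python/NumPy, entirely my own illustration with made-up data and hyperparameters, not anything from the Google or Stanford work: it compresses unlabeled points drawn from two hidden clusters down to a single number, and the learned code ends up separating the clusters even though nothing in the training signal names them.

```python
# A minimal sketch of "self-training" pattern discovery: an autoencoder learns to
# reconstruct unlabeled 2-D points through a 1-D bottleneck. The clusters are
# synthetic and the architecture/hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: two Gaussian blobs (the hidden "pattern" the net must discover).
X = np.vstack([rng.normal(loc=-2.0, scale=0.5, size=(200, 2)),
               rng.normal(loc=+2.0, scale=0.5, size=(200, 2))])

# Encode 2-D -> 1-D -> decode back to 2-D.
W1 = rng.normal(scale=0.1, size=(2, 1)); b1 = np.zeros(1)
W2 = rng.normal(scale=0.1, size=(1, 2)); b2 = np.zeros(2)
lr = 0.05

for step in range(3000):
    h = np.tanh(X @ W1 + b1)            # 1-D code for every point
    X_hat = h @ W2 + b2                 # reconstruction
    err = X_hat - X                     # reconstruction error
    dW2 = h.T @ err / len(X)            # gradients of the mean squared error
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)      # backprop through tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                     # plain gradient descent

codes = np.tanh(X @ W1 + b1).ravel()
print("mean code, first blob:  %+.2f" % codes[:200].mean())
print("mean code, second blob: %+.2f" % codes[200:].mean())
# The two means should come out on opposite sides of zero: the network separates
# the clusters without ever being told they exist.
```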

___________________________

“Why not devote your powers to discerning patterns?”

Michael Lewis has a different kind of take on wealth disparity in the U.S. In a New Republic review of Darrell M. West’s book Billionaires, Lewis remains skeptical that ridiculously prosperous Americans can win elections or influence issues, even in a nation defined by Citizens United and growing income inequality. (I don’t know that we’ve yet arrived at the endgame on that issue.) But he still thinks superwealth may make people assholes, or at the very least, uncaring and unhappy–that apart from money, they aren’t very rich. It’s a generalization, sure, though it’s difficult to imagine that being cosseted by a fortune wouldn’t alter a person’s worldview, wouldn’t allow them to arrange their reality as they wish, minus that helpful friction the rest of us encounter when we want our own way regardless of how it affects others. At any rate, Lewis comes armed with a trove of research by social scientists, psychologists and neuroscientists. An excerpt:

“What is clear about rich people and their money–and becoming ever clearer–is how it changes them. A body of quirky but persuasive research has sought to understand the effects of wealth and privilege on human behavior–and any future book about the nature of billionaires would do well to consult it. One especially fertile source is the University of California, Berkeley, psychology department lab overseen by a professor named Dacher Keltner. In one study, Keltner and his colleague Paul Piff installed note-takers and cameras at city street intersections with four-way stop signs. The people driving expensive cars were four times more likely to cut in front of other drivers than drivers of cheap cars. The researchers then followed the drivers to the city’s crosswalks and positioned themselves as pedestrians, waiting to cross the street. The drivers in the cheap cars all respected the pedestrians’ right of way. The drivers in the expensive cars ignored the pedestrians 46.2 percent of the time–a finding that was replicated in spirit by another team of researchers in Manhattan, who found drivers of expensive cars were far more likely to double park. In yet another study, the Berkeley researchers invited a cross section of the population into their lab and marched them through a series of tasks. Upon leaving the laboratory testing room the subjects passed a big jar of candy. The richer the person, the more likely he was to reach in and take candy from the jar–and ignore the big sign on the jar that said the candy was for the children who passed through the department.

Maybe my favorite study done by the Berkeley team rigged a game with cash prizes in favor of one of the players, and then showed how that person, as he grows richer, becomes more likely to cheat. In his forthcoming book on power, Keltner contemplates his findings: 

If I have $100,000 in my bank account, winning $50 alters my personal wealth in trivial fashion. It just isn’t that big of a deal. If I have $84 in my bank account, winning $50 not only changes my personal wealth significantly, it matters in terms of the quality of my life–the extra $50 changes what bill I might be able to pay, what I might put in my refrigerator at the end of the month, the kind of date I would go out on, or whether or not I could buy a beer for a friend. The value of winning $50 is greater for the poor, and, by implication, the incentive for lying in our study greater. Yet it was our wealthy participants who were far more likely to lie for the chance of winning fifty bucks.

There is plenty more like this to be found, if you look for it. A team of researchers at the New York State Psychiatric Institute surveyed 43,000 Americans and found that, by some wide margin, the rich were more likely to shoplift than the poor. Another study, by a coalition of nonprofits called the Independent Sector, revealed that people with incomes below twenty-five grand give away, on average, 4.2 percent of their income, while those earning more than 150 grand a year give away only 2.7 percent. A UCLA neuroscientist named Keely Muscatell has published an interesting paper showing that wealth quiets the nerves in the brain associated with empathy: if you show rich people and poor people pictures of kids with cancer, the poor people’s brains exhibit a great deal more activity than the rich people’s.”

Sure, it’s nice that Tim Cook and Mark Zuckerberg and the like are philanthropic, but you have to pause and wonder why there’s such a desperate demand for CEO largesse. How much do corporate tax loopholes contribute to the need? From Suzanne McGee at the Guardian:

“How liberal, really, are these boardroom liberals?

Mind you, these are the same people who squawk, very loudly, at any suggestion that the fees they collect for managing their funds should be taxed as ordinary income, instead of as capital gains.

They’ve been fighting for years any suggestion that their primary source of income should be taxed at the same higher rates as those people whom their philanthropy helps.

If the tax rate changes, those millionaires and billionaires would be paying an effective rate of 39%, rather than 20%. With that kind of tax revenue, perhaps the government wouldn’t be slashing away at social programs that now have to come, cap in hand, to the Robin Hood Foundation to ask for some of those refrigerator-sized checks. Then the philanthropists can make their decisions based on whatever personal criteria they find most compelling.

That’s not to take away from what the Robin Hood Foundation’s do-gooders accomplish. Without them, life would be a lot tougher for New York’s poorest citizens. Being a boardroom liberal may very well be better than being a boardroom tyrant, terrorizing your staff from the chief financial officer down to the guy in the mailroom.

But the reason boardroom liberals need to exist at all is the fact that the social safety net that once existed has collapsed, and while some of that can probably be traced to waste and mismanagement, another giant chunk is simply due to lack of resources.

Consider: US tax revenues are at their lowest rate since 1950, which means less money to fund government programs. At the same time, US income inequality is at its highest point since the Great Depression, meaning the rich are richer than they were even a few years ago.”

Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nation-wide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness, in humans and machines.

I get along famously with New York security guards, and at some point pretty much every one of them tells me about the stint they did in prison. They know the industry from the inside out, so to speak. Robots, conversely, have a clean record, and while they won’t devastate every industry in the near term, security is a natural fit for their functions. From Rachel Metz at Technology Review:

“As the sun set on a warm November afternoon, a quartet of five-foot-tall, 300-pound shiny white robots patrolled in front of Building 1 on Microsoft’s Silicon Valley campus. Looking like a crew of slick Daleks imbued with the grace of Fred Astaire, they whirred quietly across the concrete in different directions, stopping and turning in place so as to avoid running into trash cans, walls, and other obstacles.

The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools.

Knightscope, a startup based in Mountain View, California, has been busy designing, building, and testing the robot, known as the K5, since 2013. Seven have been built so far, and the company plans to deploy four before the end of the year at an as-yet-unnamed technology company in the area. The robots are designed to detect anomalous behavior, such as someone walking through a building at night, and report back to a remote security center.

‘This takes away the monotonous and sometimes dangerous work, and leaves the strategic work to law enforcement or private security, depending on the application,’ Knightscope cofounder and vice president of sales and marketing Stacy Stephens said as a K5 glided nearby.”

This strange 1975 photo captures Ingmar Bergman in Hollywood enjoying a tender moment with the Jaws prop shark nicknamed “Bruce.” Before that film was a big-screen game changer helmed by Steven Spielberg, it was a 1974 bestseller by Peter Benchley, and before that still it was a 1967 Holiday magazine article (“Shark!”) from the same writer. Here’s the opening of the first, journalistic version:

ONE WARM SUMMER DAY I was standing on a beach near Tom Never’s Head on Nantucket. Children were splashing around in the gentle surf as their mothers lay gabbing by the Styrofoam ice chests and the Scotch Grills. About thirty yards from shore, a man paddled back and forth, swimming in a jerky, tiring, head-out-of-the-water fashion. I had just remarked dully that the water was unusually calm, when I noticed a black speck cruising slowly up the beach some twenty yards beyond the lone swimmer. It seemed to dip in and out of the water, staying on the surface for perhaps five seconds, then disappearing for one or two, then reappearing for five. I ran down to the water and waved my arms at the man. At first he paid no attention, and kept plodding on. Then he noticed me. I pointed out to sea, cupped my hands over my mouth, and bellowed, ‘Shark!’ He turned and saw the short, triangular fin moving almost parallel with him. Immediately he lunged for the shore in a frantic sprint. The fish, which had taken no notice of the swimmer, became curious at the sudden disturbance in the water, and I saw the fin turn inshore. It moved lazily, but not aimlessly.

By now the man had reached chest-deep water, and while he could probably have made better time by swimming, he elected to run. Running in five feet of water is something like trying to skip rope in a vat of peanut butter, and I could see his eyes bug and his face turn bright cerise as he slogged along. He didn’t look around, which was probably just as well, for the fish was no more than fifty yards behind him. At waist depth, the terrified man assumed Messianic talents. He seemed to lift out of the water, his legs churning wildly, his arms flailing. He hit the beach at a dead run and fled as far as the dunes, where he collapsed. The shark, discovering that whatever had roiled the water had disappeared, turned back and resumed his idle cruise just beyond the small breakers.

During the man’s race for land, the children had miraculously vanished from the surf, and now they were being bundled into towels by frenzied mothers. One child was bawling, “But I want to play!” His mother snapped, “No! There’s a shark out there.” The shark was out of sight down the beach, and for a time the ladies stood around staring at the water, evidently expecting the sea to regurgitate a mass of unspeakable horrors. Then, as if on mute cue, they all at once packed their coolers, grills, rafts, inner tubes and aluminum beach chairs and marched to their cars. The afternoon was still young, and the shark had obviously found this beach unappetizing (dining is poor for sharks closer than a half a mile off the beach at that part of the south shore of Nantucket). But to the mothers, the whole area—sand as well as water—was polluted.

Irrational behavior has always been man’s reaction to the presence of sharks.•

PTSD and other disorders that result from historical horrors (wars, slavery, etc.) seem to be intergenerational not just because of nurture but due to nature as well, with the hormone cortisol playing a significant role in perpetuating the pain. So, it’s not just the ghosts making mayhem but also a heritable biological reordering which victims unknowingly pass on to descendants. Can this phenomenon be neutralized? From “The Science of Suffering,” by Judith Shulevitz at The New Republic:

“In the early ’80s, a Lakota professor of social work named Maria Yellow Horse Brave Heart coined the phrase ‘historical trauma.’ What she meant was ‘the cumulative emotional and psychological wounding over the lifespan and across generations.’ Another phrase she used was ‘soul wound.’ The wounding of the Native American soul, of course, went on for more than 500 years by way of massacres, land theft, displacement, enslavement, then–well into the twentieth century–the removal of Native American children from their families to what were known as Indian residential schools. These were grim, Dickensian places where some children died in tuberculosis epidemics and others were shackled to beds, beaten, and raped.

Brave Heart did her most important research near the Pine Ridge Reservation in South Dakota, the home of Oglala Lakota and the site of some of the most notorious events in Native American martyrology. In 1890, the most famous of the Ghost Dances that swept the Great Plains took place in Pine Ridge. We might call the Ghost Dances a millenarian movement; its prophet claimed that, if the Indians danced, God would sweep away their present woes and unite the living and the dead. The Bureau of Indian Affairs, however, took the dances at Pine Ridge as acts of aggression and brought in troops who killed the chief, Sitting Bull, and chased the fleeing Lakota to the banks of Wounded Knee Creek, where they slaughtered hundreds and threw their bodies in mass graves. (Wounded Knee also gave its name to the protest of 1973 that brought national attention to the American Indian Movement.) Afterward, survivors couldn’t mourn their dead because the federal government had outlawed Indian religious ceremonies. The whites thought they were civilizing the savages.

Today, the Pine Ridge Reservation is one of the poorest spots in the United States. According to census data, annual income per capita in the largest county on the reservation hovers around $9,000. Almost a quarter of all adults there who are classified as being in the labor force are unemployed. (Bureau of Indian Affairs figures are darker; they estimate that only 37 percent of all local Native American adults are employed.) According to a health data research center at the University of Washington, life expectancy for men in the county ranks in the lowest 10 percent of all American counties; for women, it’s in the bottom quartile. In a now classic 1946 study of Lakota children from Pine Ridge, the anthropologist Gordon Macgregor identified some predominant features of their personalities: numbness, sadness, inhibition, anxiety, hypervigilance, a not-unreasonable sense that the outside world was implacably hostile. They ruminated on death and dead relatives. Decades later, Mary Crow Dog, a Lakota woman, wrote a memoir in which she cited nightmares of slaughters past that sound almost like forms of collective memory: ‘In my dream I had been going back into another life,’ she wrote. ‘I saw tipis and Indians camping … and then, suddenly, I saw white soldiers riding into camp, killing women and children, raping, cutting throats. It was so real … sights I did not want to see, but had to see against my will; the screaming of children that I did not want to hear. … And the only thing I could do was cry. … For a long time after that dream, I felt depressed, as if all life had been drained from me.’

Brave Heart’s subjects were mainly Lakota social-service providers and community leaders, all of them high-functioning and employed. The vast majority had lived on the reservation at some point in their lives, and evinced symptoms of what she called unmourned loss. Eighty-one percent had drinking problems. Survivor guilt was widespread. In a study of a similar population, many spoke about early deaths in the family from heart disease and high rates of asthma. Some of her subjects had hypertension. They harbored thoughts of suicide and identified intensely with the dead.”

Google X, the Bell Labs-ish moonshot division of the search giant, may pay off financially in the long run, but it’s likely already paying off in the short term in non-obvious ways. From Ezra Klein’s new Vox interview with Peter Thiel:

Ezra Klein:

I want to try to draw out this idea of a company’s mission a bit more. Imagine two versions of Google. The non-mission oriented Google is, ‘We want to build a search engine that’ll be the best search engine in the world. If we’re dominant in that market, we’re going to be able to extract huge advertising revenues.’ The mission-oriented one is, ‘Our goal as a company is to categorize and make accessible all the world’s information.’

Peter Thiel:

Yes.

I think the second description is certainly far more inspiring. Maybe it starts by building a much better search engine, but then maybe over time, you have to develop mapping technology, maybe you start building self-driving cars as a way to see how well your mapping technology works. It certainly, I think, feels very different to the people working at the company. I think Google still is a very charismatic company for a company of its size.

Ezra Klein:

That’s an interesting point. Google does all of these things that are not obvious profit drivers. The massive effort to digitize books, the decision to send camels across the Sahara to work on mapping the desert. A lot of that, they’re losing money on. But it’s partially a recruitment tool — it makes them, in your word, more charismatic than their competitors.

Peter Thiel:

One level in which these companies do still compete very much is for talent. Silicon Valley is very competitive with Wall Street banks. And there’s a way in which the day-to-day jobs are similar: people sit in front of computers, the people went to similar colleges and universities, even the office floor-planning is kind of similar. There are more similarities than one might think. But the narrative at Google is much, much better than at Goldman. That’s why they’re beating a place like Goldman incredibly in this talent war.”

Despite the best efforts of the Immortality-Industrial Complex, I think it very likely that you and I and Ray Kurzweil and Hans Moravec and Michio Kaku and Marshall Brain and Aubrey de Grey will pass away this century, without the opportunity to choose forever. But that doesn’t mean that an everlasting arrangement of some sort–of many different sorts?–won’t be possible in the future. That might get interesting. From John G. Messerly at Salon:

Now more than ever, the topic of death is marked by no shortage of diverging opinions. 

On the one hand, there are serious thinkers — Ray Kurzweil, Hans Moravec, Michio Kaku, Marshall Brain, Aubrey de Grey and others — who foresee that technology may enable humans to defeat death. There are also dissenters who argue that this is exceedingly unlikely. And there are those like Bill Joy who think that such technologies are technologically feasible but morally reprehensible.

As a non-scientist I am not qualified to evaluate scientific claims about what science can and cannot do. What I can say is that plausible scenarios for overcoming death have now appeared. This leads to the following questions: If individuals could choose immortality, should they? Should societies fund and promote research to defeat death?

The question regarding individuals has a straightforward answer: We should respect the right of autonomous individuals to choose for themselves. If an effective pill that stops or reverses aging becomes available at your local pharmacy, then you should be free to use it. (My guess is that such a pill would be wildly popular! Consider what people spend on vitamins and other elixirs on the basis of little or no evidence of their efficacy.) Or if, as you approach death, you are offered the opportunity to have your consciousness transferred to a younger, cloned body, a genetically engineered body, a robotic body, or into a virtual reality, you should be free to do so.

I believe that nearly everyone will use such technologies once they are demonstrated as effective. But if individuals prefer to die in the hope that the gods will revive them in a paradise, thereby granting them reprieve from everlasting torment, then we should respect that too. Individuals should be free to end their lives even after death has become optional for them.

However, the argument about whether a society should fund and promote the research relevant to eliminating death is more complex.•

In her disapproving New York Review of Books piece on Naomi Klein’s “This Changes Everything: Capitalism vs. the Climate,” Elizabeth Kolbert points out that the truth about climate change isn’t only inconvenient, it’s considered a deal-breaker, even by the supposedly green. An excerpt follows.

_____________________________

What would it take to radically reduce global carbon emissions and to do so in a way that would alleviate inequality and poverty? Back in 1998, which is to say more than a decade before Klein became interested in climate change, a group of Swiss scientists decided to tackle precisely this question. The plan they came up with became known as the 2,000-Watt Society.

The idea behind the plan is that everyone on the planet is entitled to generate (more or less) the same emissions, meaning everyone should use (more or less) the same amount of energy. Most of us don’t think about our energy consumption—to the extent we think about it at all—in terms of watts or watt-hours. All you really need to know to understand the plan is that, if you’re American, you currently live in a 12,000-watt society; if you’re Dutch, you live in an 8,000-watt society; if you’re Swiss, you live in a 5,000-watt society; and if you’re Bangladeshi you live in a 300-watt society. Thus, for Americans, living on 2,000 watts would mean cutting consumption by more than four fifths; for Bangladeshis it would mean increasing it almost by a factor of seven.

To investigate what a 2,000-watt lifestyle might look like, the authors of the plan came up with a set of six fictional Swiss families. Even those who lived in super energy-efficient houses, had sold their cars, and flew very rarely turned out to be consuming more than 2,000 watts per person. Only “Alice,” a resident of a retirement home who had no TV or personal computer and occasionally took the train to visit her children, met the target.

The need to reduce carbon emissions is, ostensibly, what This Changes Everything is all about. Yet apart from applauding the solar installations of the Northern Cheyenne, Klein avoids looking at all closely at what this would entail. She vaguely tells us that we’ll have to consume less, but not how much less, or what we’ll have to give up. At various points, she calls for a carbon tax. This is certainly a good idea, and one that’s advocated by many economists, but it hardly seems to challenge the basic logic of capitalism. Near the start of the book, Klein floats the “managed degrowth” concept, which might also be called economic contraction, but once again, how this might play out she leaves unexplored. Even more confoundingly, by the end of the book she seems to have rejected the idea. “Shrinking humanity’s impact or ‘footprint,’” she writes, is “simply not an option today.”

In place of “degrowth” she offers “regeneration,” a concept so cheerfully fuzzy I won’t even attempt to explain it. Regeneration, Klein writes, “is active: we become full participants in the process of maximizing life’s creativity.”

To draw on Klein paraphrasing Al Gore, here’s my inconvenient truth: when you tell people what it would actually take to radically reduce carbon emissions, they turn away. They don’t want to give up air travel or air conditioning or HDTV or trips to the mall or the family car or the myriad other things that go along with consuming 5,000 or 8,000 or 12,000 watts. All the major environmental groups know this, which is why they maintain, contrary to the requirements of a 2,000-watt society, that climate change can be tackled with minimal disruption to “the American way of life.” And Klein, you have to assume, knows it too. The irony of her book is that she ends up exactly where the “warmists” do, telling a fable she hopes will do some good.•
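
The watt arithmetic Kolbert quotes is easy to check. Here’s a quick back-of-envelope calculation in Python (my own, using only the per-country figures cited above):

```python
# Verify the quoted claims: Americans at ~12,000 W must cut by "more than four fifths"
# to reach 2,000 W; Bangladeshis at ~300 W must increase "almost by a factor of seven."
current_watts = {"American": 12000, "Dutch": 8000, "Swiss": 5000, "Bangladeshi": 300}
TARGET = 2000  # watts per person in the 2,000-Watt Society

for nationality, watts in current_watts.items():
    ratio = TARGET / watts
    if ratio < 1:
        print(f"{nationality}: cut consumption by {1 - ratio:.0%}")
    else:
        print(f"{nationality}: increase consumption {ratio:.1f}x")
# American: cut consumption by 83%   (more than four fifths)
# Bangladeshi: increase consumption 6.7x   (almost a factor of seven)
```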

I’ve recently been reading a lot of the old-school Holiday magazine, that wonderful thing, and one of the most fun pieces centers on far-flung travel indeed: Arthur C. Clarke’s 1953 prognostication of Mars as a residential address and a pleasure destination. An excerpt:

So you’re going to Mars? That’s still quite an adventure—though I suppose that in another ten years no one will think twice about it. Sometimes it’s hard to remember that the first ships reached Mars scarcely more than half a century ago, and that our settlement on the planet is less than thirty years old.

You’ve probably read all the forms and literature they gave you at the Department of Extraterrestrial Affairs. But here are some additional pointers and background information that may make your trip more enjoyable. I won’t say it’s right up to date—things change so rapidly, and it’s a year since I got back from Mars myself—but on the whole you’ll find it pretty reliable.

Presumably you’re going just for curiosity and excitement; you want to see what life is like out on the new frontier. It’s only fair, therefore, to point out that most of your fellow passengers will be engineers, scientists or administrators traveling to Mars—some of them not for the first time—because they have a job to do. So whatever your achievements are here on Earth, it’s advisable not to talk too much about them, for you’ll be among people who’ve had to tackle much tougher problems.

If you haven’t booked your passage yet, remember that the cost of the ticket varies considerably according to the relative positions of Mars and Earth. That’s a complication we don’t have to worry about when we’re traveling from country to country on our own planet, but Mars can be seven times farther away at one time than at another. Oddly enough, the shortest trips are the most expensive, since they involve the greatest changes of speed as you hop from one orbit to the other. And in space, speed, not distance, is what costs money.

The most economical routes go halfway around the Sun and take eight months, but as no one wants to spend that long in space they’re used only by robot-piloted freighters. At the other extreme are the little super-speed mail ships, which sometimes do the trip in a month. The fastest liners take two or three times as long as this.

Whether you’re taking the bargain $30,000 round trip or one of the de luxe passages, I don’t know. But you must be O.K. physically. The physical strain involved in space flight is negligible, but you’ll be spending at least two months on the trip, and it would be a pity if your appendix started to misbehave.•
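
Clarke’s claim that the cheap route is the slow one, and that changes of speed rather than distance are what cost money, checks out against basic orbital mechanics. The sketch below is my own back-of-envelope calculation, assuming circular, coplanar orbits and rounded constants; it works out the minimum-energy Hohmann transfer from Earth’s orbit to Mars’s orbit, whose travel time lands right around the eight months he mentions.

```python
# Hohmann transfer, Earth orbit -> Mars orbit: the lowest delta-v (cheapest) trajectory,
# which is also the slow one. Constants are rounded; orbits treated as circular/coplanar.
import math

MU_SUN = 1.327e20                      # m^3/s^2, Sun's gravitational parameter
AU = 1.496e11                          # metres
r_earth, r_mars = 1.0 * AU, 1.524 * AU
a = (r_earth + r_mars) / 2             # semi-major axis of the transfer ellipse

def v_circular(r):
    return math.sqrt(MU_SUN / r)       # speed on a circular orbit of radius r

def v_transfer(r):
    return math.sqrt(MU_SUN * (2 / r - 1 / a))   # vis-viva speed on the transfer ellipse

dv_depart = v_transfer(r_earth) - v_circular(r_earth)   # burn to leave Earth's orbit
dv_arrive = v_circular(r_mars) - v_transfer(r_mars)     # burn to match Mars's orbit
t_days = math.pi * math.sqrt(a ** 3 / MU_SUN) / 86400   # half the ellipse's period, in days

print(f"delta-v: {dv_depart/1e3:.1f} + {dv_arrive/1e3:.1f} = {(dv_depart + dv_arrive)/1e3:.1f} km/s")
print(f"one-way travel time: {t_days:.0f} days (about {t_days/30.4:.1f} months)")
# Roughly 5.6 km/s of velocity change and about 8.5 months in transit; a faster
# trajectory needs much bigger burns at both ends, which is Clarke's point about
# speed, not distance, being what costs.
```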

Mars One, that promised interplanetary Truman Show, has always looked like a longshot, a seeming space fugazi. For the uninitiated, the plan, hatched in Holland by entrepreneur Bas Lansdorp, is to send a quartet of Earthlings to our neighboring orb in 2025 on a one-way mission, and to largely sponsor it with a reality TV show, a Big Brother from another planet. I’d be shocked if it ever gets off the ground. From “All Dressed Up For Mars and Nowhere to Go,” a Matter article by the remarkably named Elmo Keep, which focuses on an Australian would-be astronaut, an aspiring Armstrong who’s been shortlisted for a ticket to the unknown:

“Despite not being a space-faring agency, it claims that by 2025 it will send four colonists to the planet. Ultimately, it says, there will be at least six groups of four, a mix of men and women, who will train on Earth for 10 years until they are ready to be shot into space strapped to a rocket, never to return.

It estimates the mission will cost only about $6 billion, tens if not hundreds of billions less than any manned Mars mission so far proposed by NASA. Mars One openly admits that it is ‘not an aerospace company and will not manufacture mission hardware. All equipment will be developed by third-party suppliers and integrated in established facilities.’ That’s how it will keep costs down, by outsourcing everything to private enterprise.

It is, essentially, a marketing campaign with two goals: first, to raise enough interest among the global community in a manned Mars mission so that crowd-funding and advertising revenues will be generated to the tune of billions of dollars; and, second, to use this money — largely to be raised through a reality television series documenting the training process and journey to Mars from Earth — to pay for the mission itself.

The mission is open to anyone in the world who wants to volunteer. These people don’t have to have any special qualifications whatsoever; they need only be in robust physical and mental health and willing to undertake the mission at their own risk. As the proposed program progresses, they will have to prove themselves adept and nimble learners, able to amass an enormous amount of new practical knowledge, not only in the high-pressure intricacies of spaceflight, but in learning how to perform rudimentary surgery and dentistry, how to recycle resources, how to take commands, and maintain a harmonious team dynamic for the rest of their natural lives.

Two hundred thousand applicants would seem to suggest that the plan has solid legs — a staggering number of people willing to sacrifice their lives on Earth to take part in an open-source, crowd-powered, corporately sponsored mission into deep space. A huge amount of interest in this endeavor clearly demonstrated right off the bat.

If only any of it were true.”

Of the new wave of self-designated digital worriers, Jaron Lanier always makes the most sense to me. In his latest Edge essay, “The Myth of AI,” he draws a neat comparison between the religionist’s End of Days and the technologist’s Singularity, the Four Horsemen supposedly arriving in driverless cars. An excerpt:

“To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All of the damages are essentially mirror images of old damages that religion has brought to science in the past.

There’s an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood… If it were to come into existence it would soon gain all power, supreme power, and exceed people.

The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there’s this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it.

That particular idea has been dysfunctional in human history. It’s dysfunctional now, in distorting our relationship to our technology. It’s been dysfunctional in the past in exactly the same way. Only the words have changed.

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, ‘Well, but they’re helping the AI, it’s not us, they’re helping the AI.’ It reminds me of somebody saying, ‘Oh, build these pyramids, it’s in the service of this deity,’ but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”

An article in the November 22, 1939 Brooklyn Daily Eagle tells of technological unemployment coming to the kissing sector in the late 1930s, when Max Factor Jr., scion of the family cosmetics fortune and creator of Pan-Cake make-up, which was favored by early film stars, created robots which would peck to perfection all day, allowing him to test out new lipsticks to his heart’s content. Bad news for professional puckerers Joseph Roberts and Miss June Baker, of course, but such is the nature of progress. The brand-new robots were capable of kissing 1,200 times an hour. Ah, young love!

The top two photos show the senior Max Factor demonstrating his Beauty Micrometer device and touching up French silent-film star Renée Adorée. The last one captures two actresses wearing the make-up Junior created especially for black-and-white TV. 

In the mid-1950s, computer scientist John McCarthy, then at Dartmouth, coined the term “Artificial Intelligence,” and in the next decade he organized a transcontinental computer chess match by telegraph, pitting an American program against a Soviet counterpart. In this video, he’s interviewed by psychologist Jeffrey Mishlove. Without mentioning it by name, they puzzle over Moravec’s paradox, and McCarthy says that computer programs as intelligent as humans may have already been (stealthily) created or perhaps they will require another 500 years of work.

John C. Lilly, neuroscientist, psychonaut and dolphin procurer, is remembered for the isolation tank, LSD experimentation and computerized interspecies communication attempts. In 1998, three years before his death, Lilly and his coonskin cap were interviewed by Jeffrey Mishlove about the “human biocomputer,” sensory isolation, altered states, ketamine usage, the multiverse and hallucinations focused on penis removal. The sound from the guest’s microphone isn’t great.

Malcolm Gladwell thinks American football is a “moral abomination,” and it’s hard to argue, though I wonder about his self-termed “intuition” telling him that European football (or soccer) “can’t possibly compare” in terms of brain injuries. Anyone repeatedly heading a soccer ball that’s been kicked from 50 yards away would seem to me to be at great risk, and that’s not even considering the repetitive heading that all pro soccer players practice from when they’re small children. Perhaps Jeff Astle was, in Gladwellian terms, an outlier, but probably not. Worth thinking about, at any rate. From a new Gladwell interview conducted by Bloomberg’s Emily Chang:

“In a wide-ranging interview with Emily Chang, best-selling author and New Yorker writer Malcolm Gladwell continued his long-standing crusade against football with a harsh indictment. ‘Football is a moral abomination,’ he said and predicted that the sport — currently far and away the most popular and lucrative in America — would eventually ‘wither on the vine.’

The NFL recently revealed that nearly a third of retired players develop long-term cognitive issues much earlier than the general population. ‘We’re not just talking about people limping at the age of 50. We’re talking about brain injuries that are causing horrible, protracted, premature death,’ Gladwell told Chang, picking up a theme he first explored in a 2009 article for The New Yorker which likened football to dogfighting. ‘This…is appalling. Can you point to another industry in America which, in the course of doing business, maims a third of its employees?'”

In a Reddit AMA conducted by new Los Angeles Clippers owner Steve Ballmer (who describes the acquisition as “not awesome and not bad” financially) and Harvard computer science professor David Parkes, the duo discuss the intersection of basketball and technology. An excerpt:

Question:

I was wondering what you feel the future is for technology in basketball?

Steve Ballmer:

There is a lot more tech than I knew changing basketball and the sports fan experience broadly. My favorite is the use of machine learning technology to process game videos from the ceiling to understand, categorize and analyze game play. One of the ML experts at Second Spectrum was a 6′9″ hooper from MIT so so cool ML rocks! The tech can help understand almost anything. Harvard CS will use it and other technologies to transform so many fields and maybe even more for sports.

David Parkes:

Harvard researchers in the school of engineering and applied sciences and statistics are working on probabilistic models to predict the outcome of a particular matchup of two players on the court. Just this week in my class we discussed the use of Markov chains to predict the outcome of NCAA games. Harvard rocks!”
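
Parkes doesn’t spell out the Harvard model, but the flavor of a Markov-chain game predictor is easy to sketch. In the toy Python version below (my own illustration; the team names, per-possession probabilities and rebound rate are made-up assumptions), the state is simply which team has the ball, each possession ends according to fixed outcome probabilities, and Monte Carlo simulation over the chain turns those per-possession numbers into a win probability.

```python
# Toy Markov-chain predictor: state = which team has the ball; a possession yields
# 3, 2, 1 or 0 points, then the ball usually (but not always) changes hands.
# All probabilities and team names below are invented for illustration.
import random

OUTCOMES = [3, 2, 1, 0]                    # points scored on a possession
MODEL = {
    "Team A": [0.10, 0.35, 0.08, 0.47],    # slightly better offense
    "Team B": [0.09, 0.33, 0.08, 0.50],
}
OFF_REBOUND = 0.25                         # chance a scoreless possession is kept alive

def simulate_game(total_possessions=140, rng=random):
    score = {"Team A": 0, "Team B": 0}
    ball = "Team A"
    for _ in range(total_possessions):
        points = rng.choices(OUTCOMES, weights=MODEL[ball])[0]
        score[ball] += points
        other = "Team B" if ball == "Team A" else "Team A"
        if not (points == 0 and rng.random() < OFF_REBOUND):
            ball = other                   # possession changes hands
    return score

def win_probability(trials=20000):
    wins = sum(1 for _ in range(trials)
               if (g := simulate_game())["Team A"] > g["Team B"])
    return wins / trials                   # ties count against Team A, for simplicity

print(f"Estimated P(Team A beats Team B): {win_probability():.2f}")
```

A real matchup model would estimate those per-possession probabilities from the kind of player-tracking data Ballmer describes, rather than hard-coding them.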

____________________________

Don Knuth, the “Electronic Coach,” in 1959:

Peter Thiel, wrong about the Wright brothers, believes we’re not living in a technological age because movies are mean to techies and people are concerned about losing their jobs to their silicon betters. Strange reasoning. I think Tony Stark of Iron Man, fleshed out with Thiel’s friend Elon Musk in mind, is portrayed as the hero of our time. If contemporary film is often unfriendly to technologists, that’s because it’s an easy conflict to sell and because Hollywood and Silicon Valley are currently vying for the title of the “Dream Factory” of California–and the world. At any rate, Thiel’s measures of the degree of our ensconcement in technology are anecdotal, whiny and inefficient. From Brian R. Fitzgerald at the Wall Street Journal:

“[Thiel] said Progress—that’s progress with a capital P—is at the core of any scientific or technological vision for the world. But that talk is counter-cultural right now, and so ‘in many ways we’re not actually living in a scientific or technological age.’

‘We live in a financial and capitalist age,’ Mr. Thiel said. ‘Most people don’t like science, they don’t like technology. You can see it in the movies that Hollywood makes. Tech kills people, it’s dysfunctional, it’s dystopian.’

Not that the PayPal co-founder claims to know how to change society. That used to be government’s role—the atom bomb was built in three and a half years, and Apollo got someone on the moon–but not so much anymore. Today, ‘a letter from Einstein would get lost in the White House mailroom,’ he said.”
