Science/Tech


Really wonderful conversation between Sean Illing of Vox and the economist Tyler Cowen, whose thinking I always admire even when I disagree with him. The two discuss, among other topics, the Internet, biotech, politics, war, Kanye West and the “most dangerous idea in history.” An excerpt:

Question:

How do you view the internet and its impact on human life?

Tyler Cowen:

The internet is great for weirdos. The pre-internet era was not very good for weirdos. I think in some ways we’re still overrating the internet as a whole. It’s wonderful for manipulating information, which appeals to the weirdos. Now it’s going to start changing our physical reality in a way that will be another productivity boom. So I’m very pro-internet.

Question:

What do you think will be the next major technological breakthrough?

Tyler Cowen:

If you mean a single thing that you could put in a single headline, I would say self-driving vehicles. But I think a deeper and more important thing will be a subtle integration of software and hardware in a way that will change everything and won’t have a single name.

Question:

Are you thinking here of the singularity or of something less radical?

Tyler Cowen:

No, nothing like the singularity. But software embedded in devices that will get better and smarter and more interactive and thoughtful, and we’ll be able to do things that we’ll eventually take for granted and we won’t even call them anything.

Question:

Do you think technology is outpacing our politics in dangerous, unpredictable ways?

Tyler Cowen:

Of course it is. And the last time technology outpaced politics, it ended in a very ugly manner, with two world wars. So I worry about that. You get new technologies. People try to use them for conquest and extortion. I’ve no predictions as to how that will play out, but I think there’s at least a good chance that we will look back on this era of relative technological stagnancy and say, “Wasn’t that wonderful?”•


It surprises me that most of us usually think things are worse than they are in the big picture, because we’re awfully good at selective amnesia when it comes to our own lives. Homes in NYC that were demolished by Hurricane Sandy are mostly valued more highly now than right before that disaster, even though they’re located in the exact same lots near the ever-rising sea levels, in the belly of the beast. The buyers are no different than the rest of us who conveniently forget about investment bubbles that went bust and life choices that laid us low. When it comes to our own plans, we can wave away history as a fluke that wouldn’t dare interfere.

When we consider the direction of our nation, however, we often believe hell awaits our handbasket. Why? Maybe because down deep we’re suspicious about the collective, doubtful that anything so unwieldy can ever end up well, so we surrender to both recency and confirmation biases, which skew the way we view today and tomorrow.

While I don’t believe the endless flow of information has made us more informed, it is true that by many measures we’re in better shape now than humans ever have been. On that topic, the Economist reviews Johan Norberg’s glass-half-full title, Progress: Ten Reasons to Look Forward to the Future. The opening:

HUMANS are a gloomy species. Some 71% of Britons think the world is getting worse; only 5% think it is improving. Asked whether global poverty had fallen by half, doubled or remained the same in the past 20 years, only 5% of Americans answered correctly that it had fallen by half. This is not simple ignorance, observes Johan Norberg, a Swedish economic historian and the author of a new book called “Progress”. By guessing randomly, a chimpanzee would pick the right answer (out of three choices) far more often.

People are predisposed to think that things are worse than they are, and they overestimate the likelihood of calamity. This is because they rely not on data, but on how easy it is to recall an example. And bad things are more memorable. The media amplify this distortion. Famines, earthquakes and beheadings all make gripping headlines; “40m Planes Landed Safely Last Year” does not. 

Pessimism has political consequences. Voters who think things were better in the past are more likely to demand that governments turn back the clock. A whopping 81% of Donald Trump’s supporters think life has grown worse in the past 50 years. Among Britons who voted to leave the European Union, 61% believe that most children will be worse off than their parents. Those who voted against Brexit tend to believe the opposite.

Mr Norberg unleashes a tornado of evidence that life is, in fact, getting better.


In last night’s Presidential debate, Donald Trump was slowed by a case of the sniffles and a case of being a complete fucking idiot. The sniffles may go away.

There might be something to predictions made by artificial hive minds, though I wouldn’t bet the house on it, let alone the White House. But you do have to credit UNU for knowing before most of us that Trump was a legitimate contender. A few hours before yesterday’s debate, UNU conducted a Reddit AMA and made the predictions below. None of them really required a great leap of intelligence to answer, and only the hideous hotelier’s less-P.U.-than-usual behavior tripped up the AI.


Question:

Who wins the debate tonight? Does it matter for the election?

UNU:

Clinton wins the debate tonight ‘by a lot’ and ‘i believe’ the debate matters for the election.


Question:

Will Trump come off as A. Presidential, or B. Impeachable?

UNU:

Impeachable.


Question:

Will Donald Trump use the phrase “Crooked Hillary” during the debate?

UNU:

It’s likely.

Question:

Will Trump be aggressive, or on his best behavior?

UNU:

Aggressive 99%.


Question:

Lester Holt is doing a good job of controlling the debate?

UNU:

Totally disagree.


Question:

Will Trump blame the media if he does poorly?

UNU:

It’s likely.


Question:

Roughly speaking, how much impact could this debate have on the overall election?

UNU:

Roughly: A lot


Question:

Who would you rather have babysit your kid, the Donald or HRC?

UNU:

Clinton by a lot•



Even your average Silicon Valley billionaire would find it difficult to gain and maintain a footing on the Moon and Mars and more if playing by the established economics of Big Space. Fortunately, the increasing power and diminishing costs of components in the supercomputers in our pockets have enabled Little Space to compete, turning out satellites and such for a fraction of what it would cost NASA.

It’s this Space Race 2.0 that’s at the heart of Freeman Dyson’s New York Review of Books piece, in which he reviews a raft of recent titles on the topic. The scientist-writer faults NASA and other government bodies for their excessive risk-aversion while acknowledging the cheap-enough-to-fail model may not be bold enough to enable us to fan out among the stars this century. He also analyzes the medium-size methods of deep-pocketed entrepreneurs like Elon Musk, who dream big while trying to trim costs like the little guys.

Ultimately, the reviewer is dissatisfied with all the books because they each focus on engineering to the exclusion of biotechnology, ignoring outré-but-not-impossible visions from the febrile mind of pioneering Russian rocket scientist Konstantin Tsiolkovsky, who prophesied there would be a time centuries in the future when we could alter and create species to enable them to assimilate in space. 

In one passage about more immediate matters, Dyson offers a common-sense retort to NASA’s fear of falling, arguing that mistakes we’ve made on Earth with enclosed habitats like Biosphere 2 aren’t ones we’re destined to repeat. The excerpt:

Charles Wohlforth and Amanda Hendrix’s Beyond Earth describes the prospects for future manned space missions conducted within the Big Space culture. The prospects are generally dismal, for two reasons. The authors suppose that a main motivation for such missions is a desire of humans to escape from catastrophic climate change on Earth. They also suppose any serious risks to the life and health of astronauts to be unacceptable. Under these conditions, few missions are feasible, and most of them are unattractive. Their preferred mission is a human settlement on Titan, the moon of Saturn that most resembles Earth, with a dense atmosphere and a landscape of gentle hills, rivers, and lakes.

But the authors would not permit the humans to grow their own food on Titan. Farming is considered to be impossible because an enclosed habitat with the name Biosphere Two was a failure. It was built in Arizona and occupied in 1991 by eight human volunteers who were supposed to be ecologically self-sufficient, recycling air and water and food in a closed system. The experiment failed because of a number of mistakes in the design. The purpose of such an experiment should be to learn from the failure how to avoid such mistakes in the future. The notion that the failure of a single experiment should cause the abandonment of a whole way of life is an extreme example of the risk-averseness that has come to permeate the Big Space culture.

Farming is an art that achieved success after innumerable failures. So it was in the past and so it will be in the future. Any successful human settlement in space will begin as the Polynesian settlements in the Pacific islands began, with people bringing pigs and chickens and edible plants on their canoes, along with the skills to breed them. The authors of Beyond Earth imagine various possible futures for human settlement in various places, but none of their settlers resemble the Polynesians.


Extrapolating current economic trends into the future is a tricky business. Things change.

The American middle class, besieged for decades by tax codes, globalization, automation, Silicon Valley creative destruction, Washington gridlock and the Great Recession, seems more like a dinosaur each day. Men in the U.S. in particular have watched their opportunities crater, with millions more jobs poised to vanish as soon as driverless vehicles take the wheel in trucking, the taxi industry and delivery. (The last of those occupations will also be emptied out by air and ground drones.)

Nicholas Eberstadt’s Men Without Work suggests this story has been seriously under-reported, the subtitle being “America’s Invisible Crisis.” Like Charles Murray, whom I’m not fond of, the author believes misguided social safety nets have played a large role in creating this unintended consequence. I call bullshit on that theory, which seems more driven by ideology than reality.

In a Financial Times review of the book, Lawrence Summers also disagrees with Eberstadt on how we got into this mess, but he sees a potentially even bleaker future for American males than the author does. Maybe that won’t come to pass, but this is exactly the type of possible outcome we should be discussing right now.

An excerpt:

Now comes Nicholas Eberstadt’s persuasive and important monograph Men Without Work, demonstrating that these issues are not just matters of futurology. Eberstadt, a political economist based at the American Enterprise Institute, marshals a vast amount of data to highlight trends that have been noticed but not adequately emphasised before in the work experience of men in the US. The share of the male population who are neither working, looking for work, in school or old enough to retire has more than doubled over the past 50 years, even though the population has become much healthier and more educated. Today, even with a low overall unemployment rate, roughly one in six men between the ages of 25 and 54 is out of work.

Eberstadt goes on to show that, as one might expect, non-work is a larger issue for those with less education, without spouses or dependent children, for African-Americans and for those who have been convicted of crimes. He finds little redeeming in what those without work are doing, noting that the primary contrast in time use between those in and out of work is in time spent watching TV.

Finally, he highlights that men in the US are doing considerably worse than men in the rest of the industrial world, where even countries with notoriously sclerotic labour markets and bloated welfare systems such as France, and even Greece, enjoy higher rates of prime age male labour force participation.

One can cavil with Eberstadt’s emphasis on labour force withdrawal as distinct from unemployment in looking at the data, particularly when it comes to international comparisons, but overall the evidence he marshals that non-work is currently a crisis is entirely persuasive. As he notes, the impact of non-work on economic growth is the least of it. A society where large numbers of adults in the prime of life are without vocation is unlikely to provide opportunity for all its children, to maintain strong communities or have happy, cohesive families. As we are seeing this fall, such a society is prone to embrace toxic populist politics.

Indeed, Eberstadt understates the significance of what he studies by not highlighting the fact that, if current trends continue, a quarter of men between 25 and 54 will be out of work by mid-century. I would expect Eberstadt’s sorry trends to accelerate as IT accelerates job destruction on the one hand, and developments such as virtual reality make non-work more attractive and addictive on the other, so I can imagine scenarios in which a third or more of men in this cohort are out of work in the US by 2050.

Why is this happening?•


Many dark fictions about technology focus on machines going rogue and running amok, but couldn’t things progress as planned and still lead to trouble if we have poor priorities and make the wrong decisions?

On a 1979 Dick Cavett Show, Ira Levin was asked how he dreamed up the scenario for his chilling novel The Stepford Wives. He answered that after reading about the possibility of robotic domestic servants in Alvin Toffler’s Future Shock, he wondered what would happen if we achieved that goal at a very high level. You know, if everything went according to plan.

Humanoid robots aren’t in our near future, but chatbots and digital assistants will be an increasing part of our lives in the short run. They may eventually get so good that we won’t know sometimes if we’re speaking to a human or not. Perhaps we will be aware, but that won’t stop us from speaking to them as if “they” were people. There will be a relationship. That’s the plan, anyhow.

Some excerpts on that topic from Alvin Toffler’s book:

Whether we grow specialized animals to serve us or develop household robots depends in part on the uneven race between the life sciences and the physical sciences. It may be cheaper to make machines for our purposes, than to raise and train animals. Yet the biological sciences are developing so rapidly that the balance may well tip within our lifetimes. Indeed, the day may even come when we begin to grow our machines. …

We are hurtling toward the time when we will be able to breed both super- and subraces. As Theodore J. Gordon put it in The Future, “Given the ability to tailor the race, I wonder if we would ‘create all men equal,’ or would we choose to manufacture apartheid? Might the races of the future be: a superior group, the DNA controllers; the humble servants; special athletes for the ‘games’; research scientists with 200 IQ and diminutive bodies …” We shall have the power to produce races of morons or of mathematical savants. …

Technicians at Disneyland have created extremely life-like computer-controlled humanoids capable of moving their arms and legs, grimacing, smiling, glowering, simulating fear, joy and a wide range of other emotions. Built of clear plastic that, according to one reporter, “does everything but bleed,” the robots chase girls, play music, fire pistols, and so closely resemble human forms that visitors routinely shriek with fear, flinch and otherwise react as though they were dealing with real human beings. The purposes to which these robots are put may seem trivial, but the technology on which they are based is highly sophisticated. It depends heavily on knowledge acquired from the space program—and this knowledge is accumulating rapidly.

There appears to be no reason, in principle, why we cannot go forward from these present primitive and trivial robots to build humanoid machines capable of extremely varied behavior, capable even of “human” error and seemingly random choice—in short, to make them behaviorally indistinguishable from humans except by means of highly sophisticated or elaborate tests. At that point we shall face the novel sensation of trying to determine whether the smiling, assured humanoid behind the airline reservation counter is a pretty girl or a carefully wired robot.

The likelihood, of course, is that she will be both.

The thrust toward some form of man-machine symbiosis is furthered by our increasing ingenuity in communicating with machines. A great deal of much-publicized work is being done to facilitate the interaction of men and computers. But quite apart from this, Russian and American scientists have both been experimenting with the placement or implantation of detectors that pick up signals from the nerve ends at the stub of an amputated limb. These signals are then amplified and used to activate an artificial limb, thereby making a machine directly and sensitively responsive to the nervous system of a human being. The human need not “think out” his desires; even involuntary impulses are transmittable. The responsive behavior of the machine is as automatic as the behavior of one’s own hand, eye or leg.•


Unions have always represented workers, but perhaps the Digital Age will see the unionization of non-workers.

I don’t mean people who don’t want to toil but rather those who are simply squeezed from the system, with robotics and sensors thinning out the need for carbon-based employees. Since the Great Recession, American corporations have been able to throw away any applicant with a gap on their résumé or any other blemish. If positions continue to diminish, we could be headed for a real-world Hunger Games.

In a Venturebeat essay, Flint Capital’s Artem Burachenok wonders if mass automation will bring about the return of a “feudalistic lack of social mobility.” Should that come to pass, he predicts several potential reactions. Like Baidu’s Andrew Ng, Burachenok believes a new New Deal may become necessary. Two of his entries:

2. The end of college. College loan debt has turned into one of the biggest budget black holes in the US today, with more and more students borrowing huge sums of money to finance degrees at all levels, in the hope of cracking an increasingly competitive job market. But in the near future, it won’t be their classmates that these grads will compete with – it will be the robots, and most will find themselves losing out. As a result, young people who aspire to more than a McJob will opt for trade schools where they can learn skills. Either way, they won’t need, or want, to go into hock financing a college education. The top-tier universities, of course, will be educating the children of the 1 percent; it’s the state and non-Ivy private colleges that will lose out.

4. A new New Deal? The last time there was a hollowing out of the middle class was during the Great Depression – and it took massive government intervention in the economy to set things right. The United States admittedly has a much more socialized economy now than it did in the 1930s, and special interests carry a great deal of weight in Washington nowadays – but as the full pain of AI is felt among the vast majority, there will be increasing pressure on politicians to do something.


If there are aliens out there, Sir Martin Rees feels fairly certain they’re conscious machines, not oxygen-hoarding humans. It’s just too inhospitable for carbon beings to travel beyond our solar system. He allows that perhaps cyborgs, a form of semi-organic post-humans, could possibly make a go of it. But that’s as close a reflection of ourselves as we may be able to see in space.

In the BBC Future piece “What If the Aliens We Are Looking For Are AI?” Richard Hollingham explores this theory, wondering if a lack of contact can be explained by the limits we put on our search by expecting a familiar face in the final frontier. The opening:

For more than a century we have been broadcasting our presence to the cosmos. This year, the faintest signals from the world’s first major televised event – the Nazi-hosted 1936 Olympics – will have passed several potentially habitable planets. The first season of Game of Thrones has already reached the nearest star beyond our Solar System.

So why hasn’t ET called us back?

There are plenty of obvious answers. Maybe there are no intelligent space aliens in our immediate cosmic vicinity. Perhaps they have never evolved beyond unthinking microbial slime or – based on our transmissions – aliens have concluded it is safer to stay away. There is, however, another explanation: ET is nothing like us.

“If we do find a signal, we shouldn’t expect it’s going to be some sort of soft squishy protoplasmic alien behind the microphone at the other end,” says Seth Shostak, senior astronomer for alien-hunting organisation Search for Extraterrestrial Intelligence (Seti).

Seti has been actively searching for signs of intelligent extraterrestrial life for more than half a century. Despite tantalising signals (such as this recent one), it has so far drawn a blank. But Shostak believes we should consider looking to our own future to imagine what aliens will be like.

“Perhaps the most significant thing we’re doing is to develop our own successors,” says Shostak. “If we can develop artificial intelligence within a couple of hundred years of inventing radio, any aliens we are likely to hear from have very likely gone past that point.”

“In other words,” he says, “most of the intelligence in the cosmos, I would venture, is synthetic intelligence and that may disappoint movie goers who expect little grey guys with big eyeballs, no clothes, no hair or sense of humour.”•


I occasionally joke that the 12,000-plus posts I’ve published on Afflictor over the years would be a good morning for Andrew Sullivan, but I win because I didn’t support the invasion of Iraq. 

Sullivan was, until recent times, a scarily prolific blogger who seemed to turn himself into a machine in the process of working with them, running a sprint against a stream of information that never rested, never stopped. Finally, he stopped.

The writer wasn’t Zen enough to gladly accept the Bob Marley credo that “the day you stop racing is the day you win the race,” but he knew he needed help for what had become an addiction to a “constant dopamine bath for the writerly ego,” one that had begun to damage his physical as well as mental health. In the insightful New York magazine article “I Used to Be a Human Being,” Sullivan recounts the experience of using meditation, nature walks and quietism to break from this pernicious cycle, a piece which contains great lines like this one: “If the churches came to understand that the greatest threat to faith today is not hedonism but distraction, perhaps they might begin to appeal anew to a frazzled digital generation.”

The thing is, he began noticing that though he was an early adopter of the antic existence of living-inside-the-web, social media, smartphones, and apps had soon enough ushered a huge chunk of the globe inside. I sometimes wonder if being caught in an endless maelstrom of information caused Sullivan to lack the distance and perspective to see the disaster the Iraq War was likely to become. Now I ponder the same about us all when we’re making vital decisions in this world of cold wars and hot takes. And once the Internet of Things is ubiquitous, we’ll all be inside of a machine with no OFF switch.

The opening:

I was sitting in a large meditation hall in a converted novitiate in central Massachusetts when I reached into my pocket for my iPhone. A woman in the front of the room gamely held a basket in front of her, beaming beneficently, like a priest with a collection plate. I duly surrendered my little device, only to feel a sudden pang of panic on my way back to my seat. If it hadn’t been for everyone staring at me, I might have turned around immediately and asked for it back. But I didn’t. I knew why I’d come here.

A year before, like many addicts, I had sensed a personal crash coming. For a decade and a half, I’d been a web obsessive, publishing blog posts multiple times a day, seven days a week, and ultimately corralling a team that curated the web every 20 minutes during peak hours. Each morning began with a full immersion in the stream of internet consciousness and news, jumping from site to site, tweet to tweet, breaking news story to hottest take, scanning countless images and videos, catching up with multiple memes. Throughout the day, I’d cough up an insight or an argument or a joke about what had just occurred or what was happening right now. And at times, as events took over, I’d spend weeks manically grabbing every tiny scrap of a developing story in order to fuse them into a narrative in real time. I was in an unending dialogue with readers who were caviling, praising, booing, correcting. My brain had never been so occupied so insistently by so many different subjects and in so public a way for so long.

I was, in other words, a very early adopter of what we might now call living-in-the-web. And as the years went by, I realized I was no longer alone. Facebook soon gave everyone the equivalent of their own blog and their own audience. More and more people got a smartphone — connecting them instantly to a deluge of febrile content, forcing them to cull and absorb and assimilate the online torrent as relentlessly as I had once. Twitter emerged as a form of instant blogging of microthoughts. Users were as addicted to the feedback as I had long been — and even more prolific. Then the apps descended, like the rain, to inundate what was left of our free time. It was ubiquitous now, this virtual living, this never-stopping, this always-updating. I remember when I decided to raise the ante on my blog in 2007 and update every half-hour or so, and my editor looked at me as if I were insane. But the insanity was now banality; the once-unimaginable pace of the professional blogger was now the default for everyone.•


Illustration of Marage’s talking machine from Scientific American.

The practical talking machine invented by Dr. R. Marage in fin de siècle Paris was a sensation for a while, though it seems to have passed silently into the vortex of technological history.

A member of the French Academy of Medicine, Marage was attempting with his device (photo here) to outdo Thomas Edison and his phonograph, which reliably offered recorded sound, though it was Edison’s invention that ultimately found a market. It’s only in our time that chatbots and Siri have begun to scratch the surface of machine-conversation potential. While the science behind Marage’s apparatus was immaterial to those innovations, it does remind us that the dream of non-human speech long predated Silicon Valley.

An article from Scientific American touting his achievement was reprinted in the November 3, 1901, Brooklyn Daily Eagle.


Among the litter of largely positive product reviews for AirPods is Michael Brandt’s deeper take on the product in Fast Co.Exist. The writer argues that the quality of the sound is a distant second in importance to the wireless tool being placed just inside the body, acting as a “gateway drug” to future technological implants. Apple is, of course, only one of the Silicon Valley behemoths that would like you to eventually have the implant. As Brandt asserts, “we’re seeing our prediction of humans as the next platform come true.”

The opening:

Apple’s newest headphones, the AirPods, are not actually headphones: they’re aural implants. Absent any wires, earpods are so unobtrusive that you may never take them out. Like the eyeglasses on your face, they’ll sit in your ears all day, and you’ll remove them only at night.

For Apple, this is more than just an incremental update to their headphone product. This is a paradigm shift, a Jedi move by Apple where the seam between human and computer is disappearing.

DEFAULT IN

At first blush, AirPods seem the same as today’s headphones, except with the wires gone. Sure, we’ll use them just like the old ones, to make phone calls and listen to music. But the crux of what’s interesting here is what we’re doing when we’re not using them. Without the lanky wire getting caught on clothes, doorknobs, and backpacks, persistently begging to be wound up and put away, there’s no longer a compelling reason to take them out when you’re done using them. So we won’t.

AirPods are default in, as opposed to traditional wired headphones, which are default out. Much of the criticism lobbied at the design is that you’ll put them down and lose them — but what if you never put them down in the first place?•


Many within the driverless sector think the technology is only five years from remaking our roads and economies. Perhaps they know enough to be sure of such a bold ETA, or perhaps they’re parked in an echo chamber. Either way, it appears this new tool will enter our lives sooner than later.

Beyond perfecting technology that works in all traffic situations and weather conditions are knotty questions about ethics, legislation, Labor, etc. Excerpts follow from two articles on the topic. 


From Anjana Ahuja’s smart FT piece on the problem of crowdsourcing driverless ethics:

Anyone with a computer and a coffee break can contribute to MIT’s mass experiment, which imagines the brakes failing on a fully autonomous vehicle. The vehicle is packed with passengers, and heading towards pedestrians. The experiment depicts 13 variations of the “trolley problem” — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.

In MIT’s reformulation, the runaway is a self-driving car that can keep to its path or swerve; both mean death and destruction. The choice can be between passengers and pedestrians, or two sets of pedestrians. Calculating who should perish involves pitting more lives against fewer, young against old, professionals against the homeless, pregnant women against athletes, humans against pets.

At heart, the trolley problem is about deciding who lives, who dies — the kind of judgment that truly autonomous vehicles may eventually make. My “preferences” are revealed afterwards: I mostly save children and sacrifice pets. Pedestrians who are not jaywalking are spared and passengers expended. It is obvious: by choosing to climb into a driverless car, they should shoulder the burden of risk. As for my aversion to swerving, should caution not dictate that driverless cars are generally programmed to follow the road?

It is illuminating — until you see how your preferences stack up against everyone else.•
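
(A side note from me: the “preferences” MIT reveals at the end amount to a tally of which kinds of characters you tended to spare across the scenarios. Below is a minimal, purely illustrative Python sketch of that sort of tally; the scenario data and attribute names are hypothetical and mine, not MIT’s actual format or code.)

```python
# Toy sketch, not MIT's Moral Machine code: tally how often a respondent
# spares or sacrifices each kind of character across trolley-style scenarios.
from collections import Counter

# Hypothetical responses; each records which group the respondent chose to spare.
choices = [
    {"spare": ["child", "pedestrian"], "sacrifice": ["pet", "passenger"]},
    {"spare": ["pedestrian"], "sacrifice": ["jaywalker"]},
    {"spare": ["pregnant woman"], "sacrifice": ["athlete"]},
]

def tally_preferences(choices):
    """Count how often each attribute lands in the spared vs. sacrificed group."""
    spared, sacrificed = Counter(), Counter()
    for choice in choices:
        spared.update(choice["spare"])
        sacrificed.update(choice["sacrifice"])
    return spared, sacrificed

spared, sacrificed = tally_preferences(choices)
for attribute in sorted(set(spared) | set(sacrificed)):
    print(f"{attribute}: spared {spared[attribute]}, sacrificed {sacrificed[attribute]}")
```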


From Keith Naughton’s Businessweek article on legislating the end of human drivers:

This week, technology industry veterans proposed a ban on human drivers on a 150-mile (241-kilometer) stretch of Interstate 5 from Seattle to Vancouver. Within five years, human driving could be outlawed in congested city centers like London, on college campuses and at airports, said Kristin Schondorf, executive director of automotive transportation at consultant EY.

The first driver-free zones will be well-defined and digitally mapped, giving autonomous cars long-range vision and a 360-degree view of their surroundings, Schondorf said. The I-5 proposal would start with self-driving vehicles using car-pool lanes and expand over a decade to robot rides taking over the road during peak driving times.

“In city centers, you don’t even want non-automated vehicles; they would just ruin the whole point of why you have a smart city,” said Schondorf, a former engineer at Ford Motor Co. and Fiat Chrysler Automobiles NV. “It makes it a dumb city.”

John Krafcik, head of Google’s self-driving car project, said in an August interview with Bloomberg Businessweek that the tech giant is developing cars without steering wheels and gas or brake pedals because “we need to take the human out of the loop.” Ford Chief Executive Officer Mark Fields echoed that sentiment last month when he said the 113-year-old automaker would begin selling robot taxis with no steering wheel or pedals in 2021.•


The past isn’t necessarily prologue. Sometimes there’s a clean break from history. The Industrial Age transformed Labor, moving us from an agrarian culture to an urban one, providing new jobs that didn’t previously exist: advertising, marketing, car mechanic, etc. That doesn’t mean the Digital Age will follow suit. Much of manufacturing, construction, driving and other fields will eventually fall, probably sooner than later, and Udacity won’t be able to rapidly transition everyone into a Self-Driving Car Engineer. That type of upskilling can take generations to complete.

Not every job has to vanish, just enough of them to drive unemployment high enough to cause social unrest. And those who believe Universal Basic Income is a panacea must beware truly bad versions of such programs, which can end up harming more than helping.

Radical abundance doesn’t have to be a bad thing, of course. It should be a very good one. But we’ve never managed plenty in America very well, and this level would be on an entirely different scale.

Excerpts from two articles on the topic.


From Giles Wilkes’ Economist review of Ryan Avent’s The Wealth of Humans:

What saves this work from overreach is the insistent return to the problem of abundant human labour. The thesis is rather different from the conventional, Malthusian miserabilism about burgeoning humanity doomed to near-starvation, with demand always outpacing supply. Instead, humanity’s growing technical capabilities will render the supply of what workers produce, be that physical products or useful services, ever more abundant and with less and less labour input needed. At first glance, worrying about such abundance seems odd; how typical that an economist should find something dismal in plenty.

But while this may be right when it is a glut of land, clean water, or anything else that is useful, there is a real problem when it is human labour. For the role work plays in the economy is two-sided, responsible both for what we produce, and providing the rights to what is made. Those rights rely on power, and power in the economic system depends on scarcity. Rob human labour of its scarcity, and its position in the economic hierarchy becomes fragile.

A good deal of the Wealth of Humans is a discussion on what is increasingly responsible for creating value in the modern economy, which Mr Avent correctly identifies as “social capital”: that intangible matrix of values, capabilities and cultures that makes a company or nation great. Superlative businesses and nation states with strong institutions provide a secure means of getting well-paid, satisfying work. But access to the fruits of this social capital is limited, often through the political system. Occupational licensing, for example, prevents too great a supply of workers taking certain protected jobs, and border controls achieve the same at a national level. Exceptional companies learn how to erect barriers around their market. The way landholders limit further development provides a telling illustration: during the San Francisco tech boom, it was the owners of scarce housing who benefited from all that feverish innovation. Forget inventing the next Facebook, be a landlord instead.

Not everyone can, of course, which is the core problem the book grapples with. Only a few can work at Google, or gain a Singaporean passport, inherit property in London’s Mayfair or sell $20 cheese to Manhattanites. For the rest, there is a downward spiral: in a sentence, technological progress drives labour abundance, this abundance pushes down wages, and every attempt to fight it will encourage further substitution towards alternatives.•


From Duncan Jefferies’ Guardian article “The Automated City”:

Enfield council is going one step further – and her name is Amelia. She’s an “intelligent personal assistant” capable of analysing natural language, understanding the context of conversations, applying logic, resolving problems and even sensing emotions. She’s designed to help residents locate information and complete application forms, as well as simplify some of the council’s internal processes. Anyone can chat to her 24/7 through the council’s website. If she can’t answer something, she’s programmed to call a human colleague and learn from the situation, enabling her to tackle a similar question unaided in future.

Amelia is due to be deployed later this year, and is supposed to be 60% cheaper than a human employee – useful when you’re facing budget cuts of £56m over the next four years. Nevertheless, the council claims it has no plans to get rid of its 50 call centre workers.

The Singaporean government, in partnership with Microsoft, is also planning to roll out intelligent chatbots in several stages: at first they will answer simple factual questions from the public, then help them complete tasks and transactions, before finally responding to personalised queries.

Robinson says that, while artificially intelligent chatbots could have a role to play in some areas of public service delivery: “I think we overlook the value of a quality personal relationship between two people at our peril, because it’s based on life experience, which is something that technology will never have – certainly not current generations of technology, and not for many decades to come.”

But whether everyone can be “upskilled” to carry out more fulfilling work, and how many staff will actually be needed as robots take on more routine tasks, remains to be seen.•
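
(For what it’s worth, the escalation pattern described above is simple to picture: answer if you can, otherwise hand the query to a human colleague and remember the answer for next time. Here is a toy sketch, mine and not Enfield’s or its vendor’s actual system.)

```python
# Toy illustration of the "call a human colleague and learn" pattern described
# in the excerpt; the questions and answers here are invented for the example.
known_answers = {"when are bins collected": "Bins are collected on Tuesdays."}

def ask_human_colleague(question):
    # Stand-in for routing the query to a human member of staff.
    return f"[A human colleague answers: '{question}']"

def assistant(question):
    """Answer from memory if possible; otherwise escalate and remember the reply."""
    if question in known_answers:
        return known_answers[question]
    answer = ask_human_colleague(question)
    known_answers[question] = answer  # learn from the situation
    return answer

print(assistant("when are bins collected"))
print(assistant("how do I apply for a parking permit"))   # escalated to a human
print(assistant("how do I apply for a parking permit"))   # now answered unaided
```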


H.G. Wells hoped the people of Earth would someday live in a single world state overseen by a benign central government, if they weren’t first torn apart by yawning wealth inequality abetted by technology. He was correctly sure you couldn’t decouple the health of a society from the machines it depended on, which could have an outsize impact on economics.

In a smart essay for The Conversation, Simon John James argues that the author’s social predictions are just as important as his scientific ones. The opening:

No writer is more renowned for his ability to foresee the future than HG Wells. His writing can be seen to have predicted the aeroplane, the tank, space travel, the atomic bomb, satellite television and the worldwide web. His fantastic fiction imagined time travel, alien invasion, flights to the moon and human beings with the powers of gods.

This is what he is generally remembered for today, 150 years after his birth. Yet for all these successes, the futuristic prophecy on which Wells’s heart was most set – the establishment of a world state – remains unfulfilled. He envisioned a Utopian government which would ensure that every individual would be as well educated as possible (especially in science), have work which would satisfy them, and the freedom to enjoy their private life.

His interests in society and technology were closely entwined. Wells’s political vision was closely associated with the fantastic transport technologies that Wells is famous for: from the time machine to the Martian tripods to the moving walkways and aircraft in When the Sleeper Wakes. In Anticipations (1900), Wells prophesied the “abolition of distance” by real-life technologies such as the railway. He stressed that since the inhabitants of different nations could now travel towards each other more quickly and easily, it was all the more important for them to do so peacefully rather than belligerently.•


We’ve been plugging our heads into the Internet for 20 years, and so far the results have been mixed.

Unfettered information has not proven to be a path to greater truth. Conspiracists of all stripes are doing big business, Donald Trump is a serious contender for the Presidency and Americans think the country is a dangerous place when it’s never been safer. Something has been lost in translation.

Is the answer to go deeper into the cloud? In order to keep AI from rendering our species obsolete, Elon Musk wants us to connect our brains to a “benevolent AI.” The question is, would greater clarity attend greater intelligence? “Yes” doesn’t seem to be a certain answer.

From Joe Carmichael at Inverse:

Elon Musk says the key to preventing an artificial intelligence-induced apocalypse is to create an “A.I.-human symbiote.” It’s the fated neural lace, part of the “democratization of A.I. technology,” that connects our brains to the cloud. And when enough brains are connected to the cloud — when “we are the A.I. collectively” — the “evil dictator A.I.” will be powerless against us all, Musk told Y Combinator recently.

Yes, you read that right. Musk yearns for and believes in the singularity — the moment A.I. evolves beyond human control — so long as it comes out better for the humans than it does for the machines. Musk, the CEO of SpaceX and Tesla, is no stranger to out-there ideas: Among his many are that electric, autonomous cars are the future of transportation, that we can colonize Mars, that life is in all likelihood a grand simulation, and that Sundays are best spent baking cookies. (Okay, okay: He’s onto something with that last one.)

Along with running the show at SpaceX and Tesla, Musk co-chairs OpenAI, a nonprofit dedicated to precluding malicious A.I. and producing benevolent A.I. But that’s just one part of the equation; the other part, as he told Y Combinator CEO and fellow OpenAI chair Sam Altman on Thursday, is to incorporate this benevolent A.I. into the human brain. Once that works, he wants to incorporate it into all human brains — or at least those who wish to augment their au naturel minds.•


  • Somehow I don’t think toil will ever completely disappear from the human struggle, but then I’m a product of my time.
  • Limitless abundance is on the table as more and more work becomes automated, as is societal collapse. Distribution is the key, especially in the near future.
  • If post-scarcity were to become reality in the next several hundred years, humans would have to redefine why we’re here. I’m not so worried about that possibility if it happens gradually. I think we’re good at redirecting ourselves over time. It’s the crash into the new that can cause us trouble, and right now a collision seems more likely than a safe landing.
  • When we decided to head to the moon, many, even the U.S. President, thought travel into space would foster peace on Earth. Maybe it has helped somewhat and perhaps it will do so more in the future as we fan out through the solar system, but it wasn’t a cure-all for what ails us as a species. Neither would a work-free world make things perfect. We’ll still fight amongst ourselves and struggle to govern. It will likely be better, but it won’t be utopia. 

In a Guardian piece, Ryan Avent, author of The Wealth of Humans, writes of the potential pluses and perils of a work-free world. An excerpt:

Despite impressive progress in robotics and machine intelligence, those of us alive today can expect to keep on labouring until retirement. But while Star Trek-style replicators and robot nannies remain generations away, the digital revolution is nonetheless beginning to wreak havoc. Economists and politicians have puzzled over the struggles workers have experienced in recent decades: the pitiful rate of growth in wages, rising inequality, and the growing flow of national income to profits and rents rather than pay cheques. The primary culprit is technology. The digital revolution has helped supercharge globalisation, automated routine jobs, and allowed small teams of highly skilled workers to manage tasks that once required scores of people. The result has been a glut of labour that economies have struggled to digest.

Labour markets have coped the only way they are able: workers needing jobs have little option but to accept dismally low wages. Bosses shrug and use people to do jobs that could, if necessary, be done by machines. Big retailers and delivery firms feel less pressure to turn their warehouses over to robots when there are long queues of people willing to move boxes around for low pay. Law offices put off plans to invest in sophisticated document scanning and analysis technology because legal assistants are a dime a dozen. People continue to staff checkout counters when machines would often, if not always, be just as good. Ironically, the first symptoms of a dawning era of technological abundance are to be found in the growth of low-wage, low-productivity employment. And this mess starts to reveal just how tricky the construction of a workless world will be. The most difficult challenge posed by an economic revolution is not how to come up with the magical new technologies in the first place; it is how to reshape society so that the technologies can be put to good use while also keeping the great mass of workers satisfied with their lot in life. So far, we are failing.

Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes.•


Things that upset our expectations by presenting something far less desired than what was anticipated can cause us to become discombobulated, even overwhelmed by creepiness. Most of these things are completely harmless even if they’re distressing; they pose no physical threat, yet they still reach deep inside us and instill terror. Why? Complicating matters even further is that some of us seek out these viscerally unsettling feelings in films and haunted houses.

In “A Theory of Creepiness,” an excellent Aeon essay, philosopher David Livingstone Smith attempts to explain this phenomenon, delving deeply into the history of theories on the topic. He ultimately believes the root cause lies in “psychological essentialism.”

The opening:

Imagine looking down to see a severed hand scuttling toward you across the floor like a large, fleshy spider. Imagine a dog trotting up to you, amiably wagging its tail – but as it gets near you notice that, instead of a canine head, it has the head of an enormous green lizard. Imagine that you are walking through a garden where the vines all writhe like worms.

There’s no denying that each of these scenarios is frightening, but it’s not obvious why. There’s nothing puzzling about why being robbed at knifepoint, pursued by a pack of wolves, or trapped in a burning house are terrifying given the physical threat involved. The writhing vines, on the other hand, can’t hurt you though they make your blood run cold. As with the severed hand or the dog with the lizard head, you have the stuff of nightmares – creepy.

And creepiness – Unheimlichkeit, as Sigmund Freud called it – definitely stands apart from other kinds of fear. Human beings have been preoccupied with creepy beings such as monsters and demons since the beginning of recorded history, and probably long before. Even today in the developed world where science has banished the nightmarish beings that kept our ancestors awake at night, zombies, vampires and other menacing entities retain their grip on the human imagination in tales of horror, one of the most popular genres in film and TV.

Why the enduring fascination with creepiness?•


For decades, we’ve been promised a paperless office, yet my hands are still covered in bloody, weeping sores.

It’s not a surprise that reams and sheets and Post-its have persisted into the era of tablets and smartphones, just as horse-drawn carriages and trolleys shared the roads with automobiles during the early years of the latter’s introduction. (The final horse-drawn tram in NYC was still on the streets in 1917.)

So far, the many descendants of papyrus have persevered, showing no sign of truly disappearing from desks and portfolios, though Christopher Mims of the Wall Street Journal believes the decline may finally have begun. Electronic signatures and the like have for the first time in history led to a “steady decline of about 1% to 2% a year in office use of paper,” Mims writes. The downside to going paperless is that while paper leaves a trail, it isn’t prone to instantaneous surveillance the way our newer technologies are.

The opening:

Every year, America’s office workers print out or photocopy approximately one trillion pieces of paper. If you add in all the other paper businesses produce, the utility bills and invoices and bank statements and the like, the figure rises to 1.6 trillion. If you stacked all that paper up, it would be 18,000 times as high as Mount Everest. It would reach nearly halfway to the moon.

This is why HP Inc.’s acquisition of Samsung Electronics Co.’s printing and copying business last week makes sense. HP, says a company spokesman, has less than 5% of the market for big, high-throughput office copying machines. The company says the acquisition will incorporate Samsung’s technology in new devices, creating a big opportunity for growth.

Yet by all rights, this business shouldn’t exist. Forty years ago, at least, we were promised the paperless office. In a 1975 article in BusinessWeek, an analyst at Arthur D. Little Inc., predicted paper would be on its way out by 1980, and nearly dead by 1990.•
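
(The article’s stack arithmetic holds up, by the way, if you assume a typical sheet thickness of roughly 0.1 millimeters; that assumption is mine, not the Journal’s. A quick back-of-the-envelope check:)

```python
# Back-of-the-envelope check of the paper-stack figures quoted above.
# The 0.1 mm sheet thickness is an assumption of mine, not a figure from the article.
SHEETS_PER_YEAR = 1.6e12             # pieces of paper, per the excerpt
SHEET_THICKNESS_M = 1e-4             # 0.1 mm expressed in meters
EVEREST_HEIGHT_M = 8_848             # Mount Everest, in meters
EARTH_MOON_DISTANCE_M = 384_400_000  # average Earth-moon distance, in meters

stack_height_m = SHEETS_PER_YEAR * SHEET_THICKNESS_M
print(f"Stack height: {stack_height_m / 1000:,.0f} km")                                  # ~160,000 km
print(f"Times the height of Everest: {stack_height_m / EVEREST_HEIGHT_M:,.0f}")          # ~18,000
print(f"Fraction of the way to the moon: {stack_height_m / EARTH_MOON_DISTANCE_M:.2f}")  # ~0.42
```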


David Frost was a jester, then a king. After that, he was somewhere in between but always closer to royalty than risible. The Frost-Nixon interview saw to that.

Below is an excerpt from a more-timely-than-ever interview from Frost’s 1970 book, The Americans, an exchange about privacy the host had with Ramsey Clark, who served as U.S. Attorney General and is still with us, having done a Reddit Ask Me Anything just last year. At the outset of this segment, Clark is commenting on wiretapping, though he broadens his remarks to privacy in general.

Ramsey Clark:

[It’s] an immense waste, an immoral sort of thing.

David Frost:

Immoral in what sense?

Ramsey Clark:

Well, immoral in the sense that government has to be fair. Government has to concede the dignity of its citizens. If the government can’t protect its citizens with fairness, we’re in real trouble, aren’t we? And it’s always ironic to me that those who urge wiretapping strongest won’t give more money for police salaries to bring real professionalism and real excellence to law enforcement, which is so essential to our safety.

They want an easy way, they want a cheap way. They want a way that demeans the integrity of the individual, of all of our citizens. We can’t overlook the capabilities of our technology. We can destroy privacy, we really can. We have techniques now–and we’re only on the threshold of discovery–that can permeate brick walls three feet thick. 

David Frost:

How? What sorts of things?

Ramsey Clark:

You can take a laser beam and you put it on a resonant surface within the room, and you can pick up any vibration in that room, any sound within that room, from half a mile away.

David Frost:

I think that’s terrifying.

Ramsey Clark:

You know, we can do it with sound and lights, in other words, visual-audio invasion of privacy is possible, and if we really worked at it with the technology that we have, in a few years we could destroy privacy as we know it.

Privacy is pretty hard to retain anyway in a mass society, a highly urbanized society, and if we don’t discipline ourselves now to traditions of privacy and to traditions of the integrity of the individual, we can have a generation of youngsters quite soon that won’t know what it meant because it wasn’t here when they came.•


Just under two weeks ago, the BBC’s economics editor, Kamal Ahmed, sat down for a fascinating 90-minute Intelligence Squared conversation with Sapiens historian Yuval Noah Harari, whose futuristic new book, Homo Deus, was just released in the U.K. and has an early 2017 publication date in the U.S.

The Israeli historian believes that during the Industrial Revolution, humans intellectually, if not practically, figured out how to bring under control the triple threats of famine, plague and war. He says these things still bedevil us, if to a lesser degree, because of politics and incompetence, not due to ignorance. (I’ll also add madness, individual and the mass kind, to those causes.) That’s great, provided we continue to use knowledge to reduce counterproductive politics and incompetence and, ultimately, to further mitigate suffering.

For the first time in history, Harari asserts, wide abundance is now more of a threat to us than want, with obesity a greater threat than starvation. As he says, “McDonald’s and Coca-Cola pose a far greater threat to our lives than Al-Qaeda and the Islamic State.”

What will we do over the next century or two if we are able to shuffle off the old obstacles?

Harari says: “Try to overcome sickness and death, find the keys to happiness and upgrade humans into gods…in the literal sense…to design life according to our wishes. The main products of the human economy will no longer be vehicles and textiles and food and weapons. The main products will be bodies and brains and minds.

“The next phase will involve trying to gain mastery of what’s inside, of trying to decipher human biochemistry, our bodies, our brains, learning how to re-engineer them, learning how to manufacture them. This will require a lot of computing power. There’s no way the human brain has the capacity to decipher the secrets, to process the data that’s necessary to understand what’s happening inside. You need help from Artificial Intelligence and Big Data Systems, and this is what is happening already today. We see a merger of the biological sciences with computer sciences.”

Despite such promise, Harari doesn’t believe godliness is assuredly our ultimate destination. “The result may not be uploading humans into gods,” he says. “The result may be a massive useless class…the end of humanity.”

The academic acknowledges he’s not an expert in AI and technology and when he makes predictions about the future, he takes for granted the accuracy of the experts in those fields. He argues that “you don’t really need to know how a nuclear bomb works” to understand its impact.

Also discussed: technological unemployment, a potentially new and radical type of wealth inequality, the poisonous American political season and how the story of Native peoples selling Manhattan for colorful beads is recurring now, with citizens surrendering private information for “free email and some cute cat videos.”•


The American military has long dreamed of an automated force, though it was only a publicity stunt during the Jazz Age when robots “joined” the army. Since then, powerful tools have emerged and been miniaturized, ultimately sliding low-priced supercomputers into almost every pocket. Now when it comes to making our fighting machine an actual machine, anything seems possible–or may soon be.

Even if we remain in control of the strategic decisions governing these new warriors, the mere presence of “unkillable” battalions will likely come to bear on our thinking. Sooner or later, with several well-funded nations vying for supremacy, mission creep could remove the controls from human hands.

In “Our New War Machines,” Scott Beauchamp’s Baffler piece, the Army veteran and writer says this shift from carbon to silicon soldiers will result in “less democratic oversight of the American military.” The opening:

For an institution synonymous with tradition and continuity, the American military is in quite a radical state of flux. In just the six or so years since I left the Army, two major demographic shifts that might superficially appear unrelated (or even contradictory) have taken place within the Department of Defense. The first of these transformations involves opening up the ranks of service to previously excluded or marginalized populations: bringing women soldiers into all combat roles, allowing gay and lesbian personnel to serve openly, repealing the ban on trans people in the military.

The other major change, known in the defense industry and milblog enclaves as the Third Offset Strategy, involves taking the human element out of combat entirely. Third Offset focuses on using robots to automate warfare and reduce human (or at least American human) exposure to combat. So at the same moment that more people than ever are able to openly serve in the United States military and find the level of service best suited to their talent and abilities, fewer people are actually necessary for waging war. 

The “offset” terminology itself signals the projected scale of this transformation. In Pentagon-ese, an offset denotes a strategy aimed at making irrelevant a strategic advantage held by enemy forces. The first modern offset was the exploitation of America’s nuclear arsenal in the 1950s to compensate for the Warsaw Pact participants’ considerable manpower advantage. The second offset was likewise geared toward outsmarting the Soviet war machine once it had gained roughly equivalent nuclear capabilities; it involved things like stealth technology, precision-guided munitions, and ISR (intelligence, surveillance, and reconnaissance) platforms. But forty years on, our “near-peer” competitors, as the defense world refers to China and Russia, have developed their own versions of our second offset technologies. And so something new is needed; hence the Pentagon’s new infatuation with roboticized warfare.•

Tags:

Bernard Pomerance’s brilliant play The Elephant Man received equally bright stagings in New York in 1979 from Jack Hofsiss, then a 28-year-old wunderkind, who adeptly wrestled 21 short scenes about John Merrick, the 19th-century sideshow act who suffered from severe physical deformities, into a thing of moving beauty.

Sadly, Hofsiss just died. He was adept at all media, working also in TV and film, and his career continued even after he was paralyzed from the waist down in a diving accident six years after his Elephant Man triumph. Here’s a piece from Richard F. Shepard’s 1979 New York Times profile of Hofsiss as he was readying to move the drama from Off Broadway to Broadway:

Mr. Hofsiss is a man of his generation, that is, a man who can call the action with equal ease in stage, film or television. Yet there is something about Broadway that stirs the blood and seizes the imagination, even though one knows that Broadway is just another stage, maybe one with a bigger budget and higher prices.

“Each production you do has its realities and necessities,” Mr. Hofsiss said. “These are compounded on Broadway because of the commercial nature of the beast. There is a pressing professionalism on Broadway.”

“The Elephant Man” is the story of John Merrick, a Briton who lived in the late 1800’s and was a fleshy, prehensile monster of a man whose awful‐looking body encased a sharp and inquiring mind that developed quickly as opportunity allowed. The opportunity came from a doctor who interested himself in Merrick and brought him to the attention of upper‐crust curiosity seekers. As played by Philip Anglim, an actor with good and regular features, the monstrous nature of the deformity is not spelled out by specific makeup, but the sense of it is conveyed by the manner in which Mr. Anglim can contort his body, although even this is not a constant distraction during a performance.

All of this is by way of saying that this is a show that leaves much to a director’s imagination, backed by a good deal of self‐discipline. Mr. Hofsiss, who was born and reared in Brooklyn and received a classical, old‐style education from the Jesuits at Brooklyn Prep and a more freeform one at Georgetown University, where he majored in English and theater, felt equipped for the situation.

“This is an episodic play, 21 scenes that constantly shift the characters,” he said. “The script itself is purely words, containing no production instructions. In that way, it reads like Shakespeare: enter, blackout, and that’s all. Like Shakespeare, Bernard Pomerance wrote it for a theater he knew in England, where it opened in 1977. Everything is there in the script, but it’s as though you’re carving a sculpture out of a beautiful piece of stone, frightening but rewarding.”

Mr. Pomerance, an American who lives in England, came to New York only briefly before each opening, the one Off Broadway and the one on Broadway, and one might wonder whether author and director who are oceans apart in the flesh might not be in the same condition spiritually. But, “Bernard and I worked it out by telephone — he’s trusting of directors,” Mr. Hofsiss said.•


The third John Merrick during the original run was David Bowie. A 1980 episode of Friday Night…Saturday Morning featured Tim Rice interviewing the rock star about the play.

Tags: , , , ,

William James Sidis amazed the world, and then he disappointed it.

A Harvard student in 1910 at just 11 years old, he was considered the most astounding prodigy of early-20th-century America: a genius of mathematics and much more, reading at two and typing at three, trained methodically and relentlessly from birth by his father, a psychiatrist and professor. It was a lot to live up to. A dalliance with radical politics at the end of his teens threw him off the path to greatness, resulting in a sedition trial. In the aftermath, he quietly disappeared into an undistinguished life.

When it was learned in 1937 that Sidis was living a threadbare existence of no great import, merely a clerk, he was treated to a public accounting laced with no small amount of schadenfreude. He sued the New Yorker over an article by Jared L. Manley and James Thurber (gated) that detailed his failed promise. The magazine’s publishers paid him $3,000 to settle the case just prior to his death in 1944.

Three portraits below from the Brooklyn Daily Eagle chart Sidis’ uncommon life.


From March 20, 1910:


From May 5, 1919:



From July 18, 1944:


Aeon, which already presented a piece from Nicholas Carr’s new book, Utopia Is Creepy, has published another, a passage about biotechnology that wonders whether science will soon move too fast not only for legislation but for ethics as well.

The “philosophy is dead” assertion persistently batted around in scientific circles drives me bonkers, because we dearly need serious thinking about our likely commandeering of evolution. Carr doesn’t make that argument, but he rightly wonders whether ethics will be anything more than a “sideshow” when garages are used to hatch not just computer hardware or search engines but greatly altered or even wholly new life forms. The tools will be cheap, the “creativity” decentralized, the “products” attractive. As Freeman Dyson wrote nearly a decade ago: “These games will be messy and possibly dangerous.”

From Carr:

If to be transhuman is to use technology to change one’s body from its natural state, then we are all already transhuman. But the ability of human beings to alter and augment themselves might expand enormously in the decades ahead, thanks to a convergence of scientific and technical advances in such areas as robotics, bioelectronics, genetic engineering and pharmacology. Progress in the field broadly known as biotechnology promises to make us stronger, smarter and fitter, with sharper senses and more capable minds and bodies. And scientists can already use the much discussed gene-editing tool CRISPR, derived from bacterial immune systems, to rewrite genetic code with far greater speed and precision, and at far lower cost, than was possible before. In simple terms, CRISPR pinpoints a target sequence of DNA on a gene, uses a bacterial enzyme to snip out the sequence, and then splices a new sequence in its place. The inserted genetic material doesn’t have to come from the same species. Scientists can mix and match bits of DNA from different species, creating real-life chimeras.

As long ago as 1923, the English biologist J B S Haldane gave a lecture before the Heretics Society in Cambridge on how science would shape humanity in the future. ‘We can already alter animal species to an enormous extent,’ he observed, ‘and it seems only a question of time before we shall be able to apply the same principles to our own.’ Society would, Haldane felt sure, defer to the scientist and the technologist in defining the boundaries of the human species. ‘The scientific worker of the future,’ he concluded, ‘will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission, and proud of it.’

The ultimate benefit of transhumanism, argues Nick Bostrom, professor of philosophy at the University of Oxford, and one of the foremost proponents of radical human enhancement, is that it expands human potential, giving individuals greater freedom ‘to shape themselves and their lives according to their informed wishes’. Transhumanism unchains us from our nature. Critics take a darker view, suggesting that biological and genetic tinkering is more likely to demean or even destroy the human race than elevate it.

The ethical debate is profound, but it seems fated to be a sideshow.•
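
To make the “simple terms” description of CRISPR above a bit more concrete, here is a deliberately toy Python sketch of the find-snip-splice idea, treating a genome as a plain string. The function name and the sequences are invented for illustration; nothing here models real molecular biology or any actual gene-editing toolkit.

# A toy, string-level analogy of the find-snip-splice idea described above.
# Invented for illustration only; real CRISPR editing is vastly more complex.
def edit_sequence(genome: str, target: str, replacement: str) -> str:
    """Locate a target sequence, cut it out, and splice a replacement in its place."""
    position = genome.find(target)          # pinpoint the target sequence on the "gene"
    if position == -1:
        return genome                       # target not found: nothing to snip
    snipped = genome[:position] + genome[position + len(target):]   # snip out the sequence
    return snipped[:position] + replacement + snipped[position:]    # splice in the new sequence

# Example with made-up sequences:
print(edit_sequence("ATGCCGTAAGCT", target="CGTA", replacement="TTTT"))   # prints ATGCTTTTAGCT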

Tags:


Read the fine print. That’s always been good advice, but it’s never been taken seriously when it comes to the Internet, a fast-moving, seemingly ephemeral medium that doesn’t invite slowing down to contemplate. So companies attach a consent form about cookies to their sites and apps. No one reads it, yet that unread click-through leaves you without legal recourse when your laptop or smartphone is plundered for all your personal info. It quietly legitimizes surveillance capitalism.

In an excellent and detailed Locus Magazine essay, Cory Doctorow explains how this arrangement, which has already had serious consequences, will snake its way into every corner of our lives once the Internet of Things turns every item into a computer: cars and lamps and soda machines and TV screens. “Notice and consent is an absurd legal fiction,” he writes, acknowledging that it persists despite its ridiculous premise and invasive nature.

An excerpt:

The coming Internet of Things – a terrible name that tells you that its proponents don’t yet know what it’s for, like ‘‘mobile phone’’ or ‘‘3D printer’’ – will put networking capability in everything: appliances, lightbulbs, TVs, cars, medical implants, shoes, and garments. Your lightbulb doesn’t need to be able to run apps or route packets, but the tiny, commodity controllers that allow smart lightswitches to control the lights anywhere (and thus allow devices like smart thermostats and phones to integrate with your lights and home security systems) will come with full-fledged computing capability by default, because that will be more cost-efficient than customizing a chip and system for every class of devices. The thing that has driven computers so relentlessly, making them cheaper, more powerful, and more ubiquitous, is their flexibility, their character of general-purposeness. That fact of general-purposeness is inescapable and wonderful and terrible, and it means that the R&D that’s put into making computers faster for aviation benefits the computers in your phone and your heart-monitor (and vice-versa). So everything’s going to have a computer.

You will ‘‘interact’’ with hundreds, then thousands, then tens of thousands of computers every day. The vast majority of these interactions will be glancing, momentary, and with computers that have no way of displaying terms of service, much less presenting you with a button to click to give your ‘‘consent’’ to them. Every TV in the sportsbar where you go for a drink will have cameras and mics and will capture your image and process it through facial-recognition software and capture your speech and pass it back to a server for continuous speech recognition (to check whether you’re giving it a voice command). Every car that drives past you will have cameras that record your likeness and gait, that harvest the unique identifiers of your Bluetooth and other short-range radio devices, and send them to the cloud, where they’ll be merged and aggregated with other data from other sources.

In theory, if notice-and-consent was anything more than a polite fiction, none of this would happen. If notice-and-consent are necessary to make data-collection legal, then without notice-and-consent, the collection is illegal.

But that’s not the realpolitik of this stuff: the reality is that when every car has more sensors than a Google Streetview car, when every TV comes with a camera to let you control it with gestures, when every medical implant collects telemetry that is collected by a ‘‘services’’ business and sold to insurers and pharma companies, the argument will go, ‘‘All this stuff is both good and necessary – you can’t hold back progress!’’•

Tags:
