Excerpts



The past isn’t necessarily prologue. Sometimes there’s a clean break from history. The Industrial Age transformed labor, moving us from an agrarian culture to an urban one and providing new jobs that didn’t previously exist: advertising, marketing, car mechanic, etc. That doesn’t mean the Digital Age will follow suit. Much of manufacturing, construction, driving and other fields will eventually fall, probably sooner rather than later, and Udacity won’t be able to rapidly transition everyone into Self-Driving Car Engineers. That type of upskilling can take generations to complete.

Not every job has to vanish, just enough to push unemployment scarily high and cause social unrest. And those who believe Universal Basic Income is a panacea must beware truly bad versions of such programs, which can end up harming more than helping.

Radical abundance doesn’t have to be a bad thing, of course. It should be a very good one. But we’ve never managed plenty in America very well, and this level would be on an entirely different scale.

Excerpts from two articles on the topic.


From Giles Wilkes’ Economist review of Ryan Avent’s The Wealth of Humans:

What saves this work from overreach is the insistent return to the problem of abundant human labour. The thesis is rather different from the conventional, Malthusian miserabilism about burgeoning humanity doomed to near-starvation, with demand always outpacing supply. Instead, humanity’s growing technical capabilities will render the supply of what workers produce, be that physical products or useful services, ever more abundant and with less and less labour input needed. At first glance, worrying about such abundance seems odd; how typical that an economist should find something dismal in plenty.

But while this may be right when it is a glut of land, clean water, or anything else that is useful, there is a real problem when it is human labour. For the role work plays in the economy is two-sided, responsible both for what we produce, and providing the rights to what is made. Those rights rely on power, and power in the economic system depends on scarcity. Rob human labour of its scarcity, and its position in the economic hierarchy becomes fragile.

A good deal of The Wealth of Humans is a discussion of what is increasingly responsible for creating value in the modern economy, which Mr Avent correctly identifies as “social capital”: that intangible matrix of values, capabilities and cultures that makes a company or nation great. Superlative businesses and nation states with strong institutions provide a secure means of getting well-paid, satisfying work. But access to the fruits of this social capital is limited, often through the political system. Occupational licensing, for example, prevents too great a supply of workers taking certain protected jobs, and border controls achieve the same at a national level. Exceptional companies learn how to erect barriers around their market. The way landholders limit further development provides a telling illustration: during the San Francisco tech boom, it was the owners of scarce housing who benefited from all that feverish innovation. Forget inventing the next Facebook, be a landlord instead.

Not everyone can, of course, which is the core problem the book grapples with. Only a few can work at Google, or gain a Singaporean passport, inherit property in London’s Mayfair or sell $20 cheese to Manhattanites. For the rest, there is a downward spiral: in a sentence, technological progress drives labour abundance, this abundance pushes down wages, and every attempt to fight it will encourage further substitution towards alternatives.•


From Duncan Jefferies’ Guardian article “The Automated City”:

Enfield council is going one step further – and her name is Amelia. She’s an “intelligent personal assistant” capable of analysing natural language, understanding the context of conversations, applying logic, resolving problems and even sensing emotions. She’s designed to help residents locate information and complete application forms, as well as simplify some of the council’s internal processes. Anyone can chat to her 24/7 through the council’s website. If she can’t answer something, she’s programmed to call a human colleague and learn from the situation, enabling her to tackle a similar question unaided in future.

Amelia is due to be deployed later this year, and is supposed to be 60% cheaper than a human employee – useful when you’re facing budget cuts of £56m over the next four years. Nevertheless, the council claims it has no plans to get rid of its 50 call centre workers.

The Singaporean government, in partnership with Microsoft, is also planning to roll out intelligent chatbots in several stages: at first they will answer simple factual questions from the public, then help them complete tasks and transactions, before finally responding to personalised queries.

Robinson says that, while artificially intelligent chatbots could have a role to play in some areas of public service delivery, “I think we overlook the value of a quality personal relationship between two people at our peril, because it’s based on life experience, which is something that technology will never have – certainly not current generations of technology, and not for many decades to come.”

But whether everyone can be “upskilled” to carry out more fulfilling work, and how many staff will actually be needed as robots take on more routine tasks, remains to be seen.•


Republican National Convention: Day Four

If Perez Hilton had sex with Lee Atwater’s corpse–and I have no proof that didn’t happen–the resulting offspring might have been Milo Yiannopoulos, an alt-right, Kostabi-ish performance artist and self-promoter given to the ugliest politics. The Breitbart News dipshit supports Donald Trump’s anti-immigrant policies even though he’s a Brit working for a U.S. media outlet, taking a job away from a Real American™. A toad who spends his time harassing Leslie Jones on Twitter, Yiannopoulos may not have any actual authority, but in a decentralized media landscape his outrages find outlets.

Another extremist potentially far closer to genuine power is Donald Trump Jr., the winner of several regional Patrick Bateman look-alike contests and an amateur eugenicist who thinks evolution has smiled on a stallion like himself. With his hideous hotelier of a father trying to moderate during the general election–at least by his bizarre standards–the chip off the old blockhead has been pandering to white supremacists at a furious clip, with a Holocaust reference here and a comparison of Syrian refugees to poison Skittles there. He is deplorable.

Excerpts below from articles about two of Donald Trump’s children.


From Joel Stein at Bloomberg Businessweek:

At 4 p.m., Milo Yiannopoulos puts on a pair of glasses for the first time today. He examines himself in a mirror to see if he wants to add a gray suit to his purchases, which will push his bill to almost $12,000 at Savile Row’s Gieves & Hawkes. He’s buying clothes for his next round of college speeches in, as his bus announces in huge letters next to five giant photos of him, the Dangerous Faggot Tour. It resumed at Texas Tech University on Sept. 12 and is scheduled to hit campuses including Columbia, Dartmouth, the University of Alabama, and the University of California at Berkeley before concluding at UCLA in February. “I have ridiculously bad eyesight, but I have learned to live with an impressionistic view. Life is a Monet painting,” he says, taking off his glasses. “I wander around enjoying myopia.”

Yiannopoulos is the 31-year-old British tech editor and star writer for Breitbart News, where he’s the loudest defender of the new, Trump-led ultraconservatism, standing athwart history, shouting to stop immigrants, feminists, political correctness, and any non-Western culture. Yiannopoulos gained his initial fame as the general in a massive troll war over misogyny in the video game world, known as Gamergate. He was permanently banned from Twitter in July after the social media company said his almost 350,000 followers were responsible for harassing Ghostbusters star Leslie Jones. He still has nearly 275,000 subscribers to his YouTube speeches, and CNBC and Fox turn to him as the most notorious spokesman for the alt-right, the U.S. version of Europe’s far right (led at various times by England’s Nigel Farage, France’s Marine Le Pen, Austria’s Jorg Haider, the Netherlands’ Geert Wilders, and Germany’s Frauke Petry). Their followers’ politics are almost exactly the same: They’re angry about globalization—culturally even more than economically. They’re angry about political correctness guilting them about insensitivity to women, minorities, gays, transgender people, the disabled, the sick—the everyone-but-them. They’re angry about feminism. They don’t like immigrants. They don’t like military intervention. They aren’t into free trade. They don’t like international groups such as the European Union, United Nations, or NATO—even the International Olympic Committee. They admire the bravado of authoritarians, especially Vladimir Putin. Some are white supremacists. Most enjoy a good conspiracy theory.

But members of the alt-right, unlike their old, frustrated European counterparts, are less focused on policy than on performance. Their MO usually involves pissing people off with hypermasculine taunts. They call establishment and even Tea Party Republicans “cuckservatives”—because they are cuckolded by the Left. They do most of their acting out online, often by organizing on 4chan or Reddit and then trolling targets on Twitter. The alt-right is a new enough phenomenon that in August, Republican House Speaker Paul Ryan—running against an alt-right candidate in a primary—mistakenly called it “alt-conservatism” on a radio show. “It’s a nasty, virulent strain of something,” he said. “I don’t even know what it is, other than that it isn’t us. It isn’t what we believe in.”

As Donald J. Trump has become the candidate of the alt-right, Breitbart News has become the movement’s voice.•


From David A. Graham at the Atlantic:

Donald Trump Jr. isn’t just his father’s namesake or dark-haired doppelgänger. He is increasingly emerging as his father’s id—or perhaps simply his father’s emissary to the alt-right.

Over the last few weeks, Trump has made an effort to tone down his rhetoric and try to avoid the most outrageous comments, the ones that endeared him to the racists, misogynists, and xenophobes who gather in darker corners of the internet. Ironically, this switch has come since he installed Stephen Bannon, the CEO of Breitbart, a leading alt-right outlet, as his campaign CEO. It has also produced positive results, with Trump reaching his high point of the campaign with just about 50 days to go.

But it’s still important to maintain the base, and that role seems to have fallen to Donald Trump Jr. Trump fils has been increasingly catering to the fringe right in his social-media statements and interviews.•


H.G. Wells hoped the people of Earth would someday live in a single world state overseen by a benign central government–if they weren’t first torn apart by yawning wealth inequality abetted by technology. He was correctly sure you couldn’t decouple the health of a society from the machines it depended on, which could have an outsize impact on economics.

In a smart essay for The Conversation, Simon John James argues that the author’s social predictions are as important as his scientific ones. The opening:

No writer is more renowned for his ability to foresee the future than HG Wells. His writing can be seen to have predicted the aeroplane, the tank, space travel, the atomic bomb, satellite television and the worldwide web. His fantastic fiction imagined time travel, alien invasion, flights to the moon and human beings with the powers of gods.

This is what he is generally remembered for today, 150 years after his birth. Yet for all these successes, the futuristic prophecy on which Wells’s heart was most set – the establishment of a world state – remains unfulfilled. He envisioned a Utopian government which would ensure that every individual would be as well educated as possible (especially in science), have work which would satisfy them, and the freedom to enjoy their private life.

His interests in society and technology were closely entwined. Wells’s political vision was closely associated with the fantastic transport technologies that Wells is famous for: from the time machine to the Martian tripods to the moving walkways and aircraft in When the Sleeper Wakes. In Anticipations (1900), Wells prophesied the “abolition of distance” by real-life technologies such as the railway. He stressed that since the inhabitants of different nations could now travel towards each other more quickly and easily, it was all the more important for them to do so peacefully rather than belligerently.•



We’ve been plugging our heads into the Internet for 20 years, and so far the results have been mixed.

Unfettered information has not proven to be a path to greater truth. Conspiracists of all stripes are doing big business, Donald Trump is a serious contender for the Presidency and Americans think the country is a dangerous place when it’s never been safer. Something has been lost in translation.

Is the answer to go deeper into the cloud? In order to keep AI from obviating our species, Elon Musk wants us to connect our brains to a “benevolent AI.” The question is, would greater clarity attend greater intelligence? The answer doesn’t seem to be a definite “yes.”

From Joe Carmichael at Inverse:

Elon Musk says the key to preventing an artificial intelligence-induced apocalypse is to create an “A.I.-human symbiote.” It’s the fated neural lace, part of the “democratization of A.I. technology,” that connects our brains to the cloud. And when enough brains are connected to the cloud — when “we are the A.I. collectively” — the “evil dictator A.I.” will be powerless against us all, Musk told Y Combinator recently.

Yes, you read that right. Musk yearns for and believes in the singularity — the moment A.I. evolves beyond human control — so long as it comes out better for the humans than it does for the machines. Musk, the CEO of SpaceX and Tesla, is no stranger to out-there ideas: Among his many are that electric, autonomous cars are the future of transportation, that we can colonize Mars, that life is in all likelihood a grand simulation, and that Sundays are best spent baking cookies. (Okay, okay: He’s onto something with that last one.)

Along with running the show at SpaceX and Tesla, Musk co-chairs OpenAI, a nonprofit dedicated to precluding malicious A.I. and producing benevolent A.I. But that’s just one part of the equation; the other part, as he told Y Combinator CEO and fellow OpenAI chair Sam Altman on Thursday, is to incorporate this benevolent A.I. into the human brain. Once that works, he wants to incorporate it into all human brains — or at least those who wish to augment their au naturel minds.•



  • Somehow I don’t think toil will ever completely disappear from the human struggle, but then I’m a product of my time.
  • Limitless abundance is on the table as more and more work becomes automated, but so is societal collapse. Distribution is the key, especially in the near future.
  • If post-scarcity were to become reality in the next several hundred years, humans would have to redefine why we’re here. I’m not so worried about that possibility if it happens gradually. I think we’re good at redirecting ourselves over time. It’s the crash into the new that can cause us trouble, and right now a collision seems more likely than a safe landing.
  • When we decided to head to the moon, many, even the U.S. President, thought travel into space would foster peace on Earth. Maybe it has helped somewhat and perhaps it will do so more in the future as we fan out through the solar system, but it wasn’t a cure-all for what ails us as a species. Neither would a work-free world make things perfect. We’ll still fight amongst ourselves and struggle to govern. It will likely be better, but it won’t be utopia. 

In a Guardian piece, Ryan Avent, author of The Wealth of Humans, writes of the potential pluses and perils of a work-free world. An excerpt:

Despite impressive progress in robotics and machine intelligence, those of us alive today can expect to keep on labouring until retirement. But while Star Trek-style replicators and robot nannies remain generations away, the digital revolution is nonetheless beginning to wreak havoc. Economists and politicians have puzzled over the struggles workers have experienced in recent decades: the pitiful rate of growth in wages, rising inequality, and the growing flow of national income to profits and rents rather than pay cheques. The primary culprit is technology. The digital revolution has helped supercharge globalisation, automated routine jobs, and allowed small teams of highly skilled workers to manage tasks that once required scores of people. The result has been a glut of labour that economies have struggled to digest.

Labour markets have coped the only way they are able: workers needing jobs have little option but to accept dismally low wages. Bosses shrug and use people to do jobs that could, if necessary, be done by machines. Big retailers and delivery firms feel less pressure to turn their warehouses over to robots when there are long queues of people willing to move boxes around for low pay. Law offices put off plans to invest in sophisticated document scanning and analysis technology because legal assistants are a dime a dozen. People continue to staff checkout counters when machines would often, if not always, be just as good. Ironically, the first symptoms of a dawning era of technological abundance are to be found in the growth of low-wage, low-productivity employment. And this mess starts to reveal just how tricky the construction of a workless world will be. The most difficult challenge posed by an economic revolution is not how to come up with the magical new technologies in the first place; it is how to reshape society so that the technologies can be put to good use while also keeping the great mass of workers satisfied with their lot in life. So far, we are failing.

Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes.•



Things that upset our expectations, presenting something far less desired than what was anticipated, can leave us discombobulated, even overwhelmed by creepiness. Most of these things are completely harmless; they pose no physical threat, yet they reach deep inside us and instill terror. Why? Complicating matters even further is that some of us seek out these viscerally unsettling feelings in films and haunted houses.

In “A Theory of Creepiness,” an excellent Aeon essay, philosopher David Livingstone Smith attempts to explain this phenomenon, delving deeply into the history of theories on the topic. He ultimately believes the root cause lies in “psychological essentialism.”

The opening:

Imagine looking down to see a severed hand scuttling toward you across the floor like a large, fleshy spider. Imagine a dog trotting up to you, amiably wagging its tail – but as it gets near you notice that, instead of a canine head, it has the head of an enormous green lizard. Imagine that you are walking through a garden where the vines all writhe like worms.

There’s no denying that each of these scenarios is frightening, but it’s not obvious why. There’s nothing puzzling about why being robbed at knifepoint, pursued by a pack of wolves, or trapped in a burning house are terrifying given the physical threat involved. The writhing vines, on the other hand, can’t hurt you though they make your blood run cold. As with the severed hand or the dog with the lizard head, you have the stuff of nightmares – creepy.

And creepiness – Unheimlichkeit, as Sigmund Freud called it – definitely stands apart from other kinds of fear. Human beings have been preoccupied with creepy beings such as monsters and demons since the beginning of recorded history, and probably long before. Even today in the developed world where science has banished the nightmarish beings that kept our ancestors awake at night, zombies, vampires and other menacing entities retain their grip on the human imagination in tales of horror, one of the most popular genres in film and TV.

Why the enduring fascination with creepiness?•



For decades, we’ve been promised a paperless office, yet my hands are still covered in bloody, weeping sores.

It’s not a surprise that reams and sheets and post-its have persisted into the era of tablets and smartphones, as horse-drawn carriages and trolleys shared the roads with automobiles during the early years of the latter’s introduction. (The final horse-driven tram in NYC was still on the streets in 1917.)

So far, the many descendants of papyrus have persevered, showing no sign of truly disappearing from desks and portfolios, though Christopher Mims of the Wall Street Journal believes the decline may finally have begun. Electronic signatures and the like have for the first time in history led to a “steady decline of about 1% to 2% a year in office use of paper,” Mims writes. The upside to the clutter: while paper leaves a trail, it isn’t prone to instantaneous surveillance the way our newer technologies are.

The opening:

Every year, America’s office workers print out or photocopy approximately one trillion pieces of paper. If you add in all the other paper businesses produce, the utility bills and invoices and bank statements and the like, the figure rises to 1.6 trillion. If you stacked all that paper up, it would be 18,000 times as high as Mount Everest. It would reach nearly halfway to the moon.

This is why HP Inc.’s acquisition of Samsung Electronics Co.’s printing and copying business last week makes sense. HP, says a company spokesman, has less than 5% of the market for big, high-throughput office copying machines. The company says the acquisition will incorporate Samsung’s technology in new devices, creating a big opportunity for growth.

Yet by all rights, this business shouldn’t exist. Forty years ago, at least, we were promised the paperless office. In a 1975 article in BusinessWeek, an analyst at Arthur D. Little Inc. predicted paper would be on its way out by 1980, and nearly dead by 1990.•


David Frost was a jester, then a king. After that, he was somewhere in between but always closer to royalty than risible. The Frost-Nixon interview saw to that.

Below is an excerpt from a more-timely-than-ever interview from Frost’s 1970 book, The Americans, an exchange about privacy the host had with Ramsey Clark, who served as U.S. Attorney General and is still with us, having done a Reddit Ask Me Anything just last year. At the outset of this segment, Clark is commenting on wiretapping, though he broadens his remarks to privacy in general.

Ramsey Clark:

[It’s] an immense waste, an immoral sort of thing.

David Frost:

Immoral in what sense?

Ramsey Clark:

Well, immoral in the sense that government has to be fair. Government has to concede the dignity of its citizens. If the government can’t protect its citizens with fairness, we’re in real trouble, aren’t we? And it’s always ironic to me that those who urge wiretapping strongest won’t give more money for police salaries to bring real professionalism and real excellence to law enforcement, which is so essential to our safety.

They want an easy way, they want a cheap way. They want a way that demeans the integrity of the individual, of all of our citizens. We can’t overlook the capabilities of our technology. We can destroy privacy, we really can. We have techniques now–and we’re only on the threshold of discovery–that can permeate brick walls three feet thick. 

David Frost:

How? What sorts of things?

Ramsey Clark:

You can take a laser beam and you put it on a resonant surface within the room, and you can pick up any vibration in that room, any sound within that room, from half a mile away.

David Frost:

I think that’s terrifying.

Ramsey Clark:

You know, we can do it with sound and lights, in other words, visual-audio invasion of privacy is possible, and if we really worked at it with the technology that we have, in a few years we could destroy privacy as we know it.

Privacy is pretty hard to retain anyway in a mass society, a highly urbanized society, and if we don’t discipline ourselves now to traditions of privacy and to traditions of the integrity of the individual, we can have a generation of youngsters quite soon that won’t know what it meant because it wasn’t here when they came.•


Edward Albee, one of the best playwrights America has ever produced, just died.

At the end of his privileged youth, the future dramatist worked delivering telegrams and selling music albums at Bloomingdale’s, and he didn’t care to advance much technologically beyond the record player and the typewriter. Albee despised Digital Era tools, never wanting to own a smartphone or look at the Internet, haughtily sneering at them the way intelligentsia in an earlier age derided TV as the “idiot box.” His New York Times obituary includes this 2012 quote from the writer: “All of my plays are about people missing the boat, closing down too young, coming to the end of their lives with regret at things not done.” Whether that applies to his defiant technophobia depends on your perspective. At any rate, it worked for him.

From Claudine Ko’s 2010 Vice Q&A:

Question:

Do you have a specific writing space?

Edward Albee:

I do my writing in my head. There are tables around for whenever I feel like writing something down. I don’t care where I do it. It’s called a manuscript, so I write by hand.

Question:

That’s pretty old school.

Edward Albee:

I don’t believe in all those machines.

Question:

And the internet?

Edward Albee:

I know it exists. I don’t use it.

Question:

Do you have a cell phone?

Edward Albee:

No. It’s a waste of time. I might as well watch television. I walk along the streets of New York and I find people bumping into each other, bumping into things, and they have these things in their ears or in their face. They’re not seeing anything of the real world.•


Among the many vile, disgusting things about the Presidential campaign of Donald Trump is the love for autocrats expressed by the candidate and his followers. In exchange for a few words of flattery directed at the hideous hotelier, Vladimir Putin has been treated as if he were a hero rather than the preening capo he is. The thugocrat is celebrated for being “strong” when he’s actually foolishly leading his country into the past, trying to make Russia great again in a way that will never work. Sound familiar?

From Courtney Weaver at the Financial Times:

From his perch in southern California, Jeff Grimord knows Vladimir Putin is no saint.

A 71-year-old executive recruiter in Newport Beach, Mr Grimord acknowledges the Russian president is often accused of “nasty things”. “Journalists who criticise him are found dead. A little bit of him is still a communist at heart.” Yet despite it all, he cannot help but feel enamoured of the Russian strongman.

“I think he’s the only leader of a large, major country that stands out these days,” Mr Grimord, a supporter of Donald Trump, explained in a recent interview. “He acts like he’s acting in his country’s interest and makes no bones about it.” 

Among the many curveballs of the US election, here is one more to add to the list. After years of being pilloried by western leaders, criticised by human rights groups and targeted by sanctions, Vladimir Putin has a small but sizeable fan club in certain corners of the US, particularly among voters who back Donald Trump. Perhaps more improbably, that fan club appears to be growing.•



Just under two weeks ago, the BBC’s economics editor, Kamal Ahmed, sat down for a fascinating 90-minute Intelligence Squared conversation with Sapiens historian Yuval Noah Harari, whose futuristic new book, Homo Deus, was just released in the U.K. and has an early 2017 publication date in the U.S.

The Israeli historian believes that since the Industrial Revolution, humans have intellectually, if not practically, figured out how to bring under our control the triple threats of famine, plague and war. He says these things still bedevil us, if to a lesser degree, because of politics and incompetence, not ignorance. (I’ll also add madness, individual and the mass kind, to those causes.) That’s great, provided we continue to use knowledge to reduce counterproductive politics and incompetence and, ultimately, to further mitigate suffering.

For the first time in history, Harari asserts, wide abundance is now more of a threat to us than want, with obesity a greater threat than starvation. As he says, “McDonald’s and Coca-Cola pose a far greater threat to our lives than Al-Qaeda and the Islamic State.”

What will we do over the next century or two if we are able to shuffle off the old obstacles?

Harari says: “Try to overcome sickness and death, find the keys to happiness and upgrade humans into gods…in the literal sense…to design life according to our wishes. The main products of the human economy will no longer be vehicles and textiles and food and weapons. The main products will be bodies and brains and minds.

“The next phase will involve trying to gain mastery of what’s inside, of trying to decipher human biochemistry, our bodies, our brains, learning how to re-engineer them, learning how to manufacture them. This will require a lot of computing power. There’s no way the human brain has the capacity to decipher the secrets, to process the data that’s necessary to understand what’s happening inside. You need help from Artificial Intelligence and Big Data Systems, and this is what is happening already today. We see a merger of the biological sciences with computer sciences.”

Despite such promise, Harari doesn’t believe godliness is assuredly our ultimate destination. “The result may not be upgrading humans into gods,” he says. “The result may be a massive useless class…the end of humanity.”

The academic acknowledges he’s not an expert in AI and technology and when he makes predictions about the future, he takes for granted the accuracy of the experts in those fields. He argues that “you don’t really need to know how a nuclear bomb works” to understand its impact.

Also discussed: technological unemployment, a potentially new and radical type of wealth inequality, the poisonous American political season and how the Native peoples’ sale of Manhattan for colorful beads is being reenacted today, with citizens surrendering private information for “free email and some cute cat videos.”•



The American military has long dreamed of an automated force, though it was only a publicity stunt during the Jazz Age when robots “joined” the army. Since then, powerful tools emerged and were miniaturized, ultimately sliding low-priced supercomputers into almost every pocket. Now when it comes to making our fighting machine an actual machine, anything seems possible–or may soon be.

Even if we remain in control of the strategic decisions governing these new warriors, the mere presence of “unkillable” battalions will likely come to bear on our thinking. Sooner or later, with several well-funded nations vying for supremacy, mission creep could remove the controls from human hands.

In “Our New War Machines,” Scott Beauchamp’s Baffler piece, the Army veteran and writer says this shift from carbon to silicon soldiers will result in “less democratic oversight of the American military.” The opening:

For an institution synonymous with tradition and continuity, the American military is in quite a radical state of flux. In just the six or so years since I left the Army, two major demographic shifts that might superficially appear unrelated (or even contradictory) have taken place within the Department of Defense. The first of these transformations involves opening up the ranks of service to previously excluded or marginalized populations: bringing women soldiers into all combat roles, allowing gay and lesbian personnel to serve openly, repealing the ban on trans people in the military.

The other major change, known in the defense industry and milblog enclaves as the Third Offset Strategy, involves taking the human element out of combat entirely. Third Offset focuses on using robots to automate warfare and reduce human (or at least American human) exposure to combat. So at the same moment that more people than ever are able to openly serve in the United States military and find the level of service best suited to their talent and abilities, fewer people are actually necessary for waging war. 

The “offset” terminology itself signals the projected scale of this transformation. In Pentagon-ese, an offset denotes a strategy aimed at making irrelevant a strategic advantage held by enemy forces. The first modern offset was the exploitation of America’s nuclear arsenal in the 1950s to compensate for the Warsaw Pact participants’ considerable manpower advantage. The second offset was likewise geared toward outsmarting the Soviet war machine once it had gained roughly equivalent nuclear capabilities; it involved things like stealth technology, precision-guided munitions, and ISR (intelligence, surveillance, and reconnaissance) platforms. But forty years on, our “near-peer” competitors, as the defense world refers to China and Russia, have developed their own versions of our second offset technologies. And so something new is needed; hence the Pentagon’s new infatuation with roboticized warfare.•


Bernard Pomerance’s brilliant play The Elephant Man received equally bright stagings in 1979 in New York, from Jack Hofsiss, then a 28-year-old wunderkind who adeptly wrestled 21 short scenes about the 19th-century sideshow act John Merrick, who suffered from severe physical deformities, into a thing of moving beauty.

Sadly, Hofsiss just died. He was adept at all media, working also in TV and film, and his career continued even after he was paralyzed from the waist down in a diving accident six years after his Elephant Man triumph. Here’s a piece from Richard F. Shepard’s 1979 New York Times profile of Hofsiss as he was readying to move the drama from Off-Broadway to on:

Mr. Hofsiss is a man of his generation, that is, a man who can call the action with equal ease in stage, film or television. Yet there is something about Broadway that stirs the blood and seizes the imagination, even though one knows that Broadway is just another stage, maybe one with a bigger budget and higher prices.

“Each production you do has its realities and necessities,” Mr. Hofsiss said. “These are compounded on Broadway because of the commercial nature of the beast. There is a pressing professionalism on Broadway.”

“The Elephant Man” is the story of John Merrick, a Briton who lived in the late 1800’s and was a fleshy, prehensile monster of a man whose awful‐looking body encased a sharp and inquiring mind that developed quickly as opportunity allowed. The opportunity came from a doctor who interested himself in Merrick and brought him to the attention of upper‐crust curiosity seekers. As played by Philip Anglim, an actor with good and regular features, the monstrous nature of the deformity is not spelled out by specific makeup, but the sense of it is conveyed by the manner in which Mr. Anglim can contort his body, although even this is not a constant distraction during a performance.

All of this is by way of saying that this is a show that leaves much to a director’s imagination, backed by a good deal of self‐discipline. Mr. Hofsiss, who was born and reared in Brooklyn and received a classical, old‐style education from the Jesuits at Brooklyn Prep and a more freeform one at Georgetown University, where he majored in English and theater, felt equipped for the situation.

“This is an episodic play, 21 scenes that constantly shift the characters,” he said. “The script itself is purely words, containing no production instructions. In that way, it reads like Shakespeare: enter, blackout, and that’s all. Like Shakespeare, Bernard Pomerance wrote it for a theater he knew in England, where it opened in 1977. Everything is there in the script, but it’s as though you’re carving a sculpture out of a beautiful piece of stone, frightening but rewarding.”

Mr. Pomerance, an American who lives in England, came to New York only briefly before each opening, the one Off Broadway and the one on Broadway, and one might wonder whether author and director who are oceans apart in the flesh might not be in the same condition spiritually. But, “Bernard and I worked it out by telephone — he’s trusting of directors,” Mr. Hofsiss said.•


The third John Merrick during the original run was David Bowie. A 1980 episode of Friday Night…Saturday Morning featured Tim Rice interviewing the rock star about the play.



Aeon, which already presented a piece from Nicholas Carr’s new book, Utopia Is Creepy, has another, a passage about biotechnology which wonders if science will soon move too fast not only for legislation but for ethics as well.

The “philosophy is dead” assertion that’s persistently batted around in scientific circles drives me bonkers because we dearly need consideration about our likely commandeering of evolution. Carr doesn’t make that argument but instead rightly wonders if ethics is likely to be more than a “sideshow” when garages aren’t used to just hatch computer hardware or search engines but greatly altered or even new life forms. The tools will be cheap, the “creativity” decentralized, the “products” attractive. As Freeman Dyson wrote nearly a decade ago: “These games will be messy and possibly dangerous.”

From Carr:

If to be transhuman is to use technology to change one’s body from its natural state, then we are all already transhuman. But the ability of human beings to alter and augment themselves might expand enormously in the decades ahead, thanks to a convergence of scientific and technical advances in such areas as robotics, bioelectronics, genetic engineering and pharmacology. Progress in the field broadly known as biotechnology promises to make us stronger, smarter and fitter, with sharper senses and more capable minds and bodies. And scientists can already use the much discussed gene-editing tool CRISPR, derived from bacterial immune systems, to rewrite genetic code with far greater speed and precision, and at far lower cost, than was possible before. In simple terms, CRISPR pinpoints a target sequence of DNA on a gene, uses a bacterial enzyme to snip out the sequence, and then splices a new sequence in its place. The inserted genetic material doesn’t have to come from the same species. Scientists can mix and match bits of DNA from different species, creating real-life chimeras.

As long ago as 1923, the English biologist J B S Haldane gave a lecture before the Heretics Society in Cambridge on how science would shape humanity in the future. ‘We can already alter animal species to an enormous extent,’ he observed, ‘and it seems only a question of time before we shall be able to apply the same principles to our own.’ Society would, Haldane felt sure, defer to the scientist and the technologist in defining the boundaries of the human species. ‘The scientific worker of the future,’ he concluded, ‘will more and more resemble the lonely figure of Daedalus as he becomes conscious of his ghastly mission, and proud of it.’

The ultimate benefit of transhumanism, argues Nick Bostrom, professor of philosophy at the University of Oxford, and one of the foremost proponents of radical human enhancement, is that it expands human potential, giving individuals greater freedom ‘to shape themselves and their lives according to their informed wishes’. Transhumanism unchains us from our nature. Critics take a darker view, suggesting that biological and genetic tinkering is more likely to demean or even destroy the human race than elevate it.

The ethical debate is profound, but it seems fated to be a sideshow.•



Read the fine print. That’s always been good advice, but it’s never been taken seriously when it comes to the Internet, a fast-moving, seemingly ephemeral medium that doesn’t invite slowing down to contemplate. So companies attach consent forms about cookies to their sites and apps. No one reads them, yet clicking through quietly surrenders any legal recourse when your laptop or smartphone is plundered for all your personal info. Surveillance capitalism runs on that nominal consent.

In an excellent and detailed Locus Magazine essay, Cory Doctorow explains how this arrangement, which has already had serious consequences, will snake its way into every corner of our lives once the Internet of Things turns every item into a computer: cars and lamps and soda machines and TV screens. “Notice and consent is an absurd legal fiction,” he writes, acknowledging that it persists despite its ridiculous premise and invasive nature.

An excerpt:

The coming Internet of Things – a terrible name that tells you that its proponents don’t yet know what it’s for, like ‘‘mobile phone’’ or ‘’3D printer’’ – will put networking capability in everything: appliances, light­bulbs, TVs, cars, medical implants, shoes, and garments. Your lightbulb doesn’t need to be able to run apps or route packets, but the tiny, com­modity controllers that allow smart lightswitches to control the lights anywhere (and thus allow devices like smart thermostats and phones to integrate with your lights and home security systems) will come with full-fledged computing capability by default, because that will be more cost-efficient than customizing a chip and system for every class of devices. The thing that has driven computers so relentlessly, making them cheaper, more powerful, and more ubiquitous, is their flexibility, their character of general-purposeness. That fact of general-purposeness is inescapable and wonderful and terrible, and it means that the R&D that’s put into making computers faster for aviation benefits the computers in your phone and your heart-monitor (and vice-versa). So every­thing’s going to have a computer.

You will ‘‘interact’’ with hundreds, then thou­sands, then tens of thousands of computers every day. The vast majority of these interactions will be glancing, momentary, and with computers that have no way of displaying terms of service, much less presenting you with a button to click to give your ‘‘consent’’ to them. Every TV in the sportsbar where you go for a drink will have cameras and mics and will capture your image and process it through facial-recognition software and capture your speech and pass it back to a server for continu­ous speech recognition (to check whether you’re giving it a voice command). Every car that drives past you will have cameras that record your like­ness and gait, that harvest the unique identifiers of your Bluetooth and other short-range radio devices, and send them to the cloud, where they’ll be merged and aggregated with other data from other sources.

In theory, if notice-and-consent was anything more than a polite fiction, none of this would hap­pen. If notice-and-consent are necessary to make data-collection legal, then without notice-and-consent, the collection is illegal.

But that’s not the realpolitik of this stuff: the reality is that when every car has more sensors than a Google Streetview car, when every TV comes with a camera to let you control it with gestures, when every medical implant collects telemetry that is collected by a ‘‘services’’ business and sold to insurers and pharma companies, the argument will go, ‘‘All this stuff is both good and necessary – you can’t hold back progress!’’•



Peter Diamandis, who dreams of the world’s first trillionaire, believes we’ll become a “world of haves and super-haves.” On some levels, that would be great: an end to poverty, greatly reduced disease, more opportunity and education for even those of us who have the least.

But there are some problems with that thinking. One is that even if there were some level of prosperity for everyone, great wealth inequality would still allow some to rig the system for themselves. Another: look at America, a rich country in which everyone could certainly have food, shelter, a decent standard of living and good education and health care. That isn’t the case, and our infant mortality rate is shockingly high. I mean, we already have abundance. Distribution is really the challenge.

Post-scarcity would be wonderful, but it must be managed well. Diamandis seems a good-hearted person who would likely agree with that sentiment, but his macro vision for the future seems as flawed as his belief that he has a good shot at a multi-century life. Many of his dreams for tomorrow seem driven by Silicon Valley insularity and irrational exuberance.

An excerpt from Leia Parker’s excellent Business Journals Q&A with the Singularitarian:

Question:

How old are you today?

Peter Diamandis:

55.

Question:

With longevity, are we at a point now with medicine that people alive today could live far longer than the average current life expectancy?

Peter Diamandis:

I think I’ve got a shot at living a multi-hundred-year lifespan. For me, it’s living long enough to live forever. We have incredible discoveries going on. There are incredible breakthroughs going on right now in stem cell science, so there’s no reason to believe that we will not see a longevity revolution coming our way.

Question:

You have written that we will enter an age of abundance because of exponential technological progress. If many people’s jobs are replaced by automation in this era of abundance, you have suggested people could receive a basic income so everyone could then work on their passions. Do you anticipate that happening in our lifetime?

Peter Diamandis:

Oh yeah, I mean, you’ve got multiple countries working on that or testing it right now. Canada’s passed that.

Two things are going on in that regard. One is, we don’t realize it, but we’re very rapidly demonetizing the cost of living. The cost of things are dropping rapidly as we digitize, demonetize and democratize. So autonomous Ubers will be five to 10 times cheaper than owning and driving a car. Solar energy — we just set a record low of solar energy at 2.91 cents per kilowatt hour out of South America, and so we are going to see solar dropping in cost precipitously.

Imagine a world where our basic needs — energy and water, healthcare and education — are effectively free.

So the cost of living is dropping. Not that you wouldn’t be able to spend money on all kinds of things if you had it, but the fundamental Maslow’s needs are going to be met through what you can call a sort of technological socialism, where technology is taking care of those things for you.

And, it’s going to be interesting to see where humanity spends its time. Is it going to be in the virtual world, the gaming world?

Question:

Who would pay the basic income? Would it be governments or wealthy individuals?

Peter Diamandis:

I think it’s going to be governments through wealthy individuals. I think we’re going to see–

Question:

Like a taxing and redistribution?

Peter Diamandis:

Ultimately, I think that is likely to be what happens.•


It’s amusing that the truest thing Hillary Clinton has said during the election season–the “basket of deplorables” line–has caused her grief. The political rise of the hideous hotelier Donald Trump, from his initial announcement of his candidacy forward, has always been about identity politics (identity: white), with the figures of the forgotten, struggling Caucasians of Thomas Frank narratives more noise than signal.

The American middle class has legitimately taken a big step back for four decades, owing to a number of factors (globalization, computerization, tax rates, etc.), but the latest numbers show a huge rebound for middle- and lower-class Americans under President Obama and his worker-friendly policies. 

Perhaps that progress will be short-lived, with automation and robotics poised to alter the employment landscape numerous times in the coming decades. But the Digital Age challenges have been completely absent from Trump’s rhetoric (if he even knows about them). And his stated policies will reverse the gains made by the average American over the last eight years. His ascent has always been about color and not the color of money.

From Sam Fleming at the Financial Times:

Household incomes surged last year in the US, suggesting American middle class fortunes are improving in defiance of the dark rhetoric that has dominated the presidential election campaign.

A strengthening labour market, higher wages and persistently subdued inflation pushed real median household income up 5.2 per cent between 2014 and 2015 to $56,516, the Census Bureau said on Tuesday. This marked the first gain since the eve of the global financial crisis in 2007 and the first time that inflation-adjusted growth exceeded 5 per cent since the bureau’s records began in 1967.

But the increase in 2015 still brought incomes to just 1.6 per cent below the levels they were hovering at the year before the recession started and they remain 2.4 per cent below their peak in 1999. Income gains were largest at the bottom and middle of the income scale relative to the top, reducing income inequality. 

The US election debate has been dominated by the story of long-term income stagnation, with analysts attributing the rise of Donald Trump in part to the shrinking ranks of America’s middle class, rising inequality and the impact of globalisation on household incomes. Tuesday’s strong numbers, which cover the year in which the Republican candidate launched his campaign, cast that narrative in a new light.•



Not that long ago, it was considered bold–foolhardy, even–to predict the arrival of a more technological future 25 years down the road. That was the gambit the Los Angeles Times made in 1988 when it published “L.A. 2013,” a feature that imagined the next-level life of a family of four and their robots.

We may not yet be at the point when “whole epochs will pass, cultures rise and fall, between a telephone call and a reply“–I mean, who talks on the phone anymore?–but it doesn’t require a quarter of a century for the new to shock us now. In the spirit of our age, Alissa Walker of Curbed LA works from the new report “Urban Mobility in the Digital Age” when imagining the city’s remade transportation system in just five years. Regardless of what Elon Musk promises, I’ll bet the over on autonomous cars arriving in a handful of years, but it will be transformative when it does materialize, and it’ll probably happen sooner than later.

The opening:

It’s 2021, and you’re making your way home from work. You jump off the Expo line (which now travels from Santa Monica to Downtown in 20 minutes flat), and your smartwatch presents you with options for the final two miles to your apartment. You could hop on Metro’s bike share, but you decide on a tiny, self-driving bus that’s waiting nearby. As you board, it calculates a custom route for you and the handful of other passengers, then drops you off at your doorstep in a matter of minutes. You walk through your building’s old parking lot—converted into a vegetable garden a few years ago—and walk inside in time to put your daughter to bed.

That’s the vision for Los Angeles painted in Urban Mobility in the Digital Age, a new report that provides a roadmap for the city’s transportation future. The report, which was shared with Curbed LA and has been posted online, addresses the city’s plan to combine self-driving vehicles (buses included) with on-demand sharing services to create a suite of smarter, more efficient transit options.

But it’s not just the way that we commute that will change, according to the report. Simply being smarter about how Angelenos move from one place to another brings additional benefits: alleviating vehicular congestion, potentially eliminating traffic deaths, and tackling climate change—where transportation is now the fastest-growing contributor to greenhouse gases. And it will also impact the way the city looks, namely by reclaiming the streets and parking lots devoted to the driving and storing of cars that sit motionless 95 percent of the time.

The report is groundbreaking because it makes LA the first U.S. city to specifically address policies around self-driving cars.•



We should always err on the side of whistleblowers like Edward Snowden because they’ve traditionally served an important function in our democracy, but that doesn’t mean the former NSA employee has changed America for the better–or much at all.

In the wake of 9/11, most in the country wanted to feel safe and were A-OK with the government taking liberties (figuratively and literally). Big Brother became the favorite sibling. The White House position and policy has shifted somewhat since Snowden went rogue, but I believe from here on in we’re locked in a cat-and-mouse game among government, corporations and citizens, with surveillance and leaks a permanent part of the landscape. The technology we have–and the even more powerful tools we’ll have in the future–almost demands such an arrangement. We’re all increasingly inside a machine now, one that moves much faster than legislation. That’s the new abnormal.

The Financial Times set up an interview with Snowden conducted by Alan Rusbridger, former editor-in-chief of the Guardian, the publication that broke the story. The subject is unsurprisingly much more hopeful about the impact of his actions than I am. An excerpt:

Alan Rusbridger:

It’s now, what, three years since the revelations?

Edward Snowden:

It’s been more than three years. June 2013.

Alan Rusbridger:

Tell me first how the world has changed since then. What’s changed as a result of what you did, from your perspective? Not from your personal life, but the story you revealed.

Edward Snowden:

The main thing is that our culture has changed, right? There are many different ways of approaching this. One is we look at the structural changes, we look at the policy changes, we look at the fact that the day the Guardian published the story, for example, the entire establishment leaped out of their chairs and basically said ‘This is untrue, it’s not right, there’s nothing to see here’. You know, ‘Nobody’s listening to your phone calls’, as the president said very early on. Do you remember? I think he sort of spoke with the voice of the establishment in all of these different countries here, saying, ‘I think we’ve drawn the right balance’.

Then you move on to later in the same year when the very first court verdicts began to come forward and they found that these programmes were ‘unlawful, likely unconstitutional’ — that’s a direct quote — and ‘Orwellian in their scope’ — again a quote. And this trend continued in many different courts. The government realising that these programmes could not be legally sustained and would have to be amended if they were to keep any of these powers at all. And to avoid a precedent that they would consider damaging, which is that the Supreme Court basically locks the power of mass surveillance away from them forever, they need a pretty substantial pivot, whereby January of 2014 the president of the US said that, well, of course you could never condone what I did. He believes that this has made us stronger as a nation and that he was going to be recommending changes to a law of Congress, which then later, again this is Congress, they don’t do anything quickly, they actually did amend the law.

Now, they would not likely have made these changes to law on their own without the involvement of the Courts. But these are all three branches of government in the US completely changing their position. In March of 2013, the Supreme Court flushed the case, right, saying that this is a state secret, we can’t talk about it and you can’t prove that you were spied on. Then suddenly when everyone can prove that they had been spied on, we see that the law changed. So that’s sort of the policy side of looking at that. And people can look at the substance there and say, ‘This is significant’. Even though it didn’t solve the problem, it’s a start and, more importantly, it empowers people, it empowers the public; it shows that, for the first time in four years, we can actually start to impose more oversight on intelligence agencies, on spies, rather than giving them a free pass to do whatever, simply because we’re scared, which is understandable but clearly not ethical.

Then there’s the other way of looking at it, which is in terms of public awareness.•



Right now the intrusion of Digital Age surveillance is still (mostly) external to our bodies, though computers have shrunk small enough to slide into our pockets. If past is prologue, the future progression would move this hardware inside ourselves, the way pacemakers for the heart were originally exterior machines until they could fit in our chests. Even if no such mechanisms were necessary and we manipulated health, longevity and appearance through biological means, the thornier ethical questions would probably remain.

A month ago, I published a post about Eve Herold’s new book, Beyond Human, when the opening was excerpted in Vice. Here’s a piece from “Transhumanism Is Inevitable,” Ronald Bailey’s review of the title in the libertarian magazine Reason:

Herold thinks these technological revolutions will be a good thing, but that doesn’t mean she’s a Pollyanna. Throughout the book, she worries about how becoming ever more dependent on our technologies will affect us. She foresees a world populated by robots at our beck and call for nearly any task. Social robots will monitor our health, clean our houses, entertain us, and satisfy our sexual desires. Isolated users of perfectly subservient robots could, Herold cautions, “lose important social skills such as unselfishness and the respect for the rights of others.” She further asks, “Will we still need each other when robots become our nannies, friends, servants, and lovers?”

There is also the question of how centralized institutions, as opposed to empowered individuals, might use the new tech. Behind a lot of the coming enhancements you’ll find the U.S. military, which funds research to protect its warriors and make them more effective at fighting. As Herold reports, the Defense Advanced Research Projects Agency (DARPA) is funding research on a drug that would keep people awake and alert for a week. DARPA is also behind work on brain implants designed to alter emotions. While that technology could help people struggling with psychological problems, it might also be used to eliminate fear or guilt in soldiers. Manipulating soldiers’ emotions so they will more heedlessly follow orders is ethically problematic, to say the least.

Similar issues haunt Herold’s discussion of the technologies, such as neuro-enhancing drugs and implants, that may help us build better brains. Throughout history, the ultimate realm of privacy has been our unspoken thoughts. The proliferation of brain sensors and implants might open up our thoughts to inspection by our physicians, friends, and family—and also government officials and corporate marketers.

Yet Herold effectively rebuts bioconservative arguments against the pursuit and adoption of human enhancement.•


A big problem with data analysis is that when it goes really deep, it’s not so easy to know why it’s working, if it’s working. Algorithms can be skewed consciously or not to favor some and keep us in separate silos, and the findings of artificial neural networks can be mysterious to even machine-learning professionals. We already base so much on silicon crunching numbers and are set to bet the foundations of our society on these operations, so that’s a huge issue. Another one: The efficacy of neural nets may be inhibited by more transparent approaches. Two pieces on the topic follow.


The opening of Aaron M. Bornstein’s Nautilus essay “Is Artificial Intelligence Permanently Inscrutable?”:

Dmitry Malioutov can’t say much about what he built.

As a research scientist at IBM, Malioutov spends part of his time building machine learning systems that solve difficult problems faced by IBM’s corporate clients. One such program was meant for a large insurance corporation. It was a challenging assignment, requiring a sophisticated algorithm. When it came time to describe the results to his client, though, there was a wrinkle. “We couldn’t explain the model to them because they didn’t have the training in machine learning.”

In fact, it may not have helped even if they were machine learning experts. That’s because the model was an artificial neural network, a program that takes in a given type of data—in this case, the insurance company’s customer records—and finds patterns in them. These networks have been in practical use for over half a century, but lately they’ve seen a resurgence, powering breakthroughs in everything from speech recognition and language translation to Go-playing robots and self-driving cars.

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.•


From Rana Foroohar’s Time article about mathematician and author Cathy O’Neil:

O’Neil sees plenty of parallels between the usage of Big Data today and the predatory lending practices of the subprime crisis. In both cases, the effects are hard to track, even for insiders. Like the dark financial arts employed in the run up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. “The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,” says O’Neil.

The effects are just as pernicious. Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.

In higher education, the use of algorithmic models that rank colleges has led to an educational arms race where schools offer more and more merit- rather than need-based aid to students who’ll make their numbers (thus rankings) look better. At the same time, for-profit universities can troll for data on economically or socially vulnerable would-be students and find their “pain points,” as a recruiting manual for one for-profit university, Vatterott, describes it, in any number of online questionnaires or surveys they may have unwittingly filled out. The schools can then use this info to funnel ads to welfare mothers, recently divorced and out-of-work people, those who’ve been incarcerated or even those who’ve suffered injury or a death in the family.

Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since “they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.” Whereas the poor engage more with faceless educators and employers, “the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.”•


You go to war with the whole world and you lose. Just ask Germany.

The hit-and-run thuggery of ISIS differed from that of Al-Qaeda and other terrorist organizations in one important way: ISIS actually put stakes in the ground, conquering cities and laying claim to land. It sought permanence in a material way, aspiring to become a state.

It didn’t work out. Once other nations processed what was happening and started pushing back, ISIS began what will likely be a permanent retreat. Having grown more desperate, the terrorist organization began striking abroad, offering a few wild haymakers before the end of a losing fight.

When ISIS as a fledgling nation is permanently disabled, will the land it ruled, if briefly, in Iraq and Syria become a breeding ground for new terror groups and sectarian violence? Given the recent and distant history of the region, what’s past may be prologue. In a well-written Wall Street Journal “Saturday Essay,” Yaroslav Trofimov argues that trouble won’t end when ISIS is extinguished, with factions that fought together against it perhaps turning on one another. An excerpt:

It is easy to think that Islamic State is still on the march. It isn’t. Over the past year, the territory under its control—once roughly the size of the U.K.—has shrunk rapidly in both Iraq and Syria. Islamic State has lost the Iraqi cities of Ramadi and Fallujah, the ancient Syrian city of Palmyra and the northern Syrian countryside bordering on Turkey. Its militants in Libya were ousted in recent weeks from their headquarters in Sirte. In coming months, the group will face a battle that it is unlikely to win for its two most important remaining centers—Mosul in Iraq and Raqqa in Syria.

It may be tempting fate to ask the question, but it must be asked all the same: What happens once Islamic State falls? The future of the Middle East may well depend on who fills the void that it leaves behind both on the ground and, perhaps more important, in the imagination of jihadists around the world.

As we mark the 15th anniversary this weekend of the terrorist attacks of 9/11, one likely consequence of the demise of ISIS (as Islamic State in Iraq and Syria is often known) will be to revive its ideological rival, al Qaeda, which opposed Mr. Baghdadi’s ambitions from the start. Al Qaeda may yet unleash a fresh wave of terrorist attacks in the West and elsewhere—as may the remnants of Islamic State, eager to show that they still matter.

“Simply having ISIS go away doesn’t mean that the jihadist problem goes away,” said Daniel Benjamin of Dartmouth College, who served as the State Department’s counterterrorism coordinator during the Obama administration. “Eliminating the caliphate will be an achievement—but more likely, it will be just the end of the beginning rather than the beginning of the end.”•


When Silicon Valley stalwart Marc Andreessen directs mocking comments or tweets at those who fear the Second Machine Age could lead to mass unemployment, even societal upheaval, he usually depicts them as something akin to Luddites, pointing out that the Industrial Revolution allowed for the creation of more and better jobs. Of course, history doesn’t necessarily repeat itself. 

In a Bloomberg View column, Noah Smith says that while the statistics say a robot revolution hasn’t yet arrived and may or may not emerge, relying on the past to predict the future isn’t sound strategy. An excerpt:

Predicting whether machines will make the bulk of humans useless is beyond my capability. The future of technology is much too hard to predict. But I can say this: one of the main arguments often used to rule out this worrisome possibility is very shaky. If you think that history proves that humans can’t be replaced, think again.

I see this argument all the time. Because humans have never been replaced before, people say, it can’t happen in the future. Many cite the example of the Luddites, British textile workers in the early 19th century who protested against the introduction of technologies that could do their jobs more cheaply. In retrospect, the Luddites look foolish. As industrial technology improved, skilled workers were not impoverished — instead, they found ever-more-lucrative jobs that made use of new tools. As a result, “Luddite” is now a term of derision for those who doubt the power of technology to improve the world.

A more sophisticated version of this argument is offered by John Lewis of the Bank of England, in a recent blog post. Reviewing economic history, he shows what most people intuitively understand — new technology has complemented human labor rather than replacing it. Indeed, as Lewis points out, most macroeconomic models assume that the relationship between technology and humans is basically fixed.

That’s the problem, though — economic assumptions are right, until they’re not. The future isn’t always like the past. Sometimes it breaks in radical ways.•


If you like your human beings to come with fingers and toes, you may be disquieted by the Reddit Ask Me Anything conducted by Andrew Hessel, a futurist and a “biotechnology catalyst” at Autodesk. It’s undeniably one of the smartest and headiest AMAs I’ve ever read, with the researcher fielding questions about a variety of flaws and illnesses plaguing people that biotech may be able to address, even eliminate. Of course, depending on your perspective, humanness itself can be seen as a failing, something to be “cured.” A few exchanges follow.


Question:

Four questions, all on the same theme:

1) What is the probable timeframe for when we’ll see treatments that will slow senescence?

2) Do we have a realistic estimate, in years or decades (or bigger, fingers crossed!), on the life extension potential of such treatments?

3) Is it realistic that such treatments would also include senescence reversal (de-ageing)?

4) Is there any indication at present as to what kind of form these treatments will take, particularly with regards to their invasiveness?

Andrew Hessel:

1) We are already seeing some interesting results here — probably the most compelling I’ve seen is in programming individually senescent cells to die. More work needs to be done. 2) In humans, no. We are already long-lived. Experiments that lead to longer life can’t be rushed — the results come at the end! 3) TBD — but I can’t see why not 4) Again, TBD, but I think it will involve tech like viruses and nanoparticles that can target cells / tissue with precision.

Overall, trying to extend our bodies may be throwing good effort at a bad idea. In some ways, the important thing is to be able to extract and transfer experience and memory (data). We do this when we upgrade our phones, computers, etc.


Question:

Can Cas9/CRISPR edit any gene that controls physical appearance in an adult human? Say, for example, it’s the gene that controls the growth of a tail. Will reactivating it actually cause a tail to grow in an already mature human?

Andrew Hessel:

It’s a powerful editing technology that could potentially allow changing appearance. The problem is editing a fully developed organism is new territory. Also, there are the challenges of reprogramming millions or billions of cells! But it’s only a four-year-old technology; lots of room to explore and learn.


Question:

I’m an artist who’s curious about using democratized genetic engineering techniques (i.e. CRISPR) to make new and aesthetically interesting plant life, like roses the size of sunflowers or lilies and irises in shapes and colors nobody has ever seen. Is this something that is doable by a non-scientist with the tools and understanding available today? I know there are people inserting phosphorescence into plant genes – I’d like to go one further and actually start designing flowers, or at least mucking around with the code to see what kinds of (hopefully) pretty things emerge. I’d love your thoughts on this… Thanks!

Andrew Hessel:

I think it’s totally reasonable to start thinking about this. CRISPR allows for edits of genomes and using this to explore size/shape/color etc of plants is fascinating. As genome engineering (including whole genome synthesis) tech becomes cheaper and faster, doing more extensive design work will be within reach. The costs need to drop dramatically though — unless you’re a very rich artist. :-) As for training, biodesign is already so complicated that you need software tools to help. The software tools are going to improve a lot in the future, allowing designers to focus more on what they want to make, rather than the low-level details of how to make it. But we still have a ways to go on this front. We still don’t have great programming tools for single cells, let alone more complex organisms. But they WILL come.


Question:

So my question is, do you think there will be a “biological singularity,” similar to Ray Kurzweil’s “technological singularity?”

Will there be a time in the near future where the exponential changes in genetic engineering (synthetic biology, dna synthesis, genome sequencing, etc.) will have such a profound impact on human civilization that it is difficult to predict what the future will be like?

Andrew Hessel:

I think it’s already hard to figure out where the future is going. Seriously. Who would have predicted politics to play out this way this year? But yes, I think Kurzweil calls it right that the combination of accelerating computation, biotech, etc creates a technological future that is hard to imagine. This said, I don’t think civilization will change that quickly. Computers haven’t changed the fundamentals of life, just the details of how we go about our days. Biotech changes can be no less profound, but they take longer to code, test, and implement. Overall, though, I think we come out of this century with a lot more capabilities than we brought into it!•


Desperation sounds funny when expressed in words. A scream would probably be more coherent.

Nobody really knows how to remake newspapers and magazines of a bygone era to be profitable in this one, and the great utility they provided–the Fourth Branch of Government the traditional media was called–is not so slowly slipping away. What’s replaced much of it online has been relatively thin gruel, with the important and unglamorous work of covering local politics unattractive in a viral, big-picture machine.

All I know is when Condé Nast is using IBM’s Watson to help advertisers “activate” the “right influencers” for their “brands,” we’re all in trouble.

From The Drum:

With top titles like Vogue, Vanity Fair, Glamour and GQ, Conde Nast’s partnership heralds a key step merging targeted influencer marketing and artificial intelligence in the fashion and lifestyle industry. The platform will be used to help advertiser clients improve how they connect with audiences over social media and gain measurable insights into how their campaigns resonate.

“Partnering with Influential to leverage Watson’s cognitive capabilities to identify the right influencers and activate them on the right campaigns gives our clients an advantage and increases our performance, which is paramount in today’s distributed content world,” said Matt Starker, general manager, digital strategy and initiatives at Condé Nast. “We engage our audiences in innovative ways, across all platforms, and this partnership is another step in that innovation.”

By analyzing unstructured data from an influencer’s social media feed and identifying key characteristics that resonate with a target demographic, the Influential platform uses IBM’s personality insights to match an influencer with, for example, a beauty brand that focuses on self-enhancement, imagination and trust. This analysis helps advertisers identify the right influencers by homing in on previously hard-to-measure metrics–like how they are perceived by their followers, and how well their specific personality fits the personality of the brand.•
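The "fit" the excerpt describes boils down to comparing a brand's target trait profile against each influencer's inferred traits. A hedged sketch of that matching step, using cosine similarity; the trait names, scores, and influencer names are all invented, and this is not the actual Watson API, just the general idea:

```python
from math import sqrt

# Hypothetical sketch of the matching idea described above: score how well
# an influencer's inferred personality traits fit a brand's target profile.
# In the real pipeline, a service like Watson Personality Insights would
# derive trait scores from an influencer's social media text; here the
# scores are simply made up.

def cosine(a, b):
    """Cosine similarity between two equal-length trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

TRAITS = ["self_enhancement", "imagination", "trust"]

# A beauty brand emphasizing the three traits named in the excerpt.
brand = [0.9, 0.8, 0.7]

influencers = {
    "influencer_a": [0.85, 0.75, 0.80],  # broadly similar profile
    "influencer_b": [0.20, 0.90, 0.10],  # imaginative, low on the rest
}

best = max(influencers, key=lambda name: cosine(brand, influencers[name]))
print(best)  # influencer_a: closest overall match to the brand profile
```

The "previously hard-to-measure metrics" in the excerpt would be extra dimensions in these vectors; the matching step itself stays the same.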
