Science/Tech


Gary Silverman of the Financial Times penned a great piece in February about Alabama encountering the false promise of a manufacturing revival, with the jobs, uncoupled from union protections and divorced from good policy, often proving dangerous, contracted and low-paying.

These are the scraps actually on offer with Trump’s vow to return America to factory-town glory. Workers face fierce competition for substandard positions, which dwindle as automation advances, while poorer states offer such aggressive incentives to attract plants that their tax bases aren’t much enhanced in the bargain.

In Bloomberg Businessweek, Peter Waldman treads on the same territory with “Inside Alabama’s Auto Jobs Boom,” which makes it clear the “New Detroits” dotting the Southern landscape aren’t much like the classic model. An excerpt:

Alabama has been trying on the nickname “New Detroit.” Its burgeoning auto parts industry employs 26,000 workers, who last year earned $1.3 billion in wages. Georgia and Mississippi have similar, though smaller, auto parts sectors. This factory growth, after the long, painful demise of the region’s textile industry, would seem to be just the kind of manufacturing renaissance President Donald Trump and his supporters are looking for.

Except that it also epitomizes the global economy’s race to the bottom. Parts suppliers in the American South compete for low-margin orders against suppliers in Mexico and Asia. They promise delivery schedules they can’t possibly meet and face ruinous penalties if they fall short. Employees work ungodly hours, six or seven days a week, for months on end. Pay is low, turnover is high, training is scant, and safety is an afterthought, usually after someone is badly hurt. Many of the same woes that typify work conditions at contract manufacturers across Asia now bedevil parts plants in the South.

“The supply chain isn’t going just to Bangladesh. It’s going to Alabama and Georgia,” says David Michaels, who ran OSHA for the last seven years of the Obama administration. Safety at the Southern car factories themselves is generally good, he says. The situation is much worse at parts suppliers, where workers earn about 70¢ for every dollar earned by auto parts workers in Michigan, according to the Bureau of Labor Statistics. (Many plants in the North are unionized; only a few are in the South.)

Cordney Crutcher has known both environments. In 2013 he lost his left pinkie while operating a metal press at Matsu Alabama, a parts maker in Huntsville owned by Matcor-Matsu Group Inc. of Brampton, Ont. Crutcher was leaving work for the day when a supervisor summoned him to replace a slower worker on the line, because the plant had fallen 40 parts behind schedule for a shipment to Honda Motor Co. He’d already worked 12 hours, Crutcher says, and wanted to go home, “but he said they really needed me.” He was put on a press that had been acting up all day. It worked fine until he was 10 parts away from finishing, and then a cast-iron hole puncher failed to deploy. Crutcher didn’t realize it. Suddenly the puncher fired and snapped on his finger. “I saw my meat sticking out of the bottom of my glove,” he says.

Now Crutcher, 42, commutes an hour to the General Motors Co. assembly plant in Spring Hill, Tenn., where he’s a member of United Auto Workers. “They teach you the right way,” he says. “They don’t throw you to the wolves.” His pay rose from $12 an hour at Matsu to $18.21 at GM.•


If we don’t kill ourselves first (and we probably will), the Posthuman Industrial Complex will ultimately become a going concern. I can’t say I’m sorry I’ll miss out on it.

Certainly establishing human colonies in space will change life, or perhaps we’ll change life first, as a precursor to settling the final frontier. From Freeman Dyson:

Sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth. I give the name Noah’s Ark culture to this style of space operation. A Noah’s Ark spacecraft is an object about the size and weight of an ostrich egg, containing living seeds with the genetic instructions for growing millions of species of microbes and plants and animals, including males and females of sexual species, adapted to live together and support one another in an alien environment.

There are also computational scientists among the techno-progressivists who are endeavoring, with the financial aid of their deep-pocketed Silicon Valley investors, to radically alter life down here, believing biology itself a design flaw. To such people, there are many questions and technology is the default answer.

In an excellent excerpt in the Guardian, To Be a Machine author Mark O’Connell explores the Transhumanist trend and its “profound metaphysical weirdness,” profiling the figures forging ahead with reverse-engineering the brain, neuroprostheses and emulations, who wish to reduce human beings to data. The opening:

Here’s what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider’s legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You’re in pretty deep with this thing; there’s no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer’s hardware. As the work proceeds, another mechanical appendage – less delicate, less careful – removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe – with sadness, or horror, or detached curiosity – the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.•


Promises during the campaign season about reshoring manufacturing jobs were perplexing and counterproductive. Most of that work has disappeared not to China and Mexico but into the zeros and ones. Artificial Intelligence is poised to further radically transform the labor landscape in the coming decades, whether or not the Frey-Osborne benchmark predicting that 47% of current jobs are at risk turns out to be prophetic.

The honest argument, whether correct or not, against the prevailing idea that AI will disrupt society by replacing us at the office and factory is that these positions will be supplanted by superior ones, as was the case when we transitioned from an agrarian culture to the Industrial Age. Even those who are certain of this outcome often fail to recall what a bumpy progression that was, with legislation, unionization and the establishment of social safety nets required to avoid bloody revolution or collapse. It wasn’t easy; literal blood was spilled, and that’s the glass-half-full option in the Second Machine Age.

Treasury Secretary Steven Mnuchin, who strapped on beer goggles of a 1930s vintage by declaring today that Donald Trump has “perfect genes,” is either wildly dishonest or completely oblivious when he says AI is not a threat to today’s workers.

From Gillian B. White at the Atlantic:

On Friday, during a conversation with Mike Allen of Axios, the newly minted Treasury Secretary Steven Mnuchin said that there was no need to worry about artificial intelligence taking over U.S. jobs anytime soon. “It’s not even on our radar screen,” he told Allen. When pressed for when, exactly, he thought concern might be warranted, Mnuchin offered “50 to 100 more years.” Just about anyone who works on, or studies machine learning would beg to differ.

In December of 2016, about one month before President Trump officially took office, the White House released a report on artificial intelligence and its impact on the economy. It found that advances in machine learning already had the potential to disrupt some sectors of the labor market, and that capabilities such as driverless cars and some household maintenance tasks were likely to cause further disruptions in the near future. Experts asked to weigh in on the report estimated that in the next 10 to 20 years, 47 percent of U.S. jobs could in some way be at risk due to advances in automation.

The Obama administration is certainly not the only group of experts to believe that the impact of machine learning on the labor market has already started. In a conversation earlier this month, Melinda Gates cited rapidly advancing machine learning as part of the reason that the tech industry needed to tackle its gender diversity initiatives immediately. In 2016, a report from McKinsey found that existing technologies could automate about 45 percent of the activities that humans are paid to perform. Even Mnuchin’s former employer, Goldman Sachs, believes that a massive leap forward in terms of machine learning will occur within the next decade.•


Channeling Nicholas Carr’s comments on the recent Mark Zuckerberg “Building Global Community” manifesto, I will say this: The answer to all technologically enabled human problems is not more technology. Sometimes the system itself is the bug, the fatal error.

In a Financial Times piece, Yuval Noah Harari is more hopeful on the Facebook founder’s globalization gambit, not thinking his intentions grandiose but believing them largely praiseworthy if decidedly vague. The historian does caution that social-media companies would need to alter their focus, perhaps sacrifice financially, to actually foster healthy, large-scale societies, a shift that seems fanciful. 

Harari thinks we’d likely be safer and more prosperous as a world community, which isn’t a sure thing, but even if it were, many forces are working against transitioning humans into a “global brand.” If Harari is correct, Facebook’s place in that scheme would likely be minute–or perhaps it would serve as an impediment despite Zuckerberg’s designs.

Regardless of where you stand on these issues, Harari’s writing is, as always, dense with thought-provoking ideas and enlivened by examples plucked from centuries past. One example, about the downside of residing in cyberspace rather than in actual space: “Humans lived for millions of years without religions and without nations — they can probably live happily without them in the 21st century, too. Yet they cannot live happily if they are disconnected from their bodies. If you don’t feel at home in your body, you will never feel at home in the world.”

The opening:

Mark Zuckerberg last month published an audacious manifesto on the need to build a global community, and on Facebook’s role in that project. His 5,700-word letter — on his Facebook page — was intended not just to allay concerns over social media’s role in spreading “fake news”. It also indicated that Facebook is no longer merely a business, or even a platform. It is on its way to becoming a worldwide ideological movement.

Of course words are cheaper than actions. To implement his manifesto, Zuckerberg might have to jump headlong into a political minefield, and even change his company’s entire business model. You can hardly lead a global community when you make your money from capturing people’s attention and selling it to advertisers. Despite this, his willingness to even formulate a political vision deserves praise.

Most corporations are faithful to the neoliberal dogma that says corporations should focus on making money, governments should do as little as possible, and humankind should trust market forces to take the really important decisions on our behalf. Tech giants such as Facebook have extra reason to distance themselves from any paternalistic political agenda and to present themselves as a transparent medium. With their immense power and hoard of personal data, they have been extremely wary of saying anything that might cause them to look even more like Big Brother.

There are certainly good reasons to fear Big Brother. In the 21st century, Big Data algorithms could be used to manipulate people in unprecedented ways. Take future election races, for example: in the 2020 race, Facebook could theoretically determine not only who are the 32,578 swing voters in Pennsylvania, but also what you need to tell each of them in order to swing them in your favour. But there is also much to fear from abdicating all responsibility to market forces. The market has proven itself woefully inadequate in confronting climate change and global inequality, and is even less likely to self-regulate the explosive powers of bioengineering and artificial intelligence. If Facebook intends to make a real ideological commitment, those who fear its power should not push it back into the neoliberal cocoon with cries of “Big Brother!”. Instead, we should urge other corporations, institutions and governments to contest its vision by making their own ideological commitments.•


During Space Race 1.0, it was the Soviets who first successfully launched a satellite and landed a craft on the moon (the astronaut-less Luna 9). Our communist adversaries seemed destined to be the first to put humans on the moon, but that’s not how it turned out.

In retrospect, it seems vital that the U.S., then and perhaps still a democracy, won the contest to take the first steps on solid ground in a sphere other than our own mothership. It provided a boost to us psychologically and technologically, maintaining the momentum we’d won in World War II, but the following decade was the beginning of a long decline for middle-class Americans, which was of course unrelated to space pioneering but likewise was not prevented by it.

Did it really matter politically that we got there first? Hard to say.

· · ·

The human genome might actually be the final frontier, a voyage not out there but in here. The question is: does it matter for humanity if the U.S. or China or some other state arrives, in one way or another, first? The invention of CRISPR-Cas9 makes this point more pressing than ever, as an autocratic nation unconcerned about public backlash is likely to go boldly into the future. Unlike space exploration, which is still remarkably expensive, genetic modification to not only cure disease but also to enhance healthy embryos and bodies is likely to become markedly more affordable in a relatively short span of time. That will allow for easy access to exploring–and, potentially, exploiting–which might mean the victor in this nouveau race is important. My best guess, however, is that taking the initial giant leap won’t ultimately be as meaningful as walking on the right path thereafter.

From G. Owen Schaefer’s smart Conversation piece “The Future Of Genetic Enhancement Is Not in the West”:

Aside from a preoccupation with being the best in everything, is there reason for Westerners to be concerned by the likelihood that genetic enhancement is apt to emerge out of China?

If the critics are correct that human enhancement is unethical, dangerous or both, then yes, emergence in China would be worrying. From this critical perspective, the Chinese people would be subject to an unethical and dangerous intervention – a cause for international concern. Given China’s human rights record in other areas, it is questionable whether international pressure would have much effect. In turn, enhancement of its population may make China more competitive on the world stage. An unenviable dilemma for opponents of enhancement could emerge – fail to enhance and fall behind, or enhance and suffer the moral and physical consequences.

Conversely, if one believes that human enhancement is actually desirable, this trend should be welcomed. As Western governments hem and haw, delaying development of potentially great advances for humanity, China leads the way forward. Their increased competitiveness, in turn, would pressure Western countries to relax restrictions and thereby allow humanity as a whole to progress – becoming healthier, more productive and generally capable.•


Some prominent American captains of industry of the 1930s openly admired Italy’s Fascism, even Hitler’s Nazism, certain that the crushing grip those authoritarian regimes maintained on workers would defeat American liberalism. This popular idea was useful to Charles Lindbergh and others in selling the original “America First” mentality. Of course, those same totalitarian impulses helped push both nations to disaster unparalleled in modern times.

In a Cato Institute essay that wonders whether free societies will be ascendant in the coming decades, Tyler Cowen argues that China’s ballooning share of global GDP has served as significant soft power, persuading other players on the world stage that its system is superior. I’m not convinced. While it stands to reason that any supersized idea in the market will hold some sway, it doesn’t seem like insurgent forces in the U.S. and the U.K.–and certainly not their rank-and-file supporters–aspire to the Chinese model. The factors provoking the political tumult seem to be economic concerns, underlying bigotries exploited by opportunists and the aftereffects of 9/11, the Iraq War, the 2008 financial collapse and the very uneven outcomes of the Arab Spring.

Of course, there’s no exact science to decide where the blame lies.

An excerpt:

The percentage of global GDP which is held in relatively non-free countries, such as China, has been rising relative to the share of global GDP held in the freer countries. I suspect we are underrating the noxious effects of that development.

Just think back to the 1930s, and some other decades, and consider how many Westerners and Western intellectuals were infatuated with communism and also Stalinism, even at times with fascism, at least before WWII. I would say that if a big idea is around, and supported by some major governments, some number of people will be attracted to that idea, even if we don’t understand the mechanisms here very well. Nonetheless that seems to be an unfortunate sociological truth. Today that big idea isn’t so much communism as it is various forms of authoritarianism. Authoritarians have more presence on the global stage today than has been the case for a while. Furthermore, a lot of the authoritarian states are still in their “rising” forms, rather than their decadent forms, as was the case for Soviet communism in say the 1980s. For instance, while predictions about the future of China are difficult to make, the Chinese Communist Party hardly seems to be on the verge of collapse, and thus its authoritarianism may not be discredited by current events anytime soon. On the global stage, Putin’s Russia has won some recent successes as of late, including in Crimea and also by interfering with democratic elections in the West, apparently with impunity.

To put it simply, global authoritarianism is probably poisoning our political climate more than many people realize.•


Prone as we are to expecting what has happened before to come around again, the shock of the new often causes us to frame outliers with narratives, to assign order to what disturbs us. 

While Brexit and Trump’s election would make for great plot twists in novels sold at airports, they’re so deeply upsetting to many among us, and such a threat to global order, that some have offered these events as evidence that we exist inside a computer simulation written by future humans testing our mettle, a theory Elon Musk has spread widely in recent years, fueled by his Bostrom bender. Even the recent Oscar snafu was peddled as proof of the same.

None of these occurrences proves anything, of course. Statistically, the unusual and unpleasant is bound to happen sometimes. A “cancer cluster” is occasionally just a natural and random spike, not the result of locals drinking tainted water. A sad-sack sports team on a winning streak can likewise be arbitrary noise. Not everything is a conspiracy, not everything evidence.

Political Theory professor Michael Frazer’s Conversation article “Do Brexit and Trump Show That We’re Living in a Computer Simulation?” neatly outlines philosopher Nick Bostrom’s reasons for believing we exist inside a sort of video game controlled by others:

“Either humanity goes extinct before developing the technology to make universe simulations possible. Or advanced civilisations freely choose not to run such simulations. Or we are probably living in a simulation.”

Of course, all of those options rely on our having incredibly distant “descendants,” something those in the simulated-universe camp seem to accept blithely and without proof. Today’s academics may create counterfactuals about historical epochs, but they possess good evidence that we have ancestors. The notion of descendants in a far-flung future building a narrative from us seems more like a narrative of our own.

Some have argued that superior humans of tomorrow wouldn’t be so unethical as to create a universe of pain and calamity, as if intelligence and morality are always linked. (Just consider a “genius” of today like Peter Thiel as a reference point on that one.) Frazer makes a compelling case, however, that if a future world exists, it’s probably not populated by code-friendly tormentors. As he asserts, great immorality mixing with unimaginable technology would likely be too toxic a combination for these people of tomorrow to have survived.
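For those curious about the arithmetic behind the trilemma, Bostrom’s 2003 paper compresses it into a single fraction. In simplified form (folding his separate interest and simulation-count terms into one, so this is a sketch of the argument rather than his exact notation), the expected share of simulated minds is

$$f_{\text{sim}} = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}$$

where $f_p$ is the fraction of human-level civilizations that survive to a posthuman stage and $\bar{N}$ is the average number of ancestor-simulations such a civilization runs. If $f_p \bar{N}$ is large, $f_{\text{sim}}$ approaches 1 and nearly every mind in existence is simulated; the trilemma is just the observation that escaping the third option requires driving $f_p$ or $\bar{N}$ toward zero, which are the first two.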

The opening:

Recent political events have turned the world upside down. The UK voting for Brexit and the US electing Donald Trump as president were unthinkable 18 months ago. In fact, they’re so extraordinary that some have questioned whether they might not be an indication that we’re actually living in some kind of computer simulation or alien experiment.

These unexpected events could be experiments to see how our political systems cope under stress. Or they could be cruel jokes made at our expense by our alien zookeepers. Or maybe they’re just glitches in the system that were never meant to happen. Perhaps the recent mix-up at the Oscars or the unlikely victories of Leicester City in the English Premier League or the New England Patriots in the Superbowl are similar glitches.

The problem with using these difficult political events as evidence that our world is a simulation is how unethical such a scenario would be. If there really were a robot or alien power that was intelligent enough to control all our lives in this way, there’s a good chance they’d have developed the moral sense not to do so.•


In a Guardian article, Andrew Anthony writes that Yuval Noah Harari is a “historian of the distant past and the near future,” an apt description. The Israeli may be the least likely public figure to come to prominence this decade, a deeply cerebral academic in an age when intellectualism and higher education are often perplexingly scorned.

Of course, in their own moments Carl Sagan and Stephen Jay Gould were also unlikely celebrities. The common bond they all shared: an ability to relate vivid narratives, an especially appropriate skill in Harari’s case, since he believes a penchant for storytelling and processing abstract thoughts is what made our species predominant among humans and all other creatures.

Anthony collected questions from notable public figures and readers to pose to Harari. A few of the exchanges follow.


Helen Czerski, physicist

We are living through a fantastically rapid globalisation. Will there be one global culture in the future or will we maintain some sort of deliberate artificial tribal groupings?

Yuval Noah Harari:

I’m not sure if it will be deliberate but I do think we’ll probably have just one system, and in this sense we’ll have just one civilisation. In a way this is already the case. All over the world the political system of the state is roughly identical. All over the world capitalism is the dominant economic system, and all over the world the scientific method or worldview is the basic worldview through which people understand nature, disease, biology, physics and so forth. There are no longer any fundamental civilisational differences.

· · ·

Lucy Prebble, playwright

What is the biggest misconception humanity has about itself?

Yuval Noah Harari:

Maybe it is that by gaining more power over the world, over the environment, we will be able to make ourselves happier and more satisfied with life. Looking again from a perspective of thousands of years, we have gained enormous power over the world and it doesn’t seem to make people significantly more satisfied than in the stone age.

· · ·

TheWatchingPlace, posted online:

Is there a real possibility that environmental degradation will halt technological progress?

Yuval Noah Harari:

I think it will be just the opposite – that, as the ecological crisis intensifies, the pressure for technological development will increase, not decrease. I think that the ecological crisis in the 21st century will be analogous to the two world wars in the 20th century in serving to accelerate technological progress.

As long as things are OK, people would be very careful in developing or experimenting in genetic engineering on humans or giving artificial intelligence control of weapon systems. But if you have a serious crisis, caused for example by ecological degradation, then people will be tempted to try all kinds of high-risk, high-gain technologies in the hope of solving the problem, and you’ll have something like the Manhattan Project in the second world war.

· · ·

Andrew Anthony:

You live in a part of the world that has been shaped by religious fictions. Which do you think will happen first – that Homo sapiens leave behind religious fiction or the Israel-Palestine conflict will be resolved?

Yuval Noah Harari:

As things look at present, it seems that Homo sapiens will disappear before the Israeli political conflict will be resolved. I think that Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different. The timescale for that kind of change is maybe a century. And it’s quite likely that the Palestinian-Israeli conflict will not be resolved by that time. But it will definitely be influenced by it.•


Nature is a necessary evil, and humans are a mixed blessing. That’s my credo. Hopeful, huh?

Five years ago, when this blog was something other than what it is today (though I don’t really know what it is now, either), I used to run an occasional post called “5 Things About Us Future People Won’t Believe.” In these short pieces, carnivorism, internal gestation, factory work, invasive surgery and prisons were my suggestions for elements of today’s society that would brand us as “backwards” by tomorrow’s standards. I didn’t mention anything obvious like warfare because the “enlightened” of the future will still participate in such tribalism, even if the nature of the battle changes markedly.

In a similar vein, Matt Chessen has published “The Future Called: We’re Disgusting And Barbaric,” a Backchannel piece that hits on some of the same predictions I made but also covers some very interesting topics I didn’t touch at all. One item:

Tolerating homes and bodies infested with critters

Right now, there are hundreds of millions of insects living on your body and in your home. Tiny dust mites inhabit your mattress, your pillow, your carpeting, and your body, regardless of how clean everything is. Microscopic demodex mites live in the follicles of your eyelashes and prowl your face at night. And this doesn’t even consider the trillions of bacteria and parasites that live inside us. Our bodies are like planets, full of life that is not us.

Future folk will be thoroughly disgusted. They will have nanotechnology antibodies — tiny machines that patrol our homes and skin, hoovering up dust mite food (our skin flakes) and exterminating the little suckers. They can’t completely eliminate all the insects and bacteria — human beings have developed a symbiosis with them; we need bacteria to do things like digest food—but the nanobots will police this flora, keeping it within healthy bounds and eliminating any micro-infestations or infections that grow out of control.

And forget about infestations by critters like cockroaches. Nanobots will exterminate larger household pests en masse. The real terminators of the future won’t wreak havoc on humanity: They’ll massacre our unwanted insect houseguests.•


Elon Musk has made the unilateral decision that Mars will be ruled by direct democracy, and considering how dismal his political record has been over the last five months, with his bewildering bromance with the orange supremacist, it might be great if he blasted off from Earth sooner rather than later.

Another billionaire of dubious governmental wisdom also believed in direct democracy. That was computer-processing magnate Ross Perot who, in 1969, had a McLuhan-ish dream: an electronic town hall in which interactive television and computer punch cards would allow the masses, rather than elected officials, to decide key American policies. In 1992, he held fast to this goal–one that was perhaps more democratic than any society could survive–when he bankrolled his own populist third-party Presidential campaign. 

The opening of “Perot’s Vision: Consensus By Computer,” a New York Times article from that year by the late Michael Kelly:

WASHINGTON, June 5— Twenty-three years ago, Ross Perot had a simple idea.

The nation was splintered by the great and painful issues of the day. There had been years of disorder and disunity, and lately, terrible riots in Los Angeles and other cities. People talked of an America in crisis. The Government seemed to many to be ineffectual and out of touch.

What this country needed, Mr. Perot thought, was a good, long talk with itself.

The information age was dawning, and Mr. Perot, then building what would become one of the world’s largest computer-processing companies, saw in its glow the answer to everything.

One Hour, One Issue

Every week, Mr. Perot proposed, the television networks would broadcast an hourlong program in which one issue would be discussed. Viewers would record their opinions by marking computer cards, which they would mail to regional tabulating centers. Consensus would be reached, and the leaders would know what the people wanted.

Mr. Perot gave his idea a name that draped the old dream of pure democracy with the glossy promise of technology: “the electronic town hall.”

Today, Mr. Perot’s idea, essentially unchanged from 1969, is at the core of his ‘We the People’ drive for the Presidency, and of his theory for governing.

It forms the basis of Mr. Perot’s pitch, in which he presents himself, not as a politician running for President, but as a patriot willing to be drafted ‘as a servant of the people’ to take on the ‘dirty, thankless’ job of rescuing America from “the Establishment,” and running it.

In set speeches and interviews, the Texas billionaire describes the electronic town hall as the principal tool of governance in a Perot Presidency, and he makes grand claims: “If we ever put the people back in charge of this country and make sure they understand the issues, you’ll see the White House and Congress, like a ballet, pirouetting around the stage getting it done in unison.”

Although Mr. Perot has repeatedly said he would not try to use the electronic town hall as a direct decision-making body, he has on other occasions suggested placing a startling degree of power in the hands of the television audience.

He has proposed at least twice — in an interview with David Frost broadcast on April 24 and in a March 18 speech at the National Press Club — passing a constitutional amendment that would strip Congress of its authority to levy taxes, and place that power directly in the hands of the people, in a debate and referendum orchestrated through an electronic town hall.•

In addition to the rampant myopia that would likely blight such a system, most Americans, with jobs and families and TV shows to binge-watch, don’t take the time to fully appreciate the nuances of complex policy. The stunning truth is that even in a representative democracy in this information-rich age, we have enough uninformed voters lacking critical-thinking skills to install an obvious con artist in the Oval Office to pick their pockets.

In a Financial Times column, Tim Harford argues in favor of the professional if imperfect class of technocrats, who get the job done, more or less. An excerpt:

For all its merits, democracy has always had a weakness: on any detailed piece of policy, the typical voter — I include myself here — does not understand what is really at stake and does not care to find out. This is not a slight on voters. It is a recognition of our common sense. Why should we devote hours to studying every policy question that arises? We know the vote of any particular citizen is never decisive. It would be a deluded voter indeed who stayed up all night revising for an election, believing that her vote would be the one to make all the difference.

So voters are not paying close attention to the details. That might seem a fatal flaw in democracy but democracy has coped. The workaround for voter ignorance is to delegate the details to expert technocrats. Technocracy is unfashionable these days; that is a shame.

One advantage of a technocracy is that it constrains politicians who are tempted by narrow or fleeting advantages. Multilateral bodies such as the World Trade Organization and the European Commission have been able to head off popular yet self-harming behaviour, such as handing state protection to whichever business has the best lobbyists.

Meanwhile independent central banks have been the grown-ups of economic policymaking. Once the immediate aftermath of the financial crisis had passed, elected politicians sat on their hands. Technocratic central bankers were — to borrow a phrase from Mohamed El-Erian, economic adviser — “the only game in town” in sustaining a recovery.

A second advantage is that technocrats can offer informed, impartial analysis. Consider the Congressional Budget Office in the US, the Office for Budget Responsibility in the UK, and Nice, the National Institute for Health and Care Excellence.

Technocrats make mistakes, it’s true — many mistakes. Brain surgeons also make mistakes. That does not mean I’d be better off handing the scalpel to Boris Johnson.•


An unqualified sociopath was elected President of the United States with the aid of the FBI, fake news, Russian spies, white supremacists and an accused rapist who’s holed up inside the Ecuadorian embassy in London to avoid arrest. Writing that sentence a million times can’t make it any less chilling.

WikiLeaks’ modus operandi over the last couple of years probably wouldn’t be markedly different if it were in the hands of Steve Bannon rather than Julian Assange, so it’s not surprising the organization leaked a trove of (apparently overhyped) documents about CIA surveillance just as Trump was being lambasted from both sides of the aisle for baselessly accusing his predecessor of “wiretapping.” The timing is familiar if you recall that WikiLeaks began releasing Clinton campaign emails directly after the surfacing of a video that recorded Trump’s boasts of sexual assault. With all this recent history, is it any surprise Assange mockingly described himself as a “deplorable” when chiding Twitter for refusing to verify his account?

The decentralization of media, with powerful tools in potentially every hand, has changed the game, no doubt. We’re now in a permanent Spy vs. Spy cartoon, though one that isn’t funny, with feds and hackers forever at loggerheads. Which side can do the most damage? Voters have some recourse in regard to government snooping but not so with private-sector enterprises. In the rush to privatize and outsource long-established areas of critical services, from prisons to the military to intelligence work, we’ve also dispersed the dangers.

From Sue Halpern’s New York Review of Books piece “The Assange Distraction”:

In his press conference, Assange observed that no cyber weapons are safe from hacking because they live on the Internet, and once deployed are themselves at risk of being stolen. When that happens, he said, “there’s a very easy cover for any gray market operator, contractor, rogue intelligence agent to take that material and start a company with it. Start a consulting company, a hacker for hire company.” Indeed, the conversation we almost never have when we’re talking about cyber-security and hacking is the one where we acknowledge just how privatized intelligence gathering has become, and what the consequences of this have been. According to the reporters Dana Priest, Marjorie Censer and Robert O’Harrow, Jr., at least 70 percent of the intelligence community’s “secret” budget now goes to private contractors. And, they write, “Never before have so many US intelligence workers been hired so quickly, or been given access to secret government information through networked computers. …But in the rush to fill jobs, the government has relied on faulty procedures to vet intelligence workers, documents and interviews show.” Much of this expansion occurred in the aftermath of the September 11 attacks, when the American government sought to dramatically expand its intelligence-gathering apparatus.

Edward Snowden was a government contractor; he had a high security clearance while working for both Dell and for Booz, Allen, Hamilton. Vault 7’s source, from what one can discern from Assange’s remarks, was most likely a contractor, too. The real connection between Snowden’s NSA revelations and an anonymous leaker handing off CIA malware to WikiLeaks, however, is this: both remind us, in different ways, that the expansion of the surveillance state has made us fundamentally less secure, not more.

Julian Assange, if he is to be believed, now possesses the entire cyber-weaponry of the CIA. He claims that they are safe with him while explaining that nothing is safe on the Internet. He says that the malware he’s published so far is only part of the CIA arsenal, and that he’ll reveal more at a later date. If that is not a veiled threat, then this is: Assange has not destroyed the source codes that came to him with Vault 7, the algorithms that run these programs, and he hasn’t categorically ruled out releasing them into the wild, where they would be available to any cyber-criminal, state actor, or random hacker. This means that Julian Assange is not just a fugitive, he is a fugitive who is armed and dangerous.•


Trump poses many existential threats, but let’s focus on two in particular that are linked: His autocratic impulses are a threat to liberal governance and America’s ethos of an immigrant nation, and his cultivation of a culture of complaint is a bankrupt brand of populism, a nauseating nostalgia for yesterday that places us at risk today and tomorrow.

The upshot is a federal government contemptuous of the Constitution, one that’s willfully trying to block the steady flow of genius into the country and one that’s more enthusiastic about steel and coal than semiconductors. The Trump promise to America is that we can live like the 1950s and win the 21st century, that we don’t have to compete with the whole world because we can build a wall to keep out the future. He’s a new manner of aspiring autocrat, concerned not with ideology but with its destruction. In her Aeon essay about contemporary strongmen who are divorced from governing principles beyond promising to make difficult challenges vanish, Holly Case concluded this way:

The new authoritarian does not pretend to make you better, only to make you feel better about not wanting to change. In this respect, he has tapped a gusher in the Zeitgeist that reaches well beyond the domain of state socialism, an attitude that the writer Marilynne Robinson disparages as ‘nonfailure’, and that the writer Walter Mosley elevates to a virtue: ‘We need to raise our imperfections to a political platform that says: “My flaws need attention too.” This is what I call the “untopia”.’ Welcome to the not-so-brave new world.

In 2017, China is a notable exception to this definition, an autocracy aiming to win the race in supercomputers, semiconductors and solar, which is particularly perilous when paired with America’s retreat. We picked an awful time to stop looking forward, and the ramifications will be felt long after Trump is gone.

From Michael Schuman in Bloomberg View:

China is marshaling massive resources to march into high-tech industries, from robotics to medical devices. In the case of semiconductors alone, the state has amassed $150 billion to build a homegrown industry. In a report in March, the European Union Chamber of Commerce in China pressed the point that the Chinese government is employing a wide range of tools to pursue these ambitions, from lavishing subsidies on favored sectors to squeezing technology out of foreign firms.

The only way for the U.S. to compete with those efforts is to “run faster.” Yet Trump’s ideas to boost competitiveness mainly amount to cutting taxes and regulation. Although reduced taxes might leave companies with more money to spend on research and development, that’s not enough. The U.S. needs to do much more to help businesses achieve bigger and better breakthroughs.

Trump is doing the opposite. One reason U.S. companies are so innovative is that they attract talented workers from everywhere else. But Trump’s recent suspension of fast-track H-1B visas could curtail this infusion of scientists and researchers. If his intention is to ensure jobs go to Americans first, he need not bother. The unemployment rate for Americans with a bachelor’s degree or higher — the skilled workers that H-1B holders would compete with — is a mere 2.5 percent. 

This policy isn’t just a threat to Silicon Valley, but across industries. Michael McGarry, the chief executive officer of PPG Industries Inc., worries about the effect visa restrictions would have on his paint-making business. “We create a lot of innovation because of the diversity that we have,” he recently told CNBC. “We think people with PhDs that are educated here should stay here and work for us and not work for the competition.”

China will likely try to capitalize on this mistake. Robin Li, CEO of the internet giant Baidu Inc., recently advocated that China ease its visa requirements to attract talented workers to help develop new technologies for Chinese industry, just the opposite of Trump’s approach.

Trump’s budget proposals are similarly a setback. He wants to boost defense spending by slashing funding for just about everything else, notably education. By one estimate, some $20 billion would have to get cut from the departments of education, labor, and health and human services to accommodate his plan. If Trump wants to contend with Chinese power, he’d be better off reversing those priorities — to create more graduates and fewer guns. He could offer proposals to make higher education more affordable for the poor, for instance, or to bolster vocational training. So far, there’s little evidence he’s making such spending a priority.

China, by contrast, is expanding access to education on a huge scale.•


It would be great if all of us could grow smarter, but smart isn’t everything. Being wise and ethical are important, too.

PayPal co-founders Peter Thiel and Elon Musk have had access to elite educations, started successful businesses and amassed vast fortunes, but in this time of Trump they don’t seem particularly enlightened. Thiel ardently supported the bigoted, unqualified sociopath’s rise to the White House, while Musk’s situational ethics in dealing with the new abnormal are particularly amoral.

At SXSW, Ray Kurzweil said he believes technology has already made us much smarter and will improve us exponentially in that manner by 2029, when the Singularity arrives. While his views of the future are too aggressive, Kurzweil’s view of today seems oddly rose-colored. If we’re so much brighter, why do we have an unintelligent reality TV host in the White House? Why is there ever-deepening wealth inequality? Why are we ravaged by an opioid epidemic?

If we’re smarter now–a big if–and it’s divorced from basic morality and decency, are we any better off?

From Dyani Sabin’s Inverse piece about Kurzweil’s appearance in Austin:

The future isn’t going to look like a science fiction story with a few super intelligent A.I.s that attack us.

“That’s not realistic. We don’t have one or two A.I.s in the world. Today we have billions,” he says. And unlike Musk who imagines the rise of the A.I. as something that threatens human existence, Kurzweil says that doesn’t hold with how we interact with A.I.s today.

“What’s actually happening is they are powering all of us. They’re making us smarter. They may not yet be inside our bodies but by the 2030s we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”

This isn’t just a pipe dream to Kurzweil, who’s had reasonable luck predicting where the future is going to go. “There are people with computers in their brains today — Parkinson’s patients,” he points out. “That’s how these things start.” Following the path of steps from the technology we have now, to what will happen twenty years from now, Kurzweil says, “in the 2030’s there will be something you can take that will go inside your brain and help your memory.” And that’s just the beginning.

Uploading our brains into the cloud will allow humanity to waste less time on lower-level types of mental tasks, Kurzweil says. He’s very interested in the idea of uploading the neocortex because it’s responsible for things like art, music, and humor. By allowing our brains to connect more on that level, by melding with artificial intelligence, we will expand our ability to do these things and be better people. “Ultimately it will affect everything,” he says. “We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”•


  • Don’t blame Tim Berners-Lee, not for cat memes, spam or even the way his gift connected and emboldened the absolute worst among us. A hammer is a weapon or a tool depending on how you swing it, and like almost any invention, the World Wide Web is as good as people utilizing it. A lot of us aren’t very good right now.
  • Marshall McLuhan feared the Global Village even as he was heralding its arrival 50 years ago. He believed all this closeness, these worlds colliding, could explode. He encouraged us to study the new arrangement–“why not devote your powers to discerning patterns?”–lest we be overrun by them. Plenty among us know the issues at hand, but they’re not easy to address.
  • There’s no doubt the Internet does a lot of good and some of its worst excesses can be curbed, but the trouble with this tool isn’t that we’re not yet familiar enough with decentralized media and soon enough we’ll have a handle on the situation. The problems seem inherent to the medium, which is a large-scale experiment in anarchy, and just as surely as we correct some of the bugs, others will take flight.
  • During the Arab Spring there was much debate over whether the Internet was actually useful in toppling states. I think it is, regardless of whether the nation or the usurpers happen to be good or bad.

In a Guardian essay, Berners-Lee offers biting criticism of his pet project, suggesting fixes. I wonder though, as with Facebook promising to address its shortcomings, if the system isn’t built for mayhem. That may be especially true since most citizens don’t seem very bothered by handing over their personal information in exchange for sating some psychological needs, offering their own Manhattan for some shiny beads.

An excerpt:

1) We’ve lost control of our personal data

The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share – especially with third parties – the T&Cs are all or nothing.

This widespread data collection by companies also has other impacts. Through collaboration with – or coercion of – companies, governments are also increasingly watching our every move online and passing extreme laws that trample on our rights to privacy. In repressive regimes, it’s easy to see the harm that can be caused – bloggers can be arrested or killed, and political opponents can be monitored. But even in countries where we believe governments have citizens’ best interests at heart, watching everyone all the time is simply going too far. It creates a chilling effect on free speech and stops the web from being used as a space to explore important topics, such as sensitive health issues, sexuality or religion.•


If Moby-Dick had been the only Herman Melville book I’d ever read, I would have assumed he was a mediocre writer with great ideas. Having gone through all of his shorter works, however, I know he could be a precise and cogent talent. He seemed to have reached for everything with his most famous novel–aiming to fashion a sort of Shakespearean Old Testament story of good and evil–and buckled under the weight of his ambitions.

The far better Moby-Dick is Cormac McCarthy’s 1992 Blood Meridian: Or the Evening Redness in the West, a horse opera of Biblical proportions, a medicine show peddling poison, which takes an unsparing look at our black hearts and leaves the reader with a purple bruise. Twenty-five years on, it remains as profound and disturbing as any American novel.

The British writer David Vann reveals he’s similarly admiring of this McCarthy work in a “Twenty Questions” interview in the Times Literary Supplement. He’s also despairing of what he believes is the bleak future of literature. I believe that as long as humans are largely human, we’ll always be enamored of narratives. My fear is mainly that sometimes we choose the wrong ones.

An excerpt:

Question:

Is there any book, written by someone else, that you wish you’d written?

David Vann:

There are hundreds, but the foremost from this time is Cormac McCarthy‘s Blood Meridian, which I think is the greatest novel ever written in English.  He’s not a dramatist, and I write Greek tragedy, so I never could have imagined skipping the dramatic plane and going straight to vision.  I do write in the same American landscape tradition, extending literal landscapes into figurative ones, but I’ll never do it as powerfully as he does.

Question:

What will your field look like twenty-five years from now?

David Vann:

Less money for sure. We’ve already lost so much to piracy and shrinking readerships and economic downturns. Publishers will be less brave, editors will edit less, more books will be published online for nothing, we’ll continue to lose experts and have to put up with even more reviews from unqualified idiots, and as entire generations learn to read without subtext about what someone had for lunch, we can expect literature to look more like an account of what someone had for lunch. There is absolutely no way in which the technology or literary theory of the past decades will enrich literature.  We should be honest about what is crap. …

Question:

What author or book do you think is most overrated? And why?

David Vann:

I should never answer this kind of question, because I’m only shooting myself in the foot, but when Jonathan Franzen appeared on the cover of Time as the Great American Novelist, who could not have thought of McCarthy, Proulx, Robinson, Morrison, Oates, Roth, DeLillo and at least a hundred others far better than Franzen?  And to call The Corrections the best book in ten years?  Really?•


Thomas E. Ricks of Foreign Policy asked one of the most horrifying questions about America you can pose: Will we have another civil war in the next ten to fifteen years? Keith Mines of the United States Institute of Peace and a career foreign service officer provided a sobering reply, estimating the chance for large-scale internecine violence at 60%. 

Things can change unexpectedly, sometimes for the better, but it sure does feel like we’re headed down a road to ruin, with the anti-democratic, incompetent Trump and company provoking us to a tipping point. The Simon Cowell-ish strongman may seem a fluke because of his sizable loss in the popular vote, but in many ways his political ascent is the culmination of the past four decades of dubious U.S. cultural, civic, economic, technological and political decisions. We’re not here by accident. 

· · ·

One of the criteria on which Mines bases his diagnosis: “Press and information flow is more and more deliberately divisive, and it’s increasingly easy to put out bad info and incitement.” That triggered in me a memory of a 2012 internal Facebook study, which, unsurprisingly, found that Facebook was an enemy of the echo chamber rather than one of its chief enablers. I’m not saying the scholars involved were purposely deceitful, but I don’t think even Mark Zuckerberg would stand by those results five years later. We’re worlds apart in America, and social media, along with the widespread decentralization of all media, has hastened and heightened those divisions.

· · ·

An excerpt from Farhad Manjoo’s 2012 Slate piece “The End of the Echo Chamber,” about the supposedly salubrious effects of social networks, is followed by Mines’ opening.


From Manjoo:

Today, Facebook is publishing a study that disproves some hoary conventional wisdom about the Web. According to this new research, the online echo chamber doesn’t exist.
 
This is of particular interest to me. In 2008, I wrote True Enough, a book that argued that digital technology is splitting society into discrete, ideologically like-minded tribes that read, watch, or listen only to news that confirms their own beliefs. I’m not the only one who’s worried about this. Eli Pariser, the former executive director of MoveOn.org, argued in his recent book The Filter Bubble that Web personalization algorithms like Facebook’s News Feed force us to consume a dangerously narrow range of news. The echo chamber was also central to Cass Sunstein’s thesis, in his book Republic.com, that the Web may be incompatible with democracy itself. If we’re all just echoing our friends’ ideas about the world, is society doomed to become ever more polarized and solipsistic?

It turns out we’re not doomed. The new Facebook study is one of the largest and most rigorous investigations into how people receive and react to news. It was led by Eytan Bakshy, who began the work in 2010 when he was finishing his Ph.D. in information studies at the University of Michigan. He is now a researcher on Facebook’s data team, which conducts academic-type studies into how users behave on the teeming network.

Bakshy’s study involves a simple experiment. Normally, when one of your friends shares a link on Facebook, the site uses an algorithm known as EdgeRank to determine whether or not the link is displayed in your feed. In Bakshy’s experiment, conducted over seven weeks in the late summer of 2010, a small fraction of such shared links were randomly censored—that is, if a friend shared a link that EdgeRank determined you should see, it was sometimes not displayed in your feed. Randomly blocking links allowed Bakshy to create two different populations on Facebook. In one group, someone would see a link posted by a friend and decide to either share or ignore it. People in the second group would not receive the link—but if they’d seen it somewhere else beyond Facebook, these people might decide to share that same link of their own accord.

By comparing the two groups, Bakshy could answer some important questions about how we navigate news online. Are people more likely to share information because their friends pass it along? And if we are more likely to share stories we see others post, what kinds of friends get us to reshare more often—close friends, or people we don’t interact with very often? Finally, the experiment allowed Bakshy to see how “novel information”—that is, information that you wouldn’t have shared if you hadn’t seen it on Facebook—travels through the network. This is important to our understanding of echo chambers. If an algorithm like EdgeRank favors information that you’d have seen anyway, it would make Facebook an echo chamber of your own beliefs. But if EdgeRank pushes novel information through the network, Facebook becomes a beneficial source of news rather than just a reflection of your own small world.

That’s exactly what Bakshy found. His paper is heavy on math and network theory, but here’s a short summary of his results. First, he found that the closer you are with a friend on Facebook—the more times you comment on one another’s posts, the more times you appear in photos together, etc.—the greater your likelihood of sharing that person’s links. At first blush, that sounds like a confirmation of the echo chamber: We’re more likely to echo our closest friends.

But here’s Bakshy’s most crucial finding: Although we’re more likely to share information from our close friends, we still share stuff from our weak ties—and the links from those weak ties are the most novel links on the network. Those links from our weak ties, that is, are most likely to point to information that you would not have shared if you hadn’t seen it on Facebook.•
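
Before getting to Mines’ opening, a quick aside on Bakshy’s design, since the trick is easy to miss: randomly withholding links that EdgeRank would otherwise have delivered creates a natural control group, letting you separate sharing that Facebook caused from sharing that would have happened anyway. Here’s a toy Python sketch of that logic; every number in it is invented for illustration, none taken from the study.

```python
import random

random.seed(42)

# Toy parameters, invented for illustration, not estimates from Bakshy's paper.
P_SHARE_IF_SEEN = {"strong": 0.10, "weak": 0.03}   # share rate when the link appears in your feed
P_SHARE_ANYWAY  = {"strong": 0.06, "weak": 0.002}  # share rate when it was withheld but seen elsewhere
WITHHOLD_RATE = 0.05  # fraction of deliverable links randomly censored, as in the experiment

def simulate(n_links=200_000):
    """Randomly withhold some links, then compare share rates across the two groups."""
    tallies = {tie: {"shown": [0, 0], "withheld": [0, 0]} for tie in ("strong", "weak")}
    for _ in range(n_links):
        tie = random.choice(["strong", "weak"])
        group = "withheld" if random.random() < WITHHOLD_RATE else "shown"
        p = (P_SHARE_IF_SEEN if group == "shown" else P_SHARE_ANYWAY)[tie]
        tallies[tie][group][0] += random.random() < p  # shares
        tallies[tie][group][1] += 1                    # exposures
    for tie in ("strong", "weak"):
        shares, total = tallies[tie]["shown"]
        base_shares, base_total = tallies[tie]["withheld"]
        rate, base = shares / total, base_shares / base_total
        # The lift is sharing that happened only because Facebook delivered the link.
        print(f"{tie} ties: {rate:.3f} share rate if shown, {base:.3f} if withheld, "
              f"Facebook-caused lift {rate - base:.3f}")

simulate()
```

With the numbers rigged the way Bakshy’s findings point, strong ties show the higher raw share rate, but nearly all of the weak ties’ sharing evaporates when the link is withheld; that is what makes weak ties the network’s pipeline for novel information.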


From Mines:

What a great but disturbing question (the fact that you can even ask it). Weird question for me, as for most of my career I have been traveling the world observing other countries in various states of dysfunction and answering this same question. In this case, if the standard is large-scale violence that requires the National Guard to deal with in the timeline you lay out, I would say about 60 percent.

I base that on the following factors:

— Entrenched national polarization of our citizenry with no obvious meeting place. (Not true locally, however, which could be our salvation; but the national issues are pretty fierce and will only get worse).

— Press and information flow is more and more deliberately divisive, and it’s increasingly easy to put out bad info and incitement.

— Violence is “in” as a method to solve disputes and get one’s way. The president modeled violence as a way to advance politically and validated bullying during and after the campaign.  Judging from recent events the left is now fully on board with this, although it has been going on for several years with them as well — consider the university events where professors or speakers are shouted down and harassed, the physically aggressive anti-Israeli events, and the anarchists during globalization events. It is like 1859, everyone is mad about something and everyone has a gun.

— Weak institutions — press and judiciary, that are being further weakened. (Still fairly strong and many of my colleagues believe they will survive, but you can do a lot of damage in four years, and your timeline gives them even more time).

— Total sellout of the Republican leadership, validating and in some cases supporting all of the above.•


Developing visual recognition in machines is helpful in performing visual tasks, of course, but this ability has the potential to advance Artificial Intelligence in much broader and more significant ways, providing AI with a context from which to more accurately “comprehend” the world. (I’m not even sure if the quotation marks in the previous sentence are necessary.)

In an interview conducted by Tom Simonite of Technology Review, Facebook’s director of AI research, Yann LeCun, explains that exposing machines to video will hopefully enable them to learn through observation as small children do. “That’s what would allow them to acquire common sense, in the end,” he says.

An excerpt:

Question:

Babies learn a lot about the world without explicit instruction, though.

Yann LeCun:

One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels. That’s what would allow them to acquire common sense, in the end. These are things that animals and babies learn in the first few months of life—you learn a ridiculously large amount about the world just by observation. There are a lot of ways that machines are currently fooled easily because they have very narrow knowledge of the world.

Question:

What progress is being made on getting software to learn by observation?

Yann LeCun:

We are very interested in the idea that a learning system should be able to predict the future. You show it a few frames of video and it tries to predict what’s going to happen next. If we can train a system to do this we think we’ll have developed techniques at the root of an unsupervised learning system. That is where, in my opinion, a lot of interesting things are likely to happen. The applications for this are not necessarily in vision—it’s a big part of our effort in making progress in AI.•
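
The objective LeCun describes (show the system a few frames, ask it to predict the next one) is simple to write down, whatever architecture Facebook actually uses. A minimal PyTorch sketch follows; the toy model, the frame sizes and the random stand-in “video” are all invented for illustration, and the only faithful part is the training signal, in which the video itself supplies the labels.

```python
import torch
import torch.nn as nn

# Sketch of the objective only: predict the next video frame from the previous K.
# The architecture, sizes and random stand-in "video" are all invented here.
K, H, W = 4, 64, 64  # context frames and frame dimensions, chosen arbitrarily

predictor = nn.Sequential(
    nn.Conv2d(K, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),  # one output channel: the predicted frame
)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

for step in range(100):
    clips = torch.rand(8, K + 1, H, W)        # stand-in for batches of real video clips
    context, target = clips[:, :K], clips[:, K:]
    loss = nn.functional.mse_loss(predictor(context), target)
    opt.zero_grad()
    loss.backward()
    opt.step()  # no human labels anywhere: the next frame itself is the supervision
```

The point of the exercise is in the loss line: no human annotation appears anywhere, which is what LeCun means by machines learning through observation.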


When it comes to technology, promises often sound like threats. 

In a very smart Edge piece, Chris Anderson, the former Wired EIC who’s now CEO of 3DRobotics, holds forth on closed-loop systems, which allow for processes to be monitored, measured and corrected–even self-corrected. As every object becomes “smart,” it can collect information about itself, its users and its surroundings. In many ways, these feedback loops will be a boon, allowing (potentially) for smoother maintenance, a better use of resources and a cleaner environment. But the new arrangement won’t all be good.

The question Anderson posed, which I used as the headline, makes it sound as if we’ll be able to control where such technology snakes, but I don’t think that’s true. It won’t get out of hand in a sci-fi-thriller sense but in very quiet, almost imperceptible ways. There will hardly be a hum.

At any rate, Anderson’s story of how he built a drone company from scratch, first with the help of his children and then with that of a 19-year-old kid from Tijuana with no college background, is amazing and a great lesson in globalized economics.

From Edge:

If we could measure the world, how would we manage it differently? This is a question we’ve been asking ourselves in the digital realm since the birth of the Internet. Our digital lives—clicks, histories, and cookies—can now be measured beautifully. The feedback loop is complete; it’s called closing the loop. As you know, we can only manage what we can measure. We’re now measuring on-screen activity beautifully, but most of the world is not on screens.                                 

As we get better and better at measuring the world—wearables, Internet of Things, cars, satellites, drones, sensors—we are going to be able to close the loop in industry, agriculture, and the environment. We’re going to start to find out what the consequences of our actions are and, presumably, we’ll take smarter actions as a result. This journey with the Internet that we started more than twenty years ago is now extending to the physical world. Every industry is going to have to ask the same questions: What do we want to measure? What do we do with that data? How can we manage things differently once we have that data? This notion of closing the loop everywhere is perhaps the biggest endeavor of our age.                                 

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can’t measure the results—there’s no feedback. Closed-loop systems are when you take an action, you measure the results, and you change your action accordingly. Systems with closed loops have feedback loops; they self-adjust and quickly stabilize in optimal conditions. Systems with open loops overshoot; they miss it entirely. …

I use the phrase closing the loop because that’s the phrase we use in robotics. Other people might use the phrase big data. Before they called it big data, they called it data mining. Remember that? That was nuts. Anyway, we’re going to come up with a new word for it.                                 

It goes both ways: The tendrils of the Internet reach out through sensors, and then these sensors feed back to the Internet. The sensors get smarter because they’re connected to the Internet, and the Internet gets smarter because it’s connected to the sensors. This feedback loop extends beyond the industry that’s feeding back to the meta-industry, which is the Internet and the planet.•
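
Anderson’s open-versus-closed distinction is easy to make concrete. Below is a toy Python sketch of the two regimes, heating a room toward 20°C; the room model, the controller gain and every number in it are invented for illustration. The open-loop version picks a setting and never measures; the closed-loop version feeds the measured error back into its next action, which is the loop-closing Anderson means.

```python
# Toy contrast between open- and closed-loop control: heating a room toward 20°C.
# The room model, gains and numbers are all invented for illustration.

def room_step(temp, heater_power):
    """Crude room physics: the heater adds warmth, heat leaks toward 10°C outside."""
    return temp + 0.1 * heater_power - 0.05 * (temp - 10)

def open_loop(steps=100):
    """Act without measuring: pick a fixed heater setting and hope for the best."""
    temp = 10.0
    for _ in range(steps):
        temp = room_step(temp, heater_power=7.0)  # a guess, never corrected
    return temp

def closed_loop(target=20.0, steps=100):
    """Act, measure, correct: the measured error feeds back into the next action."""
    temp = 10.0
    for _ in range(steps):
        error = target - temp                     # the feedback that closes the loop
        temp = room_step(temp, heater_power=5.0 * error)
    return temp

print(f"open loop settles near {open_loop():.1f}°C; closed loop near {closed_loop():.1f}°C")
```

Run it and the open loop overshoots the target and stays there, while the proportional controller settles just shy of it; a smarter controller would close even that gap, but the sketch is enough to show why feedback stabilizes what guesswork cannot.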


In the latest round of what may be gamesmanship between the American Intelligence Community and Russia, WikiLeaks released information about purported CIA spying techniques, which included, among other tricks of the trade, a way to remotely hack smart televisions so that the watchers would become the watched. Such methods should surprise no one.

What does startle me is how receptive Americans are to being watched, as if in this decentralized media age, we’ve accepted, finally and completely, that all the world actually is a stage. It goes far beyond the way we revel in the modern freak show of Reality TV or allow social networks access to our private lives in return for a cheap platform on which to peddle our personalities. As sensors and microchips proliferate, we’re gradually turning every object into a computer from which we can be monitored and quantified. Big Brother will eventually have several siblings in every room. The shock is that we’re so willing to be members of this non-traditional family.

Chance the gardener was all of us when he said, “I like to watch.” Apparently, we also like to be watched.

The opening of Sapna Maheshwari’s smart New York Times piece on the topic:

While Ellen Milz and her family were watching the Olympics last summer, their TV was watching them.

Ms. Milz, 48, who lives with her husband and three children in Chicago, had agreed to be a panelist for a company called TVision Insights, which monitored her viewing habits — and whether her eyes flicked down to her phone during the commercials, whether she was smiling or frowning — through a device on top of her TV.

“The marketing company said, ‘We’re going to ask you to put this device in your home, connect it to your TV and they’re going to watch you for the Olympics to see how you like it, what sports, your expression, who’s around,’” she said. “And I said, ‘Whatever, I have nothing to hide.’”

Ms. Milz acknowledged that she had initially found the idea odd, but that those qualms had quickly faded.

“It’s out of sight, out of mind,” she said, comparing it to the Nest security cameras in her home. She said she had initially received $60 for participating and an additional $230 after four to six months.

TVision — which has worked with the Weather Channel, NBC and the Disney ABC Television Group — is one of several companies that have entered living rooms in recent years, emerging with new, granular ways for marketers to understand how people are watching television and, in particular, commercials. The appeal of this information has soared as Americans rapidly change their viewing habits, streaming an increasing number of shows weeks or months after they first air, on devices as varied as smartphones, laptops and Roku boxes, not to mention TVs.

Through the installation of a Microsoft Kinect device, normally used for Xbox video games, on top of participants’ TVs, TVision tracks the movement of people’s eyes in relation to the television. The device’s sensors can record minute shifts for all the people in the room.•


Margaret Atwood is in an odd position: As our politics get worse, her stature grows. Right now, sadly (for us), she’s never towered higher.

Appropriate that on International Women’s Day, amid the A Day Without a Woman protests, the novelist behind The Handmaid’s Tale conducted a Reddit Ask Me Anything to coincide with the soon-to-premiere Hulu version of her most famous work. Dystopia, feminism and literature are, of course, among the discussion topics. A few exchanges follow.


Question:

Thank you so much for writing The Handmaid’s Tale. It was the book that got me hooked on dystopian novels. What was your inspiration for the story?

Margaret Atwood:

Ooo, three main things: 1) What some people said they would do re: women if they had the power (they have it now and they are); 2) 17th C Puritan New England, plus history through the ages — nothing in the book that didn’t happen, somewhere and 3) the dystopian spec fics of my youth, such as 1984, Ray Bradbury’s Fahrenheit 451, etc. I wanted to see if I could write one of those, too.


Question:

What would you be doing right now if you were an American? Would you run for office? Would you protest? Would you be planning to resist ICE?

Margaret Atwood:

I would make a very bad politician, so no, I wouldn’t run for office. But I would support those who were running. I would certainly turn out for protests, as I did here in Toronto, wearing a rather strange pink hat. I don’t know what else I would do! We are in a time when reality seems to shift every day…


Question:

What is a book you keep going back to read and why?

Margaret Atwood:

This is going to sound corny but Shakespeare is my return read. He knew so much about human nature (+ and minus) and also was an amazing experimenter with language. But there are many other favourites. Wuthering Heights recently. In moments of crisis I go back to (don’t laugh) Lord of the Rings, b/c despite the EVIL EYE OF MORDOR it comes out all right in the end. Whew.


Question:

How, if at all, has your feminism changed over the last decade or so? Can you see these changes taking place throughout your literature? Lastly, can you offer any advice for feminists of the millennial generation? What mistakes are we making/repeating? What are our priorities in this political climate?

Margaret Atwood:

Hello: I am so shrieking old that my formative years (the 40s and 50s) took place before 2nd wave late-60’s feminist/women’s movement. But since I grew up largely in the backwoods and had strong female relatives and parents who read a lot and never told me I couldn’t do such and such because of being a girl, I avoided the agit-prop of the 50s that said women should be in bungalows with washing machines to make room for men coming back from the war. So I was always just very puzzled by some of the stuff said and done by/around women. I was probably a danger to myself and others! (joke) My interest was in women of all kinds — and they are of all kinds. They are interesting in and of themselves, and they do not always behave well. But then I learned more about things like laws and other parts of the world, and history… try Marilyn French’s From Eve to Dawn, pretty massive. We are now in what is being called the 3rd wave — seeing a lot of pushback against women, and also a lot of women pushing back in their turn. I’d say in general: be informed, be aware. The priorities in the US are roughly trying to prevent the roll-back that is taking place especially in the area of women’s health. Who knew that this would ever have to be defended? Childbirth care, pre-natal care, early childhood care — many people will not even be able to afford any of it. Dead bodies on the floor will result. It is frightful. Then there is the whole issue of sexual violence being used as control — it is such an old motif. For a theory of why now, see Eve’s Seed. It’s an unsettled time. If I were a younger woman I’d be taking a self-defense course. I did once take Judo, in the days of the Boston Strangler, but it was very lady-like then and I don’t think it would have availed. There’s something called Wen-Do. It’s good, I am told.


Question:

The Handmaid’s Tale gets thrown out as your current worst-case scenario right now but I read The Heart Goes Last a few months ago and I was surprised how possible it felt. Was there a specific news story or event that compelled you to write that particular story?

Margaret Atwood:

The Heart Goes Last — yes, came from my interest in what happens when a region’s economy collapses and people are really up against it, and the only “business” in which people can have jobs is a prison. It pushes the envelope (will there really be some Elvis robots?) but again, much of what was only speculation then is increasingly possible.


Question:

How did your experience with the 2017 version differ from the 1990 version of The Handmaid’s Tale?

Margaret Atwood:

Different times (that world is closer now!) and a 90 minute film is a different proposition from a 10 part 1st season series, which can build out and deep dive because it has more time. The advent of high-quality streamed or televised series has opened up a whole new set of possibilities for longer novels. We launched the 1990 film in West and then East Berlin just as the Wall was coming down… and I started writing book when the Wall was still there… Framed it in people’s minds in a different way. Also, then, many people were saying “It can’t happen here.” Now, not so much….•


Andrew Ng’s predictions about Artificial Intelligence carry more weight with me than the projections of many of his peers because he never seems driven by irrational exuberance. In fact, he often urges caution when talk about the imminent arrival of driverless cars and other landscape-changing tools becomes overheated. 

So, when Baidu’s Chief Scientist asserts AI will soon deliver to us a brave new world, one in which, for instance, speech recognition is all but perfected, it’s probably wise to take notice. Computer conversation that’s wholly convincing should give us pause, however. Any technology that becomes seamless should be met as much by concern as enthusiasm.

An excerpt from a smart Wall Street Journal interview Scott Austin conducted with Ng and Neil Jacobstein of Singularity University:

Andrew Ng:

In addition to strengthening our core business, AI is creating a lot of new opportunities. Just as about 100 years ago electrification changed every single major industry, I think we’re in the phase where AI will change pretty much every major industry.

So part of my work at Baidu is to systematically explore new verticals. We have built up an autonomous driving unit. We have a conversational computer, similar to Amazon’s Alexa and Google Home. And we’re systematically pursuing new industries where we think we can build an AI team to create and capture value.

Question:

Let’s talk about speech recognition. I believe someone in your program has said that the hope is to get to the point where it is 99% accurate. Where are you on that?

Andrew Ng:

A couple of years ago, we started betting heavily on speech recognition because we felt that it was on the cusp of being so accurate that you would use it all the time. And the difference between speech recognition that is 95% accurate, which is where we were several years ago, versus 99% accuracy isn’t just an incremental improvement.

It’s the difference between you barely using it, like a couple of years ago, versus you using it all the time and not even thinking about it. At Baidu we have passed the knee of that adoption curve. Over the past year, we’ve seen about 100% year-to-year growth in the daily active use of speech recognition across our assets, and we project that this will continue to grow.

In a few years everyone will be using speech recognition. It will feel natural. You’ll soon forget what it was like before you could talk to computers.•
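
The leap Ng describes sounds incremental until you work the arithmetic: accuracy is the wrong lens, error rate is the right one, and going from 95% to 99% cuts errors fivefold. A few lines of Python make the point, under the simplifying (and admittedly crude) assumption that word errors are independent:

```python
# Accuracy is the wrong lens; error rate is the right one.
# Crude assumption for illustration: word errors are independent.
for accuracy in (0.95, 0.99):
    words_per_error = 1 / (1 - accuracy)   # average words between mistakes
    clean_20 = accuracy ** 20              # chance a 20-word request has no errors
    print(f"{accuracy:.0%} accurate: one error every {words_per_error:.0f} words; "
          f"a 20-word request comes through clean {clean_20:.0%} of the time")
```

One error every 20 words means roughly two of every three dictated requests need repair; one error every 100 means you mostly stop noticing, which is the knee of the adoption curve Ng is talking about.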


Economist Tyler Cowen just did a fun Ask Me Anything at Reddit, discussing driverless cars, the Hyperloop, wealth redistribution, Universal Basic Income, the American Dream, etc.

Cowen also discusses Peter Thiel’s role in the Trump Administration, though his opinion seems too coy. We’re not talking about someone who just so happens to work for a “flawed” Administration but about a serious supporter of a deeply racist campaign to elect a wholly unqualified President and empower a cadre of Breitbart bigots. Trump owns the mess he’s creating, but Thiel does also. The most hopeful thing you can say about the Silicon Valley billionaire, who was also sure there were WMDs in Iraq, is that outside of his realm he has no idea what he’s doing. The least hopeful is that he’s just not a good person.

A few exchanges follow.


Question:

What is an issue or concept in economics that you wish were easier to explain so that it would be given more attention by the public?

Tyler Cowen:

The idea that a sound polity has to be based on ideas other than just redistribution of wealth.


Question:

What do you think about Peter Thiel’s relationship with President Trump?

Tyler Cowen:

I haven’t seen Peter since his time with Trump. I am not myself a Trump supporter, but wish to reserve judgment until I know more about Peter’s role. I am not in general opposed to the idea of people working with administrations that may have serious flaws.


Question:

In a recent article by you, you spoke about who in the US was experiencing the American Dream, finding evidence that the Dream is still alive and thriving for Hispanics in the U.S. What challenges do you perceive now with the new Administration that might reduce the prospects for this group?

Tyler Cowen:

Breaking up families, general feeling of hostility, possibly damaging the economy of Mexico and relations with them. All bad trends. I am hoping the strong and loving ties across the people themselves will outweigh that. We will see, but on this I am cautiously optimistic.


Question:

Do you think convenience apps like Amazon grocery make us more complacent?

Tyler Cowen:

Anything shipped to your home — worry! Getting out and about is these days underrated. Serendipitous discovery and the like. Confronting the physical spaces we have built, and, eventually, demanding improvements in them.


Question:

Given that universal basic income or similar scheme will become necessity after large scale automation kicks in, will these arguments about fiscal and budgetary crisis still hold true?

And with self driving cars and tech like Hyperloop, wouldn’t the rents in the cities go down?

Tyler Cowen:

Driverless cars are still quite a while away in their most potent form, as that requires redoing the whole infrastructure. But so far I see location only becoming more important, even in light of tech developments, such as the internet, that were supposed to make it less important. It is hard for me to see how a country with so many immigrants will tolerate a UBI. I think that idea is for Denmark and New Zealand, I don’t see it happening in the United States. Plus it can cost a lot too. So the arguments about fiscal crisis I think still hold.


Question:

What is the most underrated city in the US? In the world?

Tyler Cowen:

Los Angeles is my favorite city in the whole world, just love driving around it, seeing the scenery, eating there. I still miss living in the area.


Question:

I am a single guy. Can learning economics help me find a girlfriend?

Tyler Cowen:

No, it will hurt you. Run the other way!•


In the Financial Times interview with Daniel Dennett I recently blogged about, a passage covers a compelling idea hatched by the philosopher and MIT’s Deb Roy in “Our Transparent Future,” a 2015 Scientific American article. The academics argue that the radical transparency now taking hold because of new technological tools, which will only grow more profound as we are lowered even further into a machine with no OFF switch, is akin to the circumstances that may have catalyzed the Cambrian explosion.

In that epoch, an abundance of light shining through a newly transparent atmosphere may have forced organisms to adapt, leading to tremendous growth–and also death. Dennett and Roy believe that society’s traditional institutions (government, marriage, education, etc.) face the same challenge, thanks to the tremendous flow of information now at our fingertips: reinvent themselves or go extinct. Now that privacy is all but impossible, what is the best way to arrange ourselves?

The opening:

More than half a billion years ago a spectacularly creative burst of biological innovation called the Cambrian explosion occurred. In a geologic “instant” of several million years, organisms developed strikingly new body shapes, new organs, and new predation strategies and defenses against them. Evolutionary biologists disagree about what triggered this prodigious wave of novelty, but a particularly compelling hypothesis, advanced by University of Oxford zoologist Andrew Parker, is that light was the trigger. Parker proposes that around 543 million years ago, the chemistry of the shallow oceans and the atmosphere suddenly changed to become much more transparent. At the time, all animal life was confined to the oceans, and as soon as the daylight flooded in, eyesight became the best trick in the sea. As eyes rapidly evolved, so did the behaviors and equipment that responded to them.

Whereas before all perception was proximal—by contact or by sensed differences in chemical concentration or pressure waves—now animals could identify and track things at a distance. Predators could home in on their prey; prey could see the predators coming and take evasive action. Locomotion is a slow and stupid business until you have eyes to guide you, and eyes are useless if you cannot engage in locomotion, so perception and action evolved together in an arms race. This arms race drove much of the basic diversification of the tree of life we have today.

Parker’s hypothesis about the Cambrian explosion provides an excellent parallel for understanding a new, seemingly unrelated phenomenon: the spread of digital technology. Although advances in communications technology have transformed our world many times in the past—the invention of writing signaled the end of prehistory; the printing press sent waves of change through all the major institutions of society—digital technology could have a greater impact than anything that has come before. It will enhance the powers of some individuals and organizations while subverting the powers of others, creating both opportunities and risks that could scarcely have been imagined a generation ago. 

Through social media, the Internet has put global-scale communications tools in the hands of individuals. A wild new frontier has burst open. Services such as YouTube, Facebook, Twitter, Tumblr, Instagram, WhatsApp and SnapChat generate new media on a par with the telephone or television—and the speed with which these media are emerging is truly disruptive. It took decades for engineers to develop and deploy telephone and television networks, so organizations had some time to adapt. Today a social-media service can be developed in weeks, and hundreds of millions of people can be using it within months. This intense pace of innovation gives organizations no time to adapt to one medium before the arrival of the next.

The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before—and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.

The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct.•


A little more on Cambridge Analytica, which Carole Cadwalladr reported on recently in the Guardian. The audience-targeting company is being given significant credit by some for powering the Trump campaign to Electoral College victory. My main hesitation about believing online fake news was a predominant factor in the recent election is that Trump won overwhelmingly with older Americans, who seem to have been more plugged into Fox News than Facebook. It may have played a role, but did it really have a greater impact than, say, a legacy-media company like the New York Times running an unskeptical headline above the fold about the FBI suspiciously reopening the investigation into the Clinton emails? Or Russia hacking the election? It’s hard to untangle what just went on, but looking for a single smoking gun will probably always prove unsatisfactory.

From Nicholas Confessore and Danny Hakim of the New York Times:

Cambridge Analytica’s rise has rattled some of President Trump’s critics and privacy advocates, who warn of a blizzard of high-tech, Facebook-optimized propaganda aimed at the American public, controlled by the people behind the alt-right hub Breitbart News. Cambridge is principally owned by the billionaire Robert Mercer, a Trump backer and investor in Breitbart. Stephen K. Bannon, the former Breitbart chairman who is Mr. Trump’s senior White House counselor, served until last summer as vice president of Cambridge’s board.

But a dozen Republican consultants and former Trump campaign aides, along with current and former Cambridge employees, say the company’s ability to exploit personality profiles — “our secret sauce,” Mr. Nix once called it — is exaggerated.

Cambridge executives now concede that the company never used psychographics in the Trump campaign. The technology — prominently featured in the firm’s sales materials and in media reports that cast Cambridge as a master of the dark campaign arts — remains unproved, according to former employees and Republicans familiar with the firm’s work.

“They’ve got a lot of really smart people,” said Brent Seaborn, managing partner of TargetPoint, a rival business that also provided voter data to the Trump campaign. “But it’s not as easy as it looks to transition from being excellent at one thing and bringing it into politics. I think there’s a big question about whether we think psychographic profiling even works.”•


Late to industrialization, China entered the process knowing what much of the Western world had to learn the hard way in the 1970s: Urbanizing and modernizing an entire nation brings with it tremendous economic growth, but that growth can’t be sustained by the same methods–or perhaps at all–when the mission is complete. It’s a one-time-only bargain.

A richer nation can’t grow endlessly on the production of cheap exports, so the newly minted superpower is pivoting more to domestic demand, a nuance no doubt lost in the Trump Administration’s ham-handed grasp of global politics. In “Trump’s Most Chilling Economic Lie,” a Joseph Stiglitz Vanity Fair “Hive” article, the economist highlights the insanity of America engaging in a trade war with China and expecting to emerge the richer. An excerpt:

Trump’s team may be tempted to conclude, naively, that because China exports so much more to the U.S. than the U.S. exports to China, the loss of a huge export market would hurt them more than it would hurt us. This reasoning is too simplistic by half. China’s government has far more control over the country’s economy than our government has over ours; and it is moving from export dependence to a model of growth driven by domestic demand. Any restriction on exports to the U.S. would simply accelerate a process already underway. Moreover, China’s government has the resources (it’s still sitting on some $3 trillion of reserves) and instruments to help any sector that has been shut out—and in this respect, too, China is better placed than the U.S.

China has already shown how it is likely to respond if Trump should launch a trade war. At Davos, President Xi Jinping came out as the great supporter of globalization and the international rule of law—as well China should. China, with its large emerging middle class, is among the big beneficiaries of globalization. Critics have said that China does not always play fair. They complain that as China has grown, it has taken away some of the privileges, some of the tax preferences, that it gave to foreigners in earlier stages of development. They are unhappy, too, that some Chinese firms have learned quickly how to compete—some of them even appropriating ideas from others, just as we appropriated intellectual property from Europe more than a century ago.

It is worth noting that, although large multinationals complain, they are not leaving. And we tend to forget the extensive restrictions we impose on Chinese firms when they seek to invest in the U.S. or buy high-tech products. Indeed, the Chinese frequently point out that if the U.S. lifted those restrictions, America’s trade deficit with China would be smaller.

China’s first response will be to try to find areas of cooperation. They are experts in construction. They know how to build high-speed trains. They might even provide some financing for these projects. Given Trump’s rhetoric, though, I suspect that such cooperation is just a dream.

If Trump insists on an adversarial stance, China is likely to respond within the framework of international law even if Trump puts little weight on such agreements—and thus is not likely to retaliate in a naive, tit-for-tat way. But China has made it clear that it will respond. And if history is any guide, it will respond both forcefully and intelligently, hitting us where it hurts economically and politically—where, for instance, cutbacks in purchases by China will lead to more unemployment in congressional districts that are vulnerable, influential, or both. If Boeing’s order book is thin, it might, for instance, cancel its purchases of Boeing planes.•

