Science/Tech


It wasn’t the Jazz Singer, but Benito Mussolini agreed to star in a talkie when asked by Fox Movietone News to stand before the company’s motion-picture cameras and address the citizens of the United States. In the 80-second running time, Il Duce used the phrase “make America great.” 

This type of content helped the then-struggling Fox establish, in 1929, a newsreel theater in Times Square, which served as a forerunner to today’s cable outlets.

The Fascist leader, who understood the power of communications like few in his era, would endeavor within a decade of making this short to build his very own Hollywood. Today he would merely need to open his own Twitter account. Progress.

An article in the Brooklyn Daily Eagle reported on the first foreign leader to have a speaking role on film.

Like most people who order assassins into a Malaysian airport to murder their half-brother with nerve agent, Kim Jong-un makes it difficult to examine his motivations with a sober head.

Historian Bruce Cumings attempts to do just that in an article in The Nation, which explains the recent U.S. political bungling that allowed us to arrive at this scary precipice. There was a prime opportunity, not even 20 years ago, for a nuke-free North Korea, but, alas, it was squandered by the Bush Administration. In the intervening period, both sides of the aisle have ignored the meaning of this failure, exacerbating the situation.

Now America’s guided by a deeply ignorant, unbalanced President who’s managed after much effort to finally locate one murderous despot he despises. So it’s game on, but it’s the most dangerous game.

An excerpt:

As I wrote for this magazine in January 2016, the North Koreans must be astonished to discover that US leaders never seem to grasp the import of their history-related provocations. Even more infuriating is Washington’s implacable refusal ever to investigate our 72-year history of conflict with the North; all of our media appear to live in an eternal present, with each new crisis treated as sui generis. Visiting Seoul in March, Secretary of State Rex Tillerson asserted that North Korea has a history of violating one agreement after another; in fact, President Bill Clinton got it to freeze its plutonium production for eight years (1994–2002) and, in October 2000, had indirectly worked out a deal to buy all of its medium- and long-range missiles. Clinton also signed an agreement with Gen. Jo Myong-rok stating that henceforth, neither country would bear “hostile intent” toward the other.

The Bush administration promptly ignored both agreements and set out to destroy the 1994 freeze. Bush’s invasion of Iraq is rightly seen as a world-historical catastrophe, but next in line would be placing North Korea in his “axis of evil” and, in September 2002, announcing his “preemptive” doctrine directed at Iraq and North Korea, among others. The simple fact is that Pyongyang would have no nuclear weapons if Clinton’s agreements had been sustained.

Now comes Donald Trump, blasting into a Beltway milieu where, in recent months, a bipartisan consensus has emerged based on the false assumption that all previous attempts to rein in the North’s nuclear program have failed, so it may be time to use force—to destroy its missiles or topple the regime. …

A bigger lesson awaits Donald Trump, should he attack North Korea. It has the fourth-largest army in the world, as many as 200,000 highly trained special forces, 10,000 artillery pieces in the mountains north of Seoul, mobile missiles that can hit all American military bases in the region (there are hundreds), and nuclear weapons more than twice as powerful as the Hiroshima bomb (according to a new estimate in a highly detailed Times study by David Sanger and William Broad).•


I’ve blogged before about Ross Perot’s McLuhan-ish dream circa 1969: an electronic town hall in which interactive television and computer punch cards would allow the masses, rather than elected officials, to decide key American policies. His technologically friendly version of direct democracy hasn’t made a dent in the decades since, despite quantum leaps in hardware and software, even today when we all potentially hold a voting booth in our pockets. That’s probably for the best.

No, representative democracy did not keep us from Brexit or Trump, but our reality would probably be worse if we turned the vote into The Voice, permitting the populace instant gratification (without much consideration) in choosing our path forward. 

In his provocative post “How Trump and Bannon Could Automate Populism,” John Robb argues for direct democracy at the party level if not the national one, believing immediate interactions between the electorate and representatives will serve as a salve. I’m not so sure. For instance, the GOP is already fully aware that its bloc doesn’t want Obamacare repealed, yet it hasn’t been that knowledge but rather dysfunction that’s so far prevented the tearing of that social safety net. It may be that our system is too corrupted at present for apps to make much of a difference. There are many critical questions about our politics, but I don’t know that technology is the correct answer to any of them.

Robb’s opening:

We live in a world where we can get nearly everything instantly.  

Instant information.  Instant entertainment.  Instant communications.  Instant transactions.    

Simply and rightly, we have come to expect our decisions to yield instant results from the systems that serve us.  

Well, that’s true for every system except our political system.    

We’re only allowed to interact with our political system, in a meaningful way, once every two years, and only then by filling out a multiple-choice quiz in an election booth.  

That’s akin to an Internet that’s only available for a couple of hours every two years at 1,200 baud.   

It’s crazy in this day and age.  Worse, there’s increasing evidence it is driving us crazy.   We are filling the time in between these electoral events with around the clock political warfare.  A ceaseless drumbeat of outrage and conspiracy, amplified by the online echo chambers we spend our time in.

Fortunately, I don’t believe this disconnect will last long.   A form of direct democracy is coming.  One that lets people directly influence the decisions of the people they send to Washington.

A form of interactive democracy that doesn’t require any changes to the constitution since it works at the party level and not the national.  

When it does, it’s going to hit us fast, taking off like wildfire since it fulfills a fundamental need that the current system does not provide.•


Behavioral science, which I just mentioned, is usually sold as a modern means of guiding us to healthier decisions about food and finances, among other areas, nudging us to do right rather than forcing us to. It’s billed as being avuncular rather than autocratic, paternalistic instead of despotic. 

Even if that’s so, the field’s application is still often fairly creepy, marked by manipulation. Its real noble contribution would be to teach us about the biases we unwittingly possess and the flaws in our thought processes, so we could analyze them and overcome these failings in time through the development of better critical thinking. Perhaps we’re only in the Proterozoic period of the discipline, and that’s what the branch actually contributes in the long run. 

Until that more enlightened age, capitalism almost demands that abuses of the subject will be employed by enough players hoping to pad their bank accounts through “priming” and other predatory practices. Even if the efficacy of these methods is overstated, there’s still plenty of money to be made on the margins, prodding the more prone among us to purchase or politick in a particular way.

In a wonderfully thought-provoking New York Review of Books piece about Michael Lewis’ book The Undoing Project: A Friendship That Changed Our Minds, philosopher Tamsin Shaw argues convincingly that the “pressures to exploit irrationalities rather than eliminate them are great.” An excerpt: 

In 2007, and again in 2008, Kahneman gave a masterclass in “Thinking About Thinking” to, among others, Jeff Bezos (the founder of Amazon), Larry Page (Google), Sergey Brin (Google), Nathan Myhrvold (Microsoft), Sean Parker (Facebook), Elon Musk (SpaceX, Tesla), Evan Williams (Twitter), and Jimmy Wales (Wikipedia). At the 2008 meeting, Richard Thaler also spoke about nudges, and in the clips we can view online he describes choice architectures that guide people toward specific behaviors but that can be reversed with one click if the subject doesn’t like the outcome. In Kahneman’s talk, however, he tells his assembled audience of Silicon Valley entrepreneurs that “priming”—picking a suitable atmosphere—is one of the most important areas of psychological research, a technique that involves offering people cues unconsciously (for instance flashing smiley faces on a screen at a speed that makes them undetectable) in order to influence their mood and behavior. He insists that there are predictable and coherent associations that can be exploited by this sort of priming. If subjects are unaware of this unconscious influence, the freedom to resist it begins to look more theoretical than real.

The Silicon Valley executives clearly saw the commercial potential in these behavioral techniques, since they have now become integral to that sector. When Thaler and Sunstein last updated their nudges.org website in 2011, it contained an interview with John Kenny, of the Institute of Decision Making, in which he says:

You can’t understand the success of digital platforms like Amazon, Facebook, Farmville, Nike Plus, and Groupon if you don’t understand behavioral economic principles…. Behavioral economics will increasingly be providing the behavioral insight that drives digital strategy.

And Jeff Bezos of Amazon, in a letter to shareholders in April 2015, declared that Amazon sellers have a significant business advantage because “through our Selling Coach program, we generate a steady stream of automated machine-learned ‘nudges’ (more than 70 million in a typical week).” It is hard to imagine that these 70 million nudges leave Amazon customers with the full freedom to reverse, after conscious reflection, the direction in which they are being nudged.

Facebook, too, has embraced the behavioral insights described by Kahneman and Thaler, having received wide and unwanted publicity for researching priming. In 2012 its Core Data Science Team, along with researchers at Cornell University and the University of California at San Francisco, experimented with emotional priming on Facebook, without the awareness of the approximately 700,000 users involved, to see whether manipulation of their news feeds would affect the positivity or negativity of their own posts. When this came to light in 2014 it was generally seen as an unacceptable form of psychological manipulation. But Facebook defended the research on the grounds that its users’ consent to their terms of service was sufficient to imply consent to such experiments.•


A fascinating article by the New York Times Technology section details how Uber and other Gig Economy giants are employing behavioral science to subtly manipulate their workers into acting in the best interests of the companies. As the piece says: “Most of this happens without giving off a whiff of coercion.”

Businesses have forever tried to nudge consumers into buying their products, whether through legitimate means or the unethical kind (e.g., subliminal advertising), but using Digital Age tools to stealthily treat employees like lab rats is an altogether different thing. The “freedom” promised to contractors who toil in the piecemeal workforce isn’t quite so free, and there are broader implications for the future.

An excerpt:

Even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

Uber’s innovations reflect the changing ways companies are managing workers amid the rise of the freelance-based “gig economy.” Its drivers are officially independent business owners rather than traditional employees with set schedules. This allows Uber to minimize labor costs, but means it cannot compel drivers to show up at a specific place and time. And this lack of control can wreak havoc on a service whose goal is to seamlessly transport passengers whenever and wherever they want.

Uber helps solve this fundamental problem by using psychological inducements and other techniques unearthed by social science to influence when, where and how long drivers work. It’s a quest for a perfectly efficient system: a balance between rider demand and driver supply at the lowest cost to passengers and the company.

Employing hundreds of social scientists and data scientists, Uber has experimented with video game techniques, graphics and noncash rewards of little value that can prod drivers into working longer and harder — and sometimes at hours and locations that are less lucrative for them.•

The Quartz “Daily Brief” newsletter referred me to “Cars and Second Order Consequences,” a smart Benedict Evans post that tries to anticipate changes beyond the obvious that will be wrought by EVs and driverless. There’s plenty of good stuff on the fate of gas stations, mass transportation and city living when on-demand rides become the new normal.

What really caught my eye, though, was the final idea in the piece, in which Evans imagines how these rolling computers with unblinking vision will change policing. He focuses only on how it will be a boon for law enforcement, but this non-stop surveillance, a totalitarian dream, can easily be abused by governments, corporations and hackers. Let’s recall that a panopticon is a prison building designed to allow all inmates to be observed at all times. There’s no opting out.

An excerpt:

Finally, remember the cameras. Pretty much every vision of automatic cars involves them using HD, 360 degree computer vision. That means that every AV will be watching everything that goes on around it – even the things that are not related to driving. An autonomous car is a moving panopticon. They might not be saving and uploading every part of that data. But they could be. 

By implication, in 2030 or so, police investigating a crime won’t just get copies of the CCTV from surrounding properties, but get copies of the sensor data from every car that happened to be passing, and then run facial recognition scans against known offenders. Or, perhaps, just ask if any car in the area thought it saw something suspicious.•


In the early 1930s, a shadowy figure named Howard Scott suddenly became a sensation in media and political circles when he announced American society would collapse within 18 months. He wasn’t a theologian but a technocrat, and he warned that machine labor was poised to bring about universal unemployment. In the dark and desperate early days of the Great Depression, his secular sermon, colored by totalitarian overtones, found a wide and receptive audience.

While Scott’s credentials as a master engineer were greatly exaggerated, he didn’t allow a lack of paperwork to restrain his ambitions, arguing that he and a team of technocrats should run a new North American superstate, using facts and figures and numbers and math to do the job that politicians had traditionally handled. The result, it was promised, would be a radical abundance. In California alone, the movement soon boasted over a million members who wore gray suits, drove gray cars and “replaced their names with numbers, such as ‘1x1809x56.'”

America somehow crept from the Dust Bowl in one piece and Scott was more or less defrocked, but his ideas, an odd mixture of populism and anti-government impulses, still resound today, from the campaign trail to Silicon Valley, for better or worse.

An article in the January 1, 1933 Brooklyn Daily Eagle endeavored to unmask Scott.


The excellent New York Times “Personal Tech” columnist Farhad Manjoo decided to wire his living and dining rooms with (almost) 24/7 surveillance to capture special family moments and every other moment, and somehow his wife didn’t defenestrate him as if he were a Russian businessman who’d thought bad thoughts about Vladimir Putin or Comrade Trump. 

The technology enabled him to accurately record beautiful scenes of life which would have otherwise become imperfect memories or perhaps been completely forgotten in time. The behavior is extreme and odd, at least for a little while longer. 

It wasn’t 20 years ago that Silicon Valley sadist Josh Harris created a Warholian police state when he ushered 100 volunteers into a Manhattan surveillance bunker full of free food and firearms for his extreme art project “We Live in Public.” Every formerly private moment was captured on film, until the NYPD shut down an increasingly ugly scene. 

The decadent and abusive nature of the undertaking, a cross between the Truman Show and the Stanford Prison Experiment, marked it as outré and unacceptable, but that doesn’t mean nonstop surveillance is to be rejected if the practices are more mundane, even painted as something of a friendly, value-added service.

In a recent TechCrunch piece, it was reported the Wynn Las Vegas hotel was “adding an Amazon Echo to every one of its 4,748 rooms,” meaning that a “built-in surveillance device [will] potentially listen in on all [visitor] conversations.” Ostensibly done to help anticipate customers’ every desire–and sell them products to satisfy those cravings–the creation of such material can obviously be repurposed for unsavory means, sold or stolen.

When Louis Daguerre, in 1839, first mastered his method of photography, it’s likely that along with a sense of wonder people felt a sense of dread. Would this new tool change the nature of our memory, change the nature of us? His invention actually enhanced life, and perhaps today’s inventions will as well, but anything without an OFF switch–and that’s where we’re headed–should be approached with caution.

From Manjoo:

Question:

What new tech product are you currently obsessed with using at home? What do you and your family do with it?

Farhad Manjoo:

This is going to sound weird, but I’m a strange person. I have two kids, ages 6 and 4, and for the last few years I’ve been mourning their loss of childhood. Every day they get a little bit older, and even though my wife and I take lots of photos and videos of them, I can’t shake the feeling that we’re losing most of the moments of their lives.

So last summer, after some intense lobbying of my wife, I did something radical: I installed several cameras in my living room and dining room to record everything we did at home for posterity. In other words, I created a reality show in my house.

In practice, it works like this: The cameras are motion-activated and connected to servers in the cloud. Like security cameras in a convenience store, they are set to record on a constant loop — every video clip is saved for a few days, after which it’s automatically deleted, unless I flag it for long-term keeping.

Yes, this system sets up a minefield of potential problems. We turn off the cameras when we have guests (it’s unethical and, depending on where you live, possibly even illegal to record people without their consent) and we don’t spy on each other. There are also security concerns. I’m not going to disclose the brand of the cameras I used because I don’t want to get hacked. The safety of internet-of-things devices is generally not airtight.

And yet I’ve found these cameras to be just wonderful at capturing the odd, beautiful, surprising, charming moments of life that we would never have been able to capture otherwise. Every time the kids say something hilarious or sweet, or do something for the first time, I make a note of the time and date. Later on, I can go and download that exact clip, to keep forever. I’ve already got amazing videos of weeknight dinners, of my wife and I watching the news on election night, of my son learning to play Super Mario Brothers, and my kids having a dance party to their favorite music.

When I’m 80 and the robots have taken over, I’ll look back on these and remember that life was good, once.•


Peter Thiel wants to live forever, but the odds are against him. I mean, if you were absolutely sure there were WMDs in Iraq and that Donald Trump would be a great President, you might die right this minute of embarrassment, right? The billionaire investor is safe in this regard, however, since he possesses no shame or self-awareness. Still, he could get cancer or something, so like a lot of deep pockets in Silicon Valley, he’s pouring money into the pursuit of eternal life. 

From monkey glands to blood transfusions to all manners of elixirs, complete quacks have been selling forever, well, forever. Silicon Valley of the new millennium sees itself as a more serious player in the field, and its CEOs and VIPs have purchased instant credibility, collecting those with advanced degrees and impressive credentials. Still, something feels a little crooked about it all despite the sincerity.

The extravagant overpromising certainly doesn’t help. Gerontologist Aubrey de Grey is as much a true believer as anyone, his intense quest for immortality so fervent it leads him to sometimes make proclamations far too bold: In 2004, the scientist said, “The first person to live to 1,000 might be 60 already.” That line has not aged well.

The Immortality Industrial Complex will not ultimately make us live eternally, but there will likely be benefits to the research produced by the sector. We might get more bang for the buck if these folks focused on incremental improvements rather than moonshots, sure, but the fortunes funding the search for a “permanent cure” wouldn’t be available were it not for the lure of eternity to entice those with bottomless stock options. The blessing is mixed.

In his New Yorker article “Silicon Valley’s Quest to Live Forever,” Tad Friend approaches the immortalists working under this anti-death directive with his usual mixture of skepticism, sly humor and insight. His reporting suggests that even the most bleeding-edge labs, endeavoring to perfect computer-designed drugs and gene therapies, are still operating largely in the dark. “Super muscularity, ultra-endurance, super radiation-resistance” aren’t theoretically impossible, but we’ll likely have to wait a good, long while for such a biotech revolution.

An excerpt:

For those frustrated by the stately progress of research up the animal chain, from worms to flies to mice to dogs to monkeys, speculative treatments abound. In Monterey, California, a clinic will give you young plasma for eight thousand dollars a pop—but you have no idea what it’s doing to you. Peter Nygård, a leonine seventy-five-year-old Finnish-Canadian clothing designer who got rich making women look slim in modestly priced pants, has had injections with stem cells derived from his DNA. He believes that the process has reversed his aging. In an interview a few years ago, he proclaimed, “I’m the only guy in the world today who has me, in a petri dish, before I was born.”

While [microbiologist Brian] Hanley has a tinkerer’s mentality—there’s a hyperbaric chamber stuffed behind his couch—he’s a dedicated researcher. Since the F.D.A. requires an authorization for any new tests on humans, he began trying therapies on himself. He’d read the literature on self-experimentation, and tallied the results: eight deaths (including that of the blood-transfusing Alexander Bogdanov), and ten Nobel Prizes. Coin toss.

Hanley acknowledged that his research had a few basic problems as a template for reshaping life spans. First, a sample size of one; second, a therapeutic method whose results may not last; third, a gene whose effects seem to be regenerative rather than transformative. In order to comprehensively reprogram ourselves, we’d want to insert corrective genes into a virus that would disperse them throughout the body, but doing so could alarm the immune system.

The advent of CRISPR, a gene-editing tool, has given researchers confidence that we’re on the verge of the gene-therapy era. George Church and his Harvard postdocs have culled forty-five promising gene variants, not only from “super centenarians”—humans who’ve lived to a hundred and ten—but also from yeast, worms, flies, and long-lived animals. Yet Church noted that even identifying longevity genes is immensely difficult: “The problem is that the bowhead whale or the capuchin monkey or the naked mole rat, species that live a lot longer than their close relatives, aren’t that close, genetically, to those relatives—a distance of tens of millions of genetic base pairs.” The molecular geneticist Jan Vijg said, “You can’t just copy a single mechanism from the tortoise,” which can live nearly two hundred years. “We’d have to turn our genome over to the tortoise—and then we’d be a tortoise.”

Becoming part tortoise wouldn’t necessarily alarm Brian Hanley. If we can only find the right genes and make their viral transmission safe, he declared, “we can enable human transformations that would rival Marvel Comics. Super muscularity, ultra-endurance, super radiation-resistance. You could have people living on the moons of Jupiter who’d be modified in this way, and they could physically harvest energy from the gamma rays they were exposed to.”


When debating whether we’re on the verge of a revolution in automation that might displace too many workers in too brief a time, those sanguine on the topic invariably introduce bank tellers into the argument as proof that machines which would appear to kill jobs actually create more of them. The “Automation Paradox” it’s called.

There are two major problems with this theory which I’ll get to after an excerpt from James Bessen’s 2016 Atlantic article on the topic:

Robot panic is in full swing.

But these fears are misplaced—what’s happening with automation is not so simple or obvious. It turns out that workers will have greater employment opportunities if their occupation undergoes some degree of computer automation. As long as they can learn to use the new tools, automation will be their friend.

Take the legal industry as an example. Computers are taking over some of the work of lawyers and paralegals, and they’re doing a better job of it. For over a decade, computers have been used to sort through corporate documents to find those that are relevant to lawsuits. This process—called “discovery” in the profession—can run up millions of dollars in legal bills, but electronic methods can erase the vast majority of those costs. Moreover, the computers are often more accurate than humans: In one study, software correctly found 95 percent of the relevant documents, while humans identified only 51 percent.

But, perhaps surprisingly, electronic discovery software has not thrown paralegals and lawyers into unemployment lines. In fact, employment for paralegals and lawyers has grown robustly. While electronic discovery software has become a billion-dollar business since the late 1990s, jobs for paralegals and legal-support workers actually grew faster than the labor force as a whole, adding over 50,000 jobs since 2000, according to data from the U.S. Census Bureau. The number of lawyers increased by a quarter of a million.

Something similar happened when ATMs automated the tasks of bank tellers and when barcode scanners automated the work of cashiers: Rather than contributing to unemployment, the number of workers in these occupations grew. 

Okay, the two problems: 1) Bank tellers handle many functions beyond just dispensing money, so ATM technology has been more an add-on convenience than a replacement. As AI improves and makes smart machines more flexible, they’ll nudge aside their human counterparts. 2) Just because a class of worker isn’t immediately elbowed aside by robotics doesn’t mean there’s a permanent detente. Emergent automobiles shared the roads with horses for decades before the animals were driven away. We’ll be employed to work alongside robots, in tandem with them, until we’re no longer employed that way. That day will come for almost all positions; it’s just a matter of how quickly.

A Reuters piece by Jemima Kelly suggests that reckoning will arrive in a handful of years for bank tellers and customer-service people. Give or take, that’s probably so. An excerpt:

LONDON (Reuters) – Artificial intelligence (AI) will become the primary way banks interact with their customers within the next three years, according to three quarters of bankers surveyed by consultancy Accenture in a new report.

Four in five bankers believe AI will “revolutionise” the way in which banks gather information as well as how they interact with their clients, said the Accenture Banking Technology Vision 2017 report, which surveyed more than 600 top bankers and also consulted tech industry experts and academics.

Artificial intelligence — the technology behind driverless cars, drones and voice-recognition software — is seen by the financial world as a key technology which, along with other “fintech” innovations such as blockchain, will change the face of banking in the coming years.

More than three quarters of respondents to the survey believed that AI would enable more simple user interfaces, which would help banks create a more human-like customer experience.

“The big paradox here is that people think technology will lead to banking becoming more and more automated and less and less personalized, but what we’ve seen coming through here is the view that technology will actually help banking become a lot more personalized,” said Alan McIntyre, head of the Accenture’s banking practice and co-author of the report.

“(It) will give people the impression that the bank knows them a lot better, and in many ways it will take banking back to the feeling that people had when there were more human interactions.”•


Megan McArdle seems like a basically decent person, but she’s spent much time railing against the Affordable Care Act, which has helped my family and friends immeasurably. She has the privilege of worrying about “innovation” when others are fixated on that not-dying thing. Must be nice.

The Libertarian columnist recently went looking for the American Dream in Utah, a state that’s done a commendable job in combating homelessness and other social ills, though it must be noted that it’s whiter and more patriarchal than a Freedom Caucus meeting about maternity leave.

The role of the Mormon Church is clearly paramount in enabling a higher-than-usual upward mobility for the impoverished, and that aspect is clearly not replicable in other quarters of the country unless a large number of Midwesterners who’ve taken Broadway vacations to catch The Book of Mormon have had an epiphany. 

Worse yet, a scary number of Christians seem to have turned away from their charitable roots, not at all asking, “What would Jesus do?” In the recent Presidential election, Christianity was often a euphemism for white supremacy. Maybe that’s because many who identify with the faith have stopped attending church or perhaps the American strain of the religion is so embedded with prejudice that it’s incompatible with true equality.

Christian politicians are often even worse when it comes to tending to the poor, pushing punishing policies trained on hurting those who have the least, creating a prison state and denying minorities voting rights. They simply don’t want poorer citizens, especially non-white ones, to thrive, and there’s no moral equivalency in this regard between conservatives and liberals. For many, power trumps church teachings: Mike Pence was very eager to strike a deal with the devil, while Mike Huckabee has gleefully defended Trump’s incessant outrages.

There’s good stuff from McArdle about Utah’s social services programs, the role of volunteerism and the promotion of self-reliance, but she comes away only moderately hopeful that the Salt Lake miracle can be duplicated elsewhere in the U.S. Of course, if you’re a Libertarian who doesn’t really like government very much, there’s no other conclusion to be drawn. If Obamacare really helped your loved ones, however, you might feel differently.

An excerpt:

“Big government” does not appear to have been key to Utah’s income mobility. From 1977 to 2005, when the kids in Chetty et al’s data were growing up, the Rockefeller Institute ranks it near the bottom in state “fiscal capacity.” The state has not invested a lot in fighting poverty, nor on schools; Utah is dead last in per-pupil education spending. This should at least give pause to those who view educational programs as the natural path to economic mobility.

But “laissez faire” isn’t the answer either. Utah is a deep red state, but its conservatism is notably compassionate, thanks in part to the Mormon Church. Its politicians, like Senator Mike Lee, led the way in rejecting Donald Trump’s bid for the presidency. And the state is currently engaged in a major initiative on intergenerational poverty. The bill that kicked it off passed the state’s Republican legislature unanimously, and the lieutenant governor has been its public face.

This follows what you might call the state’s “war on homelessness” — a war that has been largely victorious, with most of the state’s homeless resettled in permanent housing through a focus on “Housing First.” That means getting people into permanent shelter before trying to diagnose and address the problems that contributed to their homelessness, like mental illness and substance abuse.

This approach can be cheaper than the previous regime, in which too many individuals ended up in emergency rooms or temporary shelter seeking expensive help for urgent crises. But Housing First runs into fierce emotional resistance in many quarters, because it smacks too much of rewarding people for self-destructive behaviors. Utah’s brand of conservatism overcame that, in part because the Mormon Church supported it.

That’s the thing about the government here. It is not big, but it’s also not … bad.•

Tags:

The narrative of the recent election is that Trump won over “forgotten Americans,” though Hillary Clinton received the most votes from households making under $50k. The MAGA voters who were fetishized in the Election Day post-mortem were white, and somehow their struggles were awarded greater currency than those of people who had less. Part of that is because they tipped a vital election by being located in key swing states, which gave them outsized political capital, but the truth is their skin color fit into the noxious demagoguery of the campaign season.

I’ve published a couple of posts about the new Case-Deaton paper about morbidity and mortality, which tries to divine the reason for middle-aged Caucasians enduring a “great die-off.” The report has not yet been peer-reviewed, and in Pacific Standard, Mark Harris pushes back at the findings, arguing the research is marked by suspect methodology (above my head) but also that it misleadingly fixates on white Americans, who still enjoy healthier and wealthier lives across the board than, say, African-Americans. The latter group has a significantly shorter lifespan than their white counterparts.

If the trend lines truly show one race making progress and another faltering, even if the declining group is richer, it’s certainly valuable to report as much so that we can attempt to stem a serious problem. The danger, however, is that attention will be pulled from those who need it most because of a compelling story line. 

From Harris:

Dubious methodology aside, there is still some useful information in the Case and Deaton report. America does seem to have a serious problem ensuring longevity for its population as compared to its peer nations. But, though the international perspective is the strongest part in their paper, it’s not what the researchers or the newspapers led with. Why put the statistical alchemy in front? Why is the story more dramatic or attractive when it’s about white people?

Mistakes and missteps also propel social science forward, as the Olshansky paper did. Still, Case and Deaton didn’t publish their findings in a peer-reviewed public-health journal, at least not first. Brookings is a center of political influence in Washington, and I have no doubt that Capitol Hill staffers have already written up their briefs on the report and passed them to their bosses — that is, if they work half as fast as Internet journalists do.

By the time it makes its way to the top of the policymaker food chain, how will this report be understood? I’d wager it’s something like the Brookings blog headline: “Working Class White Americans Are Now Dying in Middle Age at Faster Rates Than Minority Groups.” I asked [Arline] Geronimus if that was, to her understanding, a true statement: “I think that’s misleading, I really do. Oh boy,” she laughs, “there’s so much wrong with that. That headline makes it sound like problems are worse for white Americans than black Americans.” The narrative is wrong, but it’s not the first time Geronimus has heard it since the election. The Case and Deaton paper, she says, fits conveniently in this story, and it’s one she fears Americans are primed to believe.•

Tags: , , ,

In a recent post, I commented on new Treasury Secretary Steven Mnuchin’s puzzling contention that AI replacing human workers is “not even on our radar screen.” It makes me think his radar screen is not plugged in. Maybe a robot could do it for him?

In a Financial Times opinion piece, Lawrence Summers, who previously held the same White House post, isn’t convinced that smart machines will lead to a large-scale job loss, but he is sure that a big technological switch is under way.

Cora Lewis penned a troubling BuzzFeed article about the impact of autonomous machines on employment, asserting that, based on fresh research, roughly six human workers disappear every time a new robot is put to use in a factory. If true, that still doesn’t mean Summers is definitely wrong in believing the Second Machine Age transition may not lead to a net job loss, but it would require lots of new positions to be created. What happens if they’re not?

Two excerpts follow.


From Summers:

In reference to a question about artificial intelligence displacing American workers, Secretary Mnuchin responded that: “I think that is so far in the future — in terms of artificial intelligence taking over American jobs — I think we’re like so far away from that (50 to 100 years) that it is not even on my radar screen”. He also remarked that he did not understand tech company valuations in a way that implied that he regarded them as excessive. I suppose there is a certain internal logic. If you think AI is not going to have any meaningful economic effects for a half century then I guess you should think that tech companies are overvalued. But neither statement is defensible.

Mr Mnuchin’s comment about the lack of impact of technology on jobs is to economics about what global climate change denial is to atmospheric science or what creationism is to biology. Yes, you can debate whether technological change is to the net good. I certainly believe it is. And you can debate what the job creation effects will be relative to the job destruction effects. I think this is much less clear given the trends downwards in adult employment especially for men over the last generation.

But I do not understand how anyone could reach the conclusion that all the action with technology is half a century away. AI is behind autonomous vehicles which will affect millions of jobs driving and dealing with cars within the next 15 years even on conservative projections. It is transforming everything from retailing to banking to the provision of medical care. Almost every economist who has studied the question believes that technology has had a greater impact on the wage structure and on employment than international trade and certainly a far greater impact than whatever increment to trade is the result of much debated trade agreements.•


From Lewis:

Every new robot added to an American factory in recent decades reduced employment in the surrounding area by 6.2 workers, according to a new study released by the National Bureau of Economic Research.

Researchers worked to separate the impact of robots from other big-picture economic trends that hit the US workforce in the same period, like imports from China and Mexico, computer software replacing office work, and offshoring. With all that taken into account, they estimated that for every one robot per thousand workers in a given area of the country, the employment rate went down by .2-.3 percentage points, and wages fell by between .25 and .5 percent.

“We see negative effects of robots on essentially all occupations, with the exception of managers,” wrote economists Daron Acemoglu of MIT and Pascual Restrepo of Boston University in the study. “Predictably, the major categories experiencing substantial declines are routine manual occupations, blue-collar workers, operators and assembly workers, and machinists and transport workers.”•

Tags: ,

Gary Silverman of the Financial Times penned a great piece in February about Alabama encountering the false promise of a manufacturing revival, with the jobs, uncoupled from union protections and divorced from good policy, often proving dangerous, contracted and low-paying.

These are the scraps really being offered with Trump’s vow to return America to factory-town glory. Individuals face fierce competition for substandard positions that dwindle as automation progresses, while poorer states offer such aggressive incentives to attract plants that their tax bases aren’t much enhanced in the bargain.

In Bloomberg Businessweek, Peter Waldman treads on the same territory with “Inside Alabama’s Auto Jobs Boom,” which makes it clear the “New Detroits” dotting the Southern landscape aren’t much like the classic model. An excerpt:

Alabama has been trying on the nickname “New Detroit.” Its burgeoning auto parts industry employs 26,000 workers, who last year earned $1.3 billion in wages. Georgia and Mississippi have similar, though smaller, auto parts sectors. This factory growth, after the long, painful demise of the region’s textile industry, would seem to be just the kind of manufacturing renaissance President Donald Trump and his supporters are looking for.

Except that it also epitomizes the global economy’s race to the bottom. Parts suppliers in the American South compete for low-margin orders against suppliers in Mexico and Asia. They promise delivery schedules they can’t possibly meet and face ruinous penalties if they fall short. Employees work ungodly hours, six or seven days a week, for months on end. Pay is low, turnover is high, training is scant, and safety is an afterthought, usually after someone is badly hurt. Many of the same woes that typify work conditions at contract manufacturers across Asia now bedevil parts plants in the South.

“The supply chain isn’t going just to Bangladesh. It’s going to Alabama and Georgia,” says David Michaels, who ran OSHA for the last seven years of the Obama administration. Safety at the Southern car factories themselves is generally good, he says. The situation is much worse at parts suppliers, where workers earn about 70¢ for every dollar earned by auto parts workers in Michigan, according to the Bureau of Labor Statistics. (Many plants in the North are unionized; only a few are in the South.)

Cordney Crutcher has known both environments. In 2013 he lost his left pinkie while operating a metal press at Matsu Alabama, a parts maker in Huntsville owned by Matcor-Matsu Group Inc. of Brampton, Ont. Crutcher was leaving work for the day when a supervisor summoned him to replace a slower worker on the line, because the plant had fallen 40 parts behind schedule for a shipment to Honda Motor Co. He’d already worked 12 hours, Crutcher says, and wanted to go home, “but he said they really needed me.” He was put on a press that had been acting up all day. It worked fine until he was 10 parts away from finishing, and then a cast-iron hole puncher failed to deploy. Crutcher didn’t realize it. Suddenly the puncher fired and snapped on his finger. “I saw my meat sticking out of the bottom of my glove,” he says.

Now Crutcher, 42, commutes an hour to the General Motors Co. assembly plant in Spring Hill, Tenn., where he’s a member of United Auto Workers. “They teach you the right way,” he says. “They don’t throw you to the wolves.” His pay rose from $12 an hour at Matsu to $18.21 at GM.•

Tags:

If we don’t kill ourselves first, and we probably will, the Posthuman Industrial Complex will ultimately become a going concern. I can’t say I’m sorry I’ll miss out on it.

Certainly establishing human colonies in space will change life, or perhaps we’ll change life as a precursor to settling the final frontier. From Freeman Dyson:

Sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth. I give the name Noah’s Ark culture to this style of space operation. A Noah’s Ark spacecraft is an object about the size and weight of an ostrich egg, containing living seeds with the genetic instructions for growing millions of species of microbes and plants and animals, including males and females of sexual species, adapted to live together and support one another in an alien environment.

There are also computational scientists among the techno-progressivists who are endeavoring, with the financial aid of their deep-pocketed Silicon Valley investors, to radically alter life down here, believing biology itself a design flaw. To such people, there are many questions and technology is the default answer.

In an excellent excerpt in the Guardian, To Be a Machine author Mark O’Connell explores the Transhumanist trend and its “profound metaphysical weirdness,” profiling the figures forging ahead with reverse brain engineering, neuroprostheses and emulations, who wish to reduce human beings to data. The opening:

Here’s what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider’s legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You’re in pretty deep with this thing; there’s no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer’s hardware. As the work proceeds, another mechanical appendage – less delicate, less careful – removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe – with sadness, or horror, or detached curiosity – the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.•

Tags:

Promises during the campaign season about reshoring manufacturing jobs were perplexing and counterproductive. Most of that work has disappeared not to China and Mexico but into the zeros and ones. Artificial Intelligence is poised to further radically transform the labor landscape in the coming decades, whether or not the Frey-Osborne benchmark predicting that 47% of current jobs are at risk turns out to be prophetic.

The honest argument, whether correct or not, against the prevailing idea that AI will disrupt society by replacing us at the office and factory is that these positions will be supplanted by superior ones, as was the case when we transitioned from an agrarian culture to the Industrial Age. Even those who are certain of this outcome often fail to recall what a bumpy progression that was, with legislation, unionization and the establishment of social safety nets required to avoid bloody revolution or collapse. It wasn’t easy, literal blood was spilled, and that’s the glass-half-full option in the Second Machine Age.

Treasury Secretary Steven Mnuchin, who strapped on beer goggles of a 1930s vintage by declaring today that Donald Trump has “perfect genes,” is either wildly dishonest or completely oblivious when he says AI is not a threat to today’s workers.

From Gillian B. White at the Atlantic:

On Friday, during a conversation with Mike Allen of Axios, the newly minted Treasury Secretary Steven Mnuchin said that there was no need to worry about artificial intelligence taking over U.S. jobs anytime soon. “It’s not even on our radar screen,” he told Allen. When pressed for when, exactly, he thought concern might be warranted, Mnuchin offered “50 to 100 more years.” Just about anyone who works on, or studies machine learning would beg to differ.

In December of 2016, about one month before President Trump officially took office, the White House released a report on artificial intelligence and its impact on the economy. It found that advances in machine learning already had the potential to disrupt some sectors of the labor market, and that capabilities such as driverless cars and some household maintenance tasks were likely to cause further disruptions in the near future. Experts asked to weigh in on the report estimated that in the next 10 to 20 years, 47 percent of U.S. jobs could in some way be at risk due to advances in automation.

The Obama administration is certainly not the only group of experts to believe that the impact of machine learning on the labor market has already started. In a conversation earlier this month, Melinda Gates cited rapidly advancing machine learning as part of the reason that the tech industry needed to tackle its gender diversity initiatives immediately. In 2016, a report from McKinsey found that existing technologies could automate about 45 percent of the activities that humans are paid to perform. Even Mnuchin’s former employer, Goldman Sachs, believes that a massive leap forward in terms of machine learning will occur within the next decade.•

Tags: ,

Channeling Nicholas Carr’s comments on the recent Mark Zuckerberg “Building Global Community” manifesto, I will say this: The answer to all technologically enabled human problems is not more technology. Sometimes the system itself is the bug, the fatal error.

In a Financial Times piece, Yuval Noah Harari is more hopeful on the Facebook founder’s globalization gambit, not thinking his intentions grandiose but believing them largely praiseworthy if decidedly vague. The historian does caution that social-media companies would need to alter their focus, perhaps sacrifice financially, to actually foster healthy, large-scale societies, a shift that seems fanciful. 

Harari thinks we’d likely be safer and more prosperous as a world community, which isn’t a sure thing, but even if it were, many forces are working against transitioning humans into a “global brand.” If Harari is correct, Facebook’s place in that scheme would likely be minute–or perhaps it would serve as an impediment despite Zuckerberg’s designs.

Regardless of where you stand on these issues, Harari’s writing is, as always, dense with thought-provoking ideas and enlivened by examples plucked from centuries past. One example, about the downside of residing in cyberspace rather than in actual space: “Humans lived for millions of years without religions and without nations — they can probably live happily without them in the 21st century, too. Yet they cannot live happily if they are disconnected from their bodies. If you don’t feel at home in your body, you will never feel at home in the world.”

The opening:

Mark Zuckerberg last month published an audacious manifesto on the need to build a global community, and on Facebook’s role in that project. His 5,700-word letter — on his Facebook page — was intended not just to allay concerns over social media’s role in spreading “fake news”. It also indicated that Facebook is no longer merely a business, or even a platform. It is on its way to becoming a worldwide ideological movement.

Of course words are cheaper than actions. To implement his manifesto, Zuckerberg might have to jump headlong into a political minefield, and even change his company’s entire business model. You can hardly lead a global community when you make your money from capturing people’s attention and selling it to advertisers. Despite this, his willingness to even formulate a political vision deserves praise.

Most corporations are faithful to the neoliberal dogma that says corporations should focus on making money, governments should do as little as possible, and humankind should trust market forces to take the really important decisions on our behalf. Tech giants such as Facebook have extra reason to distance themselves from any paternalistic political agenda and to present themselves as a transparent medium. With their immense power and hoard of personal data, they have been extremely wary of saying anything that might cause them to look even more like Big Brother.

There are certainly good reasons to fear Big Brother. In the 21st century, Big Data algorithms could be used to manipulate people in unprecedented ways. Take future election races, for example: in the 2020 race, Facebook could theoretically determine not only who are the 32,578 swing voters in Pennsylvania, but also what you need to tell each of them in order to swing them in your favour. But there is also much to fear from abdicating all responsibility to market forces. The market has proven itself woefully inadequate in confronting climate change and global inequality, and is even less likely to self-regulate the explosive powers of bioengineering and artificial intelligence. If Facebook intends to make a real ideological commitment, those who fear its power should not push it back into the neoliberal cocoon with cries of “Big Brother!”. Instead, we should urge other corporations, institutions and governments to contest its vision by making their own ideological commitments.•

Tags:

During Space Race 1.0, it was the Soviets who first successfully launched a satellite and landed a craft on the moon (the astronaut-less Luna 9). Our communist adversaries seemed destined to be the first to put humans on the moon, but that’s not how it turned out.

In retrospect, it seems vital that the U.S., then and perhaps still a democracy, won the contest to take the first steps on solid ground in a sphere other than our own mothership. It provided a boost to us psychologically and technologically, maintaining the momentum we’d won in World War II, but the following decade was the beginning of a long decline for middle-class Americans, which was of course unrelated to space pioneering but likewise was not prevented by it.

Did it really matter politically that we got there first? Hard to say.

· · ·

The human genome might actually be the final frontier, a voyage not out there but in here. The question is does it matter for humanity if the U.S. or China or some other state arrives, in one way or another, first? The invention of CRISPR-Cas9 makes this point more pressing than ever, as an autocratic nation without concern about public backlash is likely to go boldly into the future. Unlike space exploration, which is still remarkably expensive, genetic modification to not only cure disease but also to enhance healthy embryos and bodies is likely to become markedly more affordable in a relatively short span of time. That will allow for easy access to exploring, and potentially exploiting, which might mean the victor in this nouveau race is important. My best guess, however, is that taking the initial giant leap won’t ultimately be as meaningful as walking on the right path thereafter.

From G. Owen Schaefer’s smart Conversation piece “The Future Of Genetic Enhancement Is Not in the West”:

Aside from a preoccupation with being the best in everything, is there reason for Westerners to be concerned by the likelihood that genetic enhancement is apt to emerge out of China?

If the critics are correct that human enhancement is unethical, dangerous or both, then yes, emergence in China would be worrying. From this critical perspective, the Chinese people would be subject to an unethical and dangerous intervention – a cause for international concern. Given China’s human rights record in other areas, it is questionable whether international pressure would have much effect. In turn, enhancement of its population may make China more competitive on the world stage. An unenviable dilemma for opponents of enhancement could emerge – fail to enhance and fall behind, or enhance and suffer the moral and physical consequences.

Conversely, if one believes that human enhancement is actually desirable, this trend should be welcomed. As Western governments hem and haw, delaying development of potentially great advances for humanity, China leads the way forward. Their increased competitiveness, in turn, would pressure Western countries to relax restrictions and thereby allow humanity as a whole to progress – becoming healthier, more productive and generally capable.•

Tags:

Some prominent American captains of industry of the 1930s openly admired Italy’s Fascism, even Hitler’s Nazism, sure that the crushing grip those authoritarian regimes maintained on workers would defeat American liberalism. This popular idea was useful to Charles Lindbergh and others in selling the original “America First” mentality. Of course, those same totalitarian impulses helped push both nations to disaster unparalleled in modern times.

In a Cato Institute essay that wonders whether free societies will be ascendant in the coming decades, Tyler Cowen argues China’s ballooning share of the GDP has served as significant soft power, encouraging other players on the world stage that their system is superior. I’m not convinced. While it stands to reason that any supersized idea in the market will hold some sway, it doesn’t seem like insurgent forces in the U.S. and the U.K.–and certainly not their rank-and-file supporters–aspire to the Chinese model. The factors provoking the political tumult seem to be economic concerns, underlying bigotries exploited by opportunists and the aftereffects of 9/11, the Iraq War, the 2008 financial collapse and the very uneven outcomes of the Arab Spring. 

Of course, there’s no exact science to decide where the blame lies.

An excerpt:

The percentage of global GDP which is held in relatively non-free countries, such as China, has been rising relative to the share of global GDP held in the freer countries. I suspect we are underrating the noxious effects of that development.

Just think back to the 1930s, and some other decades, and consider how many Westerners and Western intellectuals were infatuated with communism and also Stalinism, even at times with fascism, at least before WWII. I would say that if a big idea is around, and supported by some major governments, some number of people will be attracted to that idea, even if we don’t understand the mechanisms here very well. Nonetheless that seems to be an unfortunate sociological truth. Today that big idea isn’t so much communism as it is various forms of authoritarianism. Authoritarians have more presence on the global stage today than has been the case for a while. Furthermore, a lot of the authoritarian states are still in their “rising” forms, rather than their decadent forms, as was the case for Soviet communism in say the 1980s. For instance, while predictions about the future of China are difficult to make, the Chinese Communist Party hardly seems to be on the verge of collapse, and thus its authoritarianism may not be discredited by current events anytime soon. On the global stage, Putin’s Russia has won some recent successes as of late, including in Crimea and also by interfering with democratic elections in the West, apparently with impunity.

To put it simply, global authoritarianism is probably poisoning our political climate more than many people realize.•

Tags:

Prone as we are to expecting what has happened before to come around again, the shock of the new often causes us to frame outliers with narratives, to assign order to what disturbs us. 

While Brexit and Trump’s election would make for great fictional plot twists in novels sold at airports, they’re so deeply upsetting to many among us and such a threat to global order that these events have been suggested by some as evidence that we exist inside a computer simulation written by future humans testing our mettle, a theory spread widely by Elon Musk in recent years, fueled by his Bostrom bender. Even the recent Oscar snafu was peddled as proof of the same.

None of these occurrences proves anything, of course. Statistically, the unusual and unpleasant is bound to happen sometimes. A “cancer cluster” is occasionally just a natural and random spike, not the result of locals tasting tainted drinking water. A sad-sack sports team on a winning streak can likewise be arbitrary noise. Not everything is a conspiracy, not everything evidence.

Political Theory professor Michael Frazer’s Conversation article “Do Brexit and Trump Show That We’re Living in a Computer Simulation?” neatly outlines philosopher Nick Bostrom’s reasons for believing we exist inside a sort of video game controlled by others:

“Either humanity goes extinct before developing the technology to make universe simulations possible. Or advanced civilisations freely choose not to run such simulations. Or we are probably living in a simulation.”

Of course, all of those options rely on us having incredibly distant “descendants,” something those in the simulated-universe camp seem to blithely accept without any proof. Today’s academics may create counterfactuals on historical epochs, but they possess good evidence we have ancestors. Descendants living in a far-flung future building a narrative from us seems more like our own narrative.

Some have argued that superior humans of tomorrow wouldn’t be so unethical as to create a universe of pain and calamity, as if intelligence and morality are always linked. (Just consider a “genius” of today like Peter Thiel as a reference point on that one.) Frazer makes a compelling case, however, that if a future world exists, it’s probably not populated by code-friendly tormentors. As he asserts, great immorality mixing with unimaginable technology would likely be too toxic a combination for these people of tomorrow to have survived.

The opening:

Recent political events have turned the world upside down. The UK voting for Brexit and the US electing Donald Trump as president were unthinkable 18 months ago. In fact, they’re so extraordinary that some have questioned whether they might not be an indication that we’re actually living in some kind of computer simulation or alien experiment.

These unexpected events could be experiments to see how our political systems cope under stress. Or they could be cruel jokes made at our expense by our alien zookeepers. Or maybe they’re just glitches in the system that were never meant to happen. Perhaps the recent mix-up at the Oscars or the unlikely victories of Leicester City in the English Premier League or the New England Patriots in the Super Bowl are similar glitches.

The problem with using these difficult political events as evidence that our world is a simulation is how unethical such a scenario would be. If there really were a robot or alien power that was intelligent enough to control all our lives in this way, there’s a good chance they’d have developed the moral sense not to do so.•


In a Guardian article, Andrew Anthony writes that Yuval Noah Harari is a “historian of the distant past and the near future,” an apt description. The Israeli may be the least likely public figure to come to prominence this decade, a deeply cerebral academic in an age when intellectualism and higher education are often perplexingly scorned.

Of course, in their own moments Carl Sagan and Stephen Jay Gould were also unlikely celebrities. The common bond they all share: an ability to relate vivid narratives, a skill especially fitting in Harari’s case, since he believes a penchant for storytelling and abstract thought is what made our species predominant over other humans and all other creatures.

Anthony collected questions from notable public figures and readers to pose to Harari. A few of the exchanges follow.


Helen Czerski, physicist

We are living through a fantastically rapid globalisation. Will there be one global culture in the future or will we maintain some sort of deliberate artificial tribal groupings?

Yuval Noah Harari:

I’m not sure if it will be deliberate but I do think we’ll probably have just one system, and in this sense we’ll have just one civilisation. In a way this is already the case. All over the world the political system of the state is roughly identical. All over the world capitalism is the dominant economic system, and all over the world the scientific method or worldview is the basic worldview through which people understand nature, disease, biology, physics and so forth. There are no longer any fundamental civilisational differences.

· · ·

Lucy Prebble, playwright

What is the biggest misconception humanity has about itself?

Yuval Noah Harari:

Maybe it is that by gaining more power over the world, over the environment, we will be able to make ourselves happier and more satisfied with life. Looking again from a perspective of thousands of years, we have gained enormous power over the world and it doesn’t seem to make people significantly more satisfied than in the stone age.

· · ·

TheWatchingPlace, posted online:

Is there a real possibility that environmental degradation will halt technological progress?

Yuval Noah Harari:

I think it will be just the opposite – that, as the ecological crisis intensifies, the pressure for technological development will increase, not decrease. I think that the ecological crisis in the 21st century will be analogous to the two world wars in the 20th century in serving to accelerate technological progress.

As long as things are OK, people would be very careful in developing or experimenting in genetic engineering on humans or giving artificial intelligence control of weapon systems. But if you have a serious crisis, caused for example by ecological degradation, then people will be tempted to try all kinds of high-risk, high-gain technologies in the hope of solving the problem, and you’ll have something like the Manhattan Project in the second world war.

· · ·

Andrew Anthony:

You live in a part of the world that has been shaped by religious fictions. Which do you think will happen first – that Homo sapiens leave behind religious fiction or the Israel-Palestine conflict will be resolved?

Yuval Noah Harari:

As things look at present, it seems that Homo sapiens will disappear before the Israeli political conflict will be resolved. I think that Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different. The timescale for that kind of change is maybe a century. And it’s quite likely that the Palestinian-Israeli conflict will not be resolved by that time. But it will definitely be influenced by it.•


Nature is a necessary evil, and humans are a mixed blessing. That’s my credo. Hopeful, huh?

Five years ago, when this blog was something other than what it is today (though I don’t really know what it is now, either), I used to run an occasional post called “5 Things About Us Future People Won’t Believe.” In these short pieces, carnivorism, internal gestation, factory work, invasive surgery and prisons were my suggestions for elements of today’s society that would brand us as “backwards” by tomorrow’s standards. I didn’t mention anything obvious like warfare because the “enlightened” of the future will still participate in such tribalism, even if the nature of the battle changes markedly. 

In a similar vein, Matt Chessen has published “The Future Called: We’re Disgusting And Barbaric,” a Backchannel piece that hits on some of the same predictions I made but also takes up some very interesting topics I didn’t touch at all. One item:

Tolerating homes and bodies infested with critters

Right now, there are hundreds of millions of insects living on your body and in your home. Tiny dust mites inhabit your mattress, your pillow, your carpeting, and your body, regardless of how clean everything is. Microscopic demodex mites live in the follicles of your eyelashes and prowl your face at night. And this doesn’t even consider the trillions of bacteria and parasites that live inside us. Our bodies are like planets, full of life that is not us.

Future folk will be thoroughly disgusted. They will have nanotechnology antibodies — tiny machines that patrol our homes and skin, hoovering up dust mite food (our skin flakes) and exterminating the little suckers. They can’t completely eliminate all the insects and bacteria — human beings have developed a symbiosis with them; we need bacteria to do things like digest food—but the nanobots will police this flora, keeping it within healthy bounds and eliminating any micro-infestations or infections that grow out of control.

And forget about infestations by critters like cockroaches. Nanobots will exterminate larger household pests en masse. The real terminators of the future won’t wreak havoc on humanity: They’ll massacre our unwanted insect houseguests.•


Elon Musk has made the unilateral decision that Mars will be ruled by direct democracy, and considering how dismal his political record has been over the last five months, with his bewildering bromance with the orange supremacist, it might be best if he blasted off from Earth sooner rather than later.

Another billionaire of dubious governmental wisdom also believed in direct democracy. That was computer-processing magnate Ross Perot who, in 1969, had a McLuhan-ish dream: an electronic town hall in which interactive television and computer punch cards would allow the masses, rather than elected officials, to decide key American policies. In 1992, he held fast to this goal–one that was perhaps more democratic than any society could survive–when he bankrolled his own populist third-party Presidential campaign. 

The opening of “Perot’s Vision: Consensus By Computer,” a New York Times article from that year by the late Michael Kelly:

WASHINGTON, June 5— Twenty-three years ago, Ross Perot had a simple idea.

The nation was splintered by the great and painful issues of the day. There had been years of disorder and disunity, and lately, terrible riots in Los Angeles and other cities. People talked of an America in crisis. The Government seemed to many to be ineffectual and out of touch.

What this country needed, Mr. Perot thought, was a good, long talk with itself.

The information age was dawning, and Mr. Perot, then building what would become one of the world’s largest computer-processing companies, saw in its glow the answer to everything.

One Hour, One Issue

Every week, Mr. Perot proposed, the television networks would broadcast an hourlong program in which one issue would be discussed. Viewers would record their opinions by marking computer cards, which they would mail to regional tabulating centers. Consensus would be reached, and the leaders would know what the people wanted.

Mr. Perot gave his idea a name that draped the old dream of pure democracy with the glossy promise of technology: “the electronic town hall.”

Today, Mr. Perot’s idea, essentially unchanged from 1969, is at the core of his ‘We the People’ drive for the Presidency, and of his theory for governing.

It forms the basis of Mr. Perot’s pitch, in which he presents himself, not as a politician running for President, but as a patriot willing to be drafted ‘as a servant of the people’ to take on the ‘dirty, thankless’ job of rescuing America from “the Establishment,” and running it.

In set speeches and interviews, the Texas billionaire describes the electronic town hall as the principal tool of governance in a Perot Presidency, and he makes grand claims: “If we ever put the people back in charge of this country and make sure they understand the issues, you’ll see the White House and Congress, like a ballet, pirouetting around the stage getting it done in unison.”

Although Mr. Perot has repeatedly said he would not try to use the electronic town hall as a direct decision-making body, he has on other occasions suggested placing a startling degree of power in the hands of the television audience.

He has proposed at least twice — in an interview with David Frost broadcast on April 24 and in a March 18 speech at the National Press Club — passing a constitutional amendment that would strip Congress of its authority to levy taxes, and place that power directly in the hands of the people, in a debate and referendum orchestrated through an electronic town hall.•

In addition to the rampant myopia that would likely blight such a system, most Americans, with jobs and families and TV shows to binge-watch, don’t take the time to fully appreciate the nuances of complex policy. The stunning truth is that even in a representative democracy in this information-rich age, there are enough uninformed voters lacking critical-thinking abilities to install an obvious con artist in the Oval Office to pick their pockets. 

In a Financial Times column, Tim Harford argues in favor of the professional if imperfect class of technocrats, who get the job done, more or less. An excerpt:

For all its merits, democracy has always had a weakness: on any detailed piece of policy, the typical voter — I include myself here — does not understand what is really at stake and does not care to find out. This is not a slight on voters. It is a recognition of our common sense. Why should we devote hours to studying every policy question that arises? We know the vote of any particular citizen is never decisive. It would be a deluded voter indeed who stayed up all night revising for an election, believing that her vote would be the one to make all the difference.

So voters are not paying close attention to the details. That might seem a fatal flaw in democracy but democracy has coped. The workaround for voter ignorance is to delegate the details to expert technocrats. Technocracy is unfashionable these days; that is a shame.

One advantage of a technocracy is that it constrains politicians who are tempted by narrow or fleeting advantages. Multilateral bodies such as the World Trade Organization and the European Commission have been able to head off popular yet self-harming behaviour, such as handing state protection to whichever business has the best lobbyists.

Meanwhile independent central banks have been the grown-ups of economic policymaking. Once the immediate aftermath of the financial crisis had passed, elected politicians sat on their hands. Technocratic central bankers were — to borrow a phrase from Mohamed El-Erian, economic adviser — “the only game in town” in sustaining a recovery.

A second advantage is that technocrats can offer informed, impartial analysis. Consider the Congressional Budget Office in the US, the Office for Budget Responsibility in the UK, and Nice, the National Institute for Health and Care Excellence.

Technocrats make mistakes, it’s true — many mistakes. Brain surgeons also make mistakes. That does not mean I’d be better off handing the scalpel to Boris Johnson.•


An unqualified sociopath was elected President of the United States with the aid of the FBI, fake news, Russian spies, white supremacists and an accused rapist who’s holed up inside the Ecuadorian embassy in London to avoid arrest. Writing that sentence a million times can’t make it any less chilling.

WikiLeaks’ modus operandi over the last couple of years probably wouldn’t be markedly different if it were in the hands of Steve Bannon rather than Julian Assange, so it’s not surprising the organization leaked a trove of (apparently overhyped) documents about CIA surveillance just as Trump was being lambasted from both sides of the aisle for baselessly accusing his predecessor of “wiretapping.” The timing is familiar if you recall that WikiLeaks began releasing Clinton campaign emails directly after the surfacing of a video that recorded Trump’s boasts of sexual assault. With all this recent history, is it any surprise Assange mockingly described himself as a “deplorable” when chiding Twitter for refusing to verify his account?

The decentralization of media, with powerful tools in potentially every hand, has changed the game, no doubt. We’re now in a permanent Spy vs. Spy cartoon, though one that isn’t funny, with feds and hackers forever at loggerheads. Which side can do the most damage? Voters have some recourse in regards to government snooping but not so with private-sector enterprises. In the rush to privatize and outsource long-established areas of critical services, from prisons to the military to intelligence work, we’ve also dispersed dangers.

From Sue Halpern’s New York Review of Books piece “The Assange Distraction”:

In his press conference, Assange observed that no cyber weapons are safe from hacking because they live on the Internet, and once deployed are themselves at risk of being stolen. When that happens, he said, “there’s a very easy cover for any gray market operator, contractor, rogue intelligence agent to take that material and start a company with it. Start a consulting company, a hacker for hire company.” Indeed, the conversation we almost never have when we’re talking about cyber-security and hacking is the one where we acknowledge just how privatized intelligence gathering has become, and what the consequences of this have been. According to the reporters Dana Priest, Marjorie Censer and Robert O’Harrow, Jr., at least 70 percent of the intelligence community’s “secret” budget now goes to private contractors. And, they write, “Never before have so many US intelligence workers been hired so quickly, or been given access to secret government information through networked computers. …But in the rush to fill jobs, the government has relied on faulty procedures to vet intelligence workers, documents and interviews show.” Much of this expansion occurred in the aftermath of the September 11 attacks, when the American government sought to dramatically expand its intelligence-gathering apparatus.

Edward Snowden was a government contractor; he had a high security clearance while working for both Dell and for Booz, Allen, Hamilton. Vault 7’s source, from what one can discern from Assange’s remarks, was most likely a contractor, too. The real connection between Snowden’s NSA revelations and an anonymous leaker handing off CIA malware to WikiLeaks, however, is this: both remind us, in different ways, that the expansion of the surveillance state has made us fundamentally less secure, not more.

Julian Assange, if he is to be believed, now possesses the entire cyber-weaponry of the CIA. He claims that they are safe with him while explaining that nothing is safe on the Internet. He says that the malware he’s published so far is only part of the CIA arsenal, and that he’ll reveal more at a later date. If that is not a veiled threat, then this is: Assange has not destroyed the source codes that came to him with Vault 7, the algorithms that run these programs, and he hasn’t categorically ruled out releasing them into the wild, where they would be available to any cyber-criminal, state actor, or random hacker. This means that Julian Assange is not just a fugitive, he is a fugitive who is armed and dangerous.•


Trump poses many existential threats, but let’s focus on two in particular that are linked: His autocratic impulses are a threat to liberal governance and America’s ethos as an immigrant nation, and his cultivation of a culture of complaint is a bankrupt brand of populism, a nauseating nostalgia for yesterday that puts us at risk today and tomorrow.

The upshot is a federal government contemptuous of the Constitution, one that’s willfully trying to block the steady flow of genius into the country and one that’s more enthusiastic about steel and coal than semiconductors. The Trump promise to America is that we can live like the 1950s and win the 21st century, that we don’t have to compete with the whole world because we can build a wall to keep out the future. He’s a new manner of aspiring autocrat, concerned not with ideology but with its destruction. In Holly Case’s Aeon essay about contemporary strongmen who are divorced from governing principles beyond promising to make difficult challenges vanish, she concluded this way:

The new authoritarian does not pretend to make you better, only to make you feel better about not wanting to change. In this respect, he has tapped a gusher in the Zeitgeist that reaches well beyond the domain of state socialism, an attitude that the writer Marilynne Robinson disparages as ‘nonfailure’, and that the writer Walter Mosley elevates to a virtue: ‘We need to raise our imperfections to a political platform that says: “My flaws need attention too.” This is what I call the “untopia”.’ Welcome to the not-so-brave new world.

In 2017, China is a notable exception to this definition, an autocracy aiming to win the race in supercomputers, semiconductors and solar, which is particularly perilous when paired with America’s retreat. We picked an awful time to stop looking forward, and the ramifications will be felt long after Trump is gone.

From Michael Schuman in Bloomberg View:

China is marshaling massive resources to march into high-tech industries, from robotics to medical devices. In the case of semiconductors alone, the state has amassed $150 billion to build a homegrown industry. In a report in March, the European Union Chamber of Commerce in China pressed the point that the Chinese government is employing a wide range of tools to pursue these ambitions, from lavishing subsidies on favored sectors to squeezing technology out of foreign firms.

The only way for the U.S. to compete with those efforts is to “run faster.” Yet Trump’s ideas to boost competitiveness mainly amount to cutting taxes and regulation. Although reduced taxes might leave companies with more money to spend on research and development, that’s not enough. The U.S. needs to do much more to help businesses achieve bigger and better breakthroughs.

Trump is doing the opposite. One reason U.S. companies are so innovative is that they attract talented workers from everywhere else. But Trump’s recent suspension of fast-track H-1B visas could curtail this infusion of scientists and researchers. If his intention is to ensure jobs go to Americans first, he need not bother. The unemployment rate for Americans with a bachelor’s degree or higher — the skilled workers that H-1B holders would compete with — is a mere 2.5 percent. 

This policy isn’t just a threat to Silicon Valley, but across industries. Michael McGarry, the chief executive officer of PPG Industries Inc., worries about the effect visa restrictions would have on his paint-making business. “We create a lot of innovation because of the diversity that we have,” he recently told CNBC. “We think people with PhDs that are educated here should stay here and work for us and not work for the competition.”

China will likely try to capitalize on this mistake. Robin Li, CEO of the internet giant Baidu Inc., recently advocated that China ease its visa requirements to attract talented workers to help develop new technologies for Chinese industry, just the opposite of Trump’s approach.

Trump’s budget proposals are similarly a setback. He wants to boost defense spending by slashing funding for just about everything else, notably education. By one estimate, some $20 billion would have to get cut from the departments of education, labor, and health and human services to accommodate his plan. If Trump wants to contend with Chinese power, he’d be better off reversing those priorities — to create more graduates and fewer guns. He could offer proposals to make higher education more affordable for the poor, for instance, or to bolster vocational training. So far, there’s little evidence he’s making such spending a priority.

China, by contrast, is expanding access to education on a huge scale.•

