Science/Tech

When it comes to robotics, China is developing a yawning credibility gap. More than most nations, it desperately needs machines of varying levels of intelligence to deal with work that can’t be fulfilled by a rapidly graying population. “Robots will show up in China just in time,” predicts Daniel Kahneman. The nation certainly hopes so.

Hoping and executing are two different things, however. In 2011, Foxconn promised a million robots would be installed in its factories within three years. That did not transpire. More recently, the Apple-enabler was reported as saying it was on the verge of automating 60,000 jobs. According to an article by Adam Minter of Bloomberg, that appears to have been an empty promise as well. You could dismiss the hype as the irresponsibility of one giant company, except that the writer reports that bureaucrats were intimately involved with the deception.

From Minter:

The story first turned up in mid-May: Foxconn, Apple’s favorite manufacturer, was replacing 60,000 of its workers with robots. Everyone from the BBC to Apple fan sites soon reported the ground-shifting news. There was just one problem: It was mostly false.

Last weekend, a Foxconn spokesperson told Chinese media that the company hadn’t laid off anyone, much less replaced them with automation. That part of the story came from overly enthusiastic bureaucrats in Kunshan, a manufacturing town keen to promote itself as a hub for innovation.

The incident seemed like an apt metaphor. Across China, officials are hoping that robots are the future. Thirty-six cities claimed last year that robotics was critical to their development. More than 40 government-funded robot industrial parks have recently opened or are in the works. Shenzhen, the southern Chinese tech hub, is now home to more than 3,000 robotics companies — up from 200 just two years ago.

In theory, this should be great news for a country hoping to encourage innovation. In reality, it’s a sign that China has subsidized yet another investment bubble with capital that would’ve been better invested elsewhere.•

In 1921, before there were Talkies, Arthur Blanchard invented a machine to create plots for big-screen pictures. Thirty years later, B-movie Hollywood director Edward Ludwig believed the time would soon come when computers would do the screenwriting. Is such a thing possible now? Not exactly, though there’s a new AI that probably could replace Michael Bay and his incoherent, big-budget Hal-Needham-in-space crap. Bay’s someone who needs to be technologically unemployed.

In an Ars Technica article, Annalee Newitz writes about “Sunspring,” a short sci-fi film about a futuristic love triangle that was wholly written by a neural network named Benjamin, the brainchild of NYU AI researcher Ross Goodwin. The resulting work is odd and spirited, an offbeat and stilted regurgitation of current sci-fi tropes but with something of an eccentric auteur’s touch and the Dada poet’s pen. In its own way, it’s compelling.

Newitz writes of her reporting on the film: “As I was talking to [director Oscar] Sharp and Goodwin, I noticed that all of us slipped between referring to Benjamin as ‘he’ and ‘it.'” (You can watch the movie if you go to the article.) An excerpt:

Knowing that an AI wrote Sunspring makes the movie more fun to watch, especially once you know how the cast and crew put it together. Director Oscar Sharp made the movie for Sci-Fi London, an annual film festival that includes the 48-Hour Film Challenge, where contestants are given a set of prompts (mostly props and lines) that have to appear in a movie they make over the next two days. Sharp’s longtime collaborator, Ross Goodwin, is an AI researcher at New York University, and he supplied the movie’s AI writer, initially called Jetson. As the cast gathered around a tiny printer, Benjamin spat out the screenplay, complete with almost impossible stage directions like “He is standing in the stars and sitting on the floor.” Then Sharp randomly assigned roles to the actors in the room. “As soon as we had a read-through, everyone around the table was laughing their heads off with delight,” Sharp told Ars. The actors interpreted the lines as they read, adding tone and body language, and the results are what you see in the movie. Somehow, a slightly garbled series of sentences became a tale of romance and murder, set in a dark future world. It even has its own musical interlude (performed by Andrew and Tiger), with a pop song Benjamin composed after learning from a corpus of 30,000 other pop songs.

Building Benjamin

When Sharp was in film school at NYU, he made a discovery that changed the course of his career. “I liked hanging out with technologists in NYU’s Interactive Telecommunications Program more than other filmmakers,” he confessed. That’s how he met Goodwin, a former ghost writer who just earned a master’s degree from NYU while studying natural language processing and neural networks. Speaking by phone from New York, the two recalled how they were both obsessed with figuring out how to make machines generate original pieces of writing. For years, Sharp wanted to create a movie out of random parts, even going so far as to write a play out of snippets of text chosen by dice rolls. Goodwin, who honed his machine-assisted authoring skills while ghost writing letters for corporate clients, had been using Markov chains to write poetry. As they got to know each other at NYU, Sharp told Goodwin about his dream of collaborating with an AI on a screenplay. Over a year and many algorithms later, Goodwin built an AI that could.•
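The Markov-chain trick Goodwin honed while ghost writing is simple enough to sketch in a few lines of Python. The toy below is only a rough illustration, and its two-sentence corpus is a hypothetical stand-in (Benjamin itself is a neural network, a far more elaborate affair): it records which words tend to follow which, then stitches together new text by walking those statistics.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, order=2):
    """Start from a random key and keep appending a randomly chosen successor."""
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Hypothetical miniature corpus; Goodwin's poetry experiments drew on far larger ones.
corpus = ("He is standing in the stars and sitting on the floor. "
          "He is standing in the dark and looking at the stars.")
print(generate(build_chain(corpus)))
```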

When I posted a passage from science writer Fred Hapgood’s overly ambitious 1990 Omni piece which had Venter-esque visions of micro-organisms doing our bidding, it reminded me of another of his articles. In 2003, he wrote in Wired that he believed automation coming to underground drilling technology would soon make entire subterranean cities, even a supersonic global subway, possible. 

Well, a lot of things are possible, but that doesn’t mean they’re politically attractive, financially feasible or even desired. But it’s still a fun read. An excerpt:

Among the first wave of tunneling projects under way are subway extensions, highway re-siting projects, and petrochemical repositories. These will pave the way to further standardization and automation needed for transnational, Chunnel-type digs. The East – which has never been shy about big engineering – will likely plow down first, linking Japan and Korea, China and Japan, and Taiwan and China. The West might follow by tunneling under the Gibraltar and Bering straits.

The last stop on this train is the ultimate TBM megaproject: a supersonic world subway. Maglev trains running through depressurized tunnels are the logical successor to airplanes, at least between large cities. Magnetic levitation would eliminate rolling resistance, and the vacuum does the same to air resistance. The trains could “fly” down the tracks at many times the speed of the Concorde – without creating a sonic boom. In a couple of decades, we may see a world where major international cities are within a few hours’ commute of each other.

By 2005, some under-urban highway projects will start to include parking lots. Where there is parking, malls will spring up. By 2008, developers might offer these retailers subterranean warehouse space, then offices, and, finally, full-fledged industrial parks. By 2013, we could see some hotels, probably marketed to international commuters and located just below the financial centers of Tokyo, London, and New York.•

In 1969, data-processing magnate Ross Perot had a McLuhan-ish dream: an electronic town hall in which interactive television and computer punch cards would allow the masses, rather than elected officials, to decide key American policies. In 1992, he held fast to this goal–one that was perhaps more democratic than any society could survive–when he bankrolled his own populist third-party Presidential campaign.

Today Elon Musk wants to blast this vision of direct democracy to Mars, writes Loren Grush of the Verge, with Musk asserting that representative government is too prone to corruption. Whether or not Musk realizes his dream of dying on Mars–but not on impact–his grand ambitions speak to the insanity of wealth inequality in the second Gilded Age. The SpaceX technologist seems one of the more well-intentioned thinkers among Silicon Valley’s freshly minted billionaires, but think how preposterous it is that any individual is declaring what type of government a planet we’ve never visited will most likely have.

Walter Isaacson famously compared Musk to Benjamin Franklin, but the latter flew kites any child could purchase. Musk’s toys are far more expensive and in the hands of the few. That’s not really good for a democracy, direct or otherwise.

An excerpt:

Elon Musk has been pretty focused on setting up a colony on Mars, so naturally he has a few ideas as to the type of government the Red Planet should have. Speaking at ReCode’s Code Conference on Wednesday night, the SpaceX CEO said he envisions a direct democracy for Martian colonies, as a way to avoid corruption.

“Most likely the form of government on Mars would be a direct democracy, not representative,” said Musk. “So it would be people voting directly on issues. And I think that’s probably better, because the potential for corruption is substantially diminished in a direct versus a representative democracy.”

Musk also suggested that on Mars it should be harder to create laws than it is to get rid of ones that aren’t working well. “I think I would recommend some adjustment for the inertia of laws would be wise. It should probably be easier to remove a law than create one,” said Musk. “I think that’s probably good, because laws have infinite life unless they’re taken away.”•

Mussolini built his own Hollywood in the 1930s to spread his Fascist message. Today he would just tweet.

Artifice used to be more real in a sense when the movie industry was in the business of “nation-building,” when sets were an elaborate, eye-popping selling point and simulacra were not sacred but esteemed, since there was not yet the technical acumen to create any sort of profound special effects. “A cast of thousands” was the un-humble brag used to peddle Cecil B. DeMille’s 1956 remake of his own epic, The Ten Commandments, and there was another “cast” of a similar size behind the scenes making the Nile run and bushes burn.

Then the collapse of the studio system hit in the 1960s, and moguls lost their religion, mostly downsizing scale and labor. For a while, relatively cheap, personal productions by Hoppers and Fondas and Coppolas and Scorseses ruled the day. Eventually, the studios were ready to dream big again, and in 1975, the robot-shark technology of Jaws captured the summer in its animatronic maw. Two years later, Star Wars relied heavily on Industrial Light & Magic to realize its vision. It was still a long way to the technology behind today’s tentpoles, but the rise of the machines and the diminishment of human craft began in Hollywood–as it did in a big-picture way all across America–decades ago. The Herculean returned, but Hercules was now a bit player.

From “True Fakes on Location,” Tom Carson’s excellent Baffler article about auteurs and architecture:

2016 marks Intolerance’s centenary, and that shouldn’t be a milestone only to high-minded fans of cinema’s artistic dawn. Because [D.W.] Griffith predicted everything in movies, it’s also a milestone for any garden-variety filmgoer who’s ever been wowed by coarse and costly Hollywood spectacle. I suspect only prigs are completely immune to the delights of whole foreign environments—whether antique, exotically international, familiar but exaggerated, or just plain fantastical—that have been erected, populated, and photographed for no better reason than to knock our socks off. For my money, Intolerance is where fake movie architecture began its complicated dance with the real thing, affecting how audiences perceive the past, reconfigure their present, and anticipate the future.

The ambition of Intolerance did have precursors. Griffith himself had built a biblical town in the San Fernando Valley for Judith of Bethulia two years earlier. The imported Italian period epics Quo Vadis? (1913) and Cabiria (1914) had stimulated both his ambition and his envy. But in scale and pull-out-the-stops grandeur, nothing like Belshazzar’s Court had ever been seen before—except by, well, Belshazzar and some two hundred thousand other lucky but very dead Babylonians in the sixth century BCE. Even Griffith’s own 1915 epic The Birth of a Nation hadn’t required particularly extravagant exterior sets, however unprecedented in scope (and vicious in sentiment—Intolerance was conceived in part to rebut its critics) his love song to the Ku Klux Klan had otherwise been.

One reason Intolerance’s Babylon still looks stunning is that the age of computer-generated imagery has all but ruined our capacity to experience Hollywood’s imagineering as something nonetheless rooted in the material world.•

There’s no easy answer if this time really is different from the Industrial Revolution and the tens of millions of jobs automated into oblivion aren’t replaced by equal or better positions. The solution most often offered is an education system that enables adult Americans to transition into higher-skilled positions and instills in children greater critical thinking, allowing for a more flexible mindset as industries rapidly rise and fall. That would be wonderful, but I think it ignores reality to some extent. If the new normal is abnormal by the standards we’ve come to expect, then, regardless of schooling, some–perhaps many, too many–will be left behind. What becomes of them? What becomes of us?

In a New York Times piece, Eduardo Porter, who doesn’t support Universal Basic Income, tries to think through this potentially scary scenario in which scarcity isn’t a problem but distribution is a big one. The opening:

They replaced horses, didn’t they? That’s how the late, great economist Wassily Leontief responded 35 years ago to those who argued technology would never really replace people’s work.

Horses hung around in the labor force for quite some time after they were first challenged by “modern” communications technologies like the telegraph and the railroad, hauling stuff and people around farms and cities. But when the internal combustion engine came along, horses — as a critical component of the world economy — were history.

Cutting horses’ oat rations might have delayed their replacement by tractors, but it wouldn’t have stopped it. All that was left to do, for those who cared for 20 million newly unemployed horses, was to put them out to pasture.

“Had horses had an opportunity to vote and join the Republican or Democratic Party,” Leontief wrote, they might have been able to get “the necessary appropriation from Congress.”

Most economists still reject Professor Leontief’s analogy, but the conventional economic consensus is starting to fray. The productivity figures may not reflect it yet but new technology does seem more fundamentally disruptive than technologies of the past. Robots are learning on their own. Self-driving cars seem just a few regulations away from our city streets.

As the idea sinks in that humans as workhorses might also be on the way out, what happens if the job market stops doing the job of providing a living wage for hundreds of millions of people? How will the economy spread money around, so people can afford to pay the rent?•

People wearing hats used to sit in subway cars reading newspapers. First the hats disappeared, then the papers.

Anyone who’s lived on both sides of peak-print news knows the industry has shrunk precipitously as the Internet enjoyed its meteoric rise. If it were as simple as trading one medium for another, that would be no problem. But in the last few decades of good health for newspapers, the industry was propped up on print ads and classifieds and such, the cover price no longer able to float the enterprise. Once those crutches were yanked away by new tools, the business wasn’t really a going concern anymore. Online journalism hasn’t come close to filling the void, so there’s more information than ever, but the day is largely ruled by free-floating headlines, “citizen journalists,” soundbites and 140 characters.

From Jessica Conditt at Engadget:

Anyone reading this, an article that exists only on the internet, is aware of the dramatic shift that’s taken place in the media world since the 1990s. As internet penetration has grown, newspaper sales have dipped dramatically, as have traditional newspaper jobs. New research from the US Bureau of Labor Statistics quantifies these losses — and they’re hefty.

Between 1990 and 2016, the newspaper publishing industry shrunk by nearly 60 percent, from roughly 458,000 jobs to 183,000 jobs, the bureau found. In this same time, the number of internet publishing and broadcasting jobs rose from 30,000 to 198,000. In just under three decades, the newspaper industry has transformed from a media juggernaut into a secondary form of communication, and there are no signs this trend will reverse any time soon.•

Science writer Fred Hapgood dreamed big when Omni asked him, in 1990, to pen “No Assembly Required,” an article that predicted how insect-sized microorganisms would be serving our needs by 2029. None of his Venter-esque visions of designer bugs seem even remotely possible 13 years from now. They’re not theoretically impossible, but they’re far likelier to arrive the day after tomorrow than tomorrow. Three excerpts follow, about futuristic dental care, housecleaning and home security.


Dental Microsnails That Brush Your Teeth for You While You Sleep

During the average lifetime a human spends a total of 40 days of his life brushing his teeth. (Sixty if he flosses.) Recent breakthroughs in microtractor technology, however, have now made it possible for us to offer our customers the dental microsnail.

Just rub onto teeth before sleeping: During the night each microsnail, glued to a pair of traction balls, systematically explores the entire surface of the tooth on which it lands. As it moves, powered by the mouth’s own natural electrochemistry, it secretes minute quantities of bioengineered enzymes that detect and epoxy microcracks in enamel, remove plaque, and shred organic material caught between teeth. You awake to find your smile polished to a high gloss. Microsnails are small enough to be barely detectable by the tongue and harmless if swallowed. They vanish down the gut after they’ve finished their job.

For those interested in the latest in decorative dentistry, Microbots also makes an “artist microsnail” that colors your incisors in the pattern of your choice, from a simple checkerboard to selected graphics based on works of Braque, Klee, Mondrian, and De Kooning. Images fade after 24 hours.


Tiny Quicker Picker-Uppers

Let your fingers do the housecleaning. Order Micromaids from our catalog and put a thousand domestic servants in the palm of your hand.

Arrange “anthills” (small containers, each the size of a bagel) inconspicuously under chairs and behind furniture (autocamouflaging is standard with this year’s models). When the colony has detected no footfalls in that room for an hour, thousands of Micromaids, legged vehicles the size and shape of a clove, spread out through the room. They locate loose grains of sand, grit, lint, skin, hair, and other debris, then carry the refuse back to the anthill. If the hill detects vibrations, it releases a high-pitched acoustic signal, summoning the Micromaids to return.

These home bases serve as tiny waste disposal plants. Each contains specialized microbots that process the trash. Some secrete enzymes and bacteria to break down and sanitize organic matter. Others use tiny pincers to crush and cut up larger items. The anthill then seals the garbage in a polymer bag, which it custom-produces to surround the excreted refuse. The Micromaids carry this package to a preprogrammed location, such as a chute leading to a trash compactor in the basement of your house.


RoboHornets: The Ultimate Weapon for Home Security

Let’s face it — as wonderful as the  twenty-first century can be, home security is a growing challenge for all of us. Here’s how Microbots can help you deal with it: Whenever the nest detects a possible intruder entering a zone you have designated as “private,” a mosquito-size probe takes off and lands quietly on the person’s clothing and locates a flake of skin caught in the garment. An onboard DNA sampler then radios the raw biological data back to the nest, where a DNA fingerprinting lab performs an analysis and checks the results against a list of those individuals cleared for access to the area. If the person is unauthorized, the mosquito probe triggers a loud and explicit warning message from a rooftop speaker while summoning a cloud of other RoboHornets, each carrying a vicious-looking one-inch-long crimson-colored stinger. Any intruder continuing to ignore the warning message will receive a lesson in the sanctity of private property, the memory of which will linger for several months.•

Always great reading Michael Graziano as he wrestles with the nature and mechanics of consciousness. In an Atlantic piece, the Princeton psychologist traces the emergence of consciousness back hundreds of millions of years to a process known as “selective signal enhancement,” a primitive system not even requiring a central brain, and then marches forward explaining its development from that point.

Graziano asserts that the brain is more a prioritizing machine that edits out what’s unnecessary rather than one that needs to be in possession of all information at every moment. “The brain has no need to know those details,” he writes. “The attention schema is therefore strategically vague.”

The opening:

Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.•
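Graziano’s “selective signal enhancement” is essentially a winner-take-all competition, and the flavor of it fits in a few lines. The sketch below is a deliberately crude, hypothetical model, not anything from the AST literature: each signal is suppressed in proportion to its rivals’ summed activity, and after a few rounds only the strongest is left standing.

```python
import numpy as np

def winner_take_all(signals, inhibition=0.2, steps=50):
    """Crude mutual-inhibition loop: each signal is suppressed in proportion to the
    summed activity of its rivals, so weaker signals are driven to zero."""
    x = np.array(signals, dtype=float)
    for _ in range(steps):
        rivals = x.sum() - x              # total activity of every other signal
        x = np.maximum(x - inhibition * rivals, 0.0)
    return x

# Hypothetical sensory inputs competing for the animal's limited processing capacity.
inputs = [0.9, 0.85, 0.3, 0.1]
print(winner_take_all(inputs))            # only the strongest input survives the competition
```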

If someone had told you 20 years ago that TV would soon be dominated by Real Housewives, Biggest Losers and Kardashians, that network sitcoms and police procedurals would become secondary not only to great cable programs but also to cheap reality shows, you might have thought they were nuts. Who would trade Jennifer Aniston for Kris Jenner? But the center did not hold, the barbarians stormed the gates, and now the sideshow of Bachelors and Bachelorettes has moved to the front of the aisle. In a decentralized medium, the financials no longer made sense for expensive offerings, so cheap content became king.

In an excellent Wall Street Journal article, Jason Gay wonders if multibillion-dollar professional sports could be destabilized by American Ninja Warrior and the like. You know, junk sports that serve up to post-millennials’ minds (and smartphones) thrilling pseudo-athletics in which spectacle is more important than winning and losing. I have no doubt that in the coming decades video games and virtual reality and gadgets will change not only the way we watch competitions but the competitions themselves, but could the alterations be surprisingly deep?

It all depends on technological shifts, something beyond the control of sports. Baseball has been richly rewarded in recent years with outsize regional cable contracts because Fox Sports wanted to challenge ESPN, and MLB could offer a huge slate of live, family-friendly content. But MLB and the other major sports leagues are only a couple of unanticipated new tech tools away from being back on their heels. Money and history are on the side of MLB, the NBA, the NFL and the NHL, but that’s been the case with many supposedly unsinkable entities. I would bet against some American Gladiators knockoff KO-ing big-time team sports, but I also thought Star Search a silly afterthought just a short time before American Idol ruled the airwaves.

Gay’s opening:

Last summer, on a family vacation in a house with 10 very loud children, I attempted to watch a baseball game on the only available television set. It did not go well. My nieces and nephews acted like I was forcing them to watch a process hearing in the state legislature. They groaned and booed. They rolled their eyes. They dropped to the floor and pretended to sleep.

Frantic to please, I turned the channel, and happened upon a reality show I’d never seen before: a wacky obstacle-course event called “American Ninja Warrior.” Situated on an outdoor stage bathed in red, white and blue lights, it featured sinewy men and women of all ages, jumping and scurrying from platforms to ropes to monkey bars, plunging into water traps when they missed.

The room erupted. It was as if Taylor Swift and Justin Bieber had just shown up with free pizza and iPhones. It turned out my loud, young in-laws all loved “American Ninja Warrior.” They crammed around the TV, rapt.

The scene made me feel like an out-of-touch geezer—how had I missed this phenomenon?—and also made me think about sports, and their future.•

Despite the fears of really brilliant people like Stephen Hawking, superintelligent machines aren’t likely to enslave or eradicate humans anytime soon. It’s not impossible that eventually brains can be put into machines (and vice versa), but none of us will be alive to see that day. Hopefully our descendants will make good decisions.

The more pressing problem is that Weak AI has a good chance over the next few decades to eliminate millions of solid jobs, and then what do all the truckers, cabbies, delivery drivers, front-desk people, bellhops, fast-food workers and others do? It’s been said that we should retrain them for positions that are more analytical and cerebral, but that’s easier said than done. Some will be left behind by the sweep of history. How many?

In Brian Fung’s smart Washington Post piece “Everything You Think You Know About AI Is Wrong,” the writer tries to identify the challenges ahead and the course we can take to meet them. An excerpt:

So who is going to lose their job?

Partly because we’re better at designing these limited AI systems, some experts predict that high-skilled workers will adapt to the technology as a tool, while lower-skill jobs are the ones that will see the most disruption. When the Obama administration studied the issue, it found that as many as 80 percent of jobs currently paying less than $20 an hour might someday be replaced by AI.

“That’s over a long period of time, and it’s not like you’re going to lose 80 percent of jobs and not reemploy those people,” Jason Furman, a senior economic advisor to President Obama, said in an interview. “But [even] if you lose 80 percent of jobs and reemploy 90 percent or 95 percent of those people, it’s still a big jump up in the structural number not working. So I think it poses a real distributional challenge.”

Policymakers will need to come up with inventive ways to meet this looming jobs problem. But the same estimates also hint at a way out: Higher-earning jobs stand to be less negatively affected by automation. Compared to the low-wage jobs, roughly a third of those who earn between $20 and $40 an hour are expected to fall out of work due to robots, according to Furman. And only a sliver of high-paying jobs, about 5 percent, may be subject to robot replacement.

Those numbers might look very different if researchers were truly on the brink of creating sentient AI that can really do all the same things a human can. In this hypothetical scenario, even high-skilled workers might have more reason to fear. But the fact that so much of our AI research right now appears to favor narrow forms of artificial intelligence at least suggests we could be doing a lot worse.•
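Furman’s arithmetic is worth making concrete. The toy calculation below assumes a made-up pool of 100 million low-wage workers (the real figures differ), but it shows why even a 90 or 95 percent reemployment rate would still leave millions structurally out of work.

```python
# Back-of-the-envelope reading of Furman's point, using a hypothetical pool of
# 100 million low-wage workers (the real figures differ).
workforce = 100_000_000
displaced = 0.80 * workforce                      # jobs lost to automation
for reemployment_rate in (0.90, 0.95):
    left_out = displaced * (1 - reemployment_rate)
    print(f"{reemployment_rate:.0%} reemployed -> {left_out / 1e6:.0f} million structurally not working")
```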

Charles Murray is an academic given to racist pseudoscience and an alleged meritocrat who embraces Sarah Palin, but politics make for strange bedfellows, so he’s currently aligned with liberal progressives and Silicon Valley libertarians in promoting Universal Basic Income.

Beyond the questions of whether UBI is the right tack to take during the early stages of the Digital Age and whether it’s fiscally feasible, there’s the matter of how it would be executed if we were to do it. A hammer can be a tool or a weapon depending on how you swing it, and UBI could be a means to aid struggling Americans or it could be a punitive measure. Even a grandmother-murdering machine like P90X bro Paul Ryan might get excited about Basic Income should he be able to use it to dismantle all other safety nets, Social Security included. Even for retired folks who never made great salaries, replacing Social Security with a UBI check would markedly reduce incomes that make for pretty bare existences to begin with.

Not really surprised that Murray is in this camp as well, hoping to seem like a big-hearted person worried about technological unemployment while he’s really jonesing to do away with the so-called “welfare state.” In his WSJ “Saturday Essay” on the topic, he writes, “The UBI is to be financed by getting rid of Social Security, Medicare, Medicaid, food stamps, Supplemental Security Income, housing subsidies, welfare for single women and every other kind of welfare and social-services program, as well as agricultural subsidies and corporate welfare.” Think of all the jobs this would create in the funeral-parlor sector!

The opening:

When people learn that I want to replace the welfare state with a universal basic income, or UBI, the response I almost always get goes something like this: “But people will just use it to live off the rest of us!” “People will waste their lives!” Or, as they would have put it in a bygone age, a guaranteed income will foster idleness and vice. I see it differently. I think that a UBI is our only hope to deal with a coming labor market unlike any in human history and that it represents our best hope to revitalize American civil society.

The great free-market economist Milton Friedman originated the idea of a guaranteed income just after World War II. An experiment using a bastardized version of his “negative income tax” was tried in the 1970s, with disappointing results. But as transfer payments continued to soar while the poverty rate remained stuck at more than 10% of the population, the appeal of a guaranteed income persisted: If you want to end poverty, just give people money. As of 2016, the UBI has become a live policy option. Finland is planning a pilot project for a UBI next year, and Switzerland is voting this weekend on a referendum to install a UBI.

The UBI has brought together odd bedfellows. Its advocates on the left see it as a move toward social justice; its libertarian supporters (like Friedman) see it as the least damaging way for the government to transfer wealth from some citizens to others. Either way, the UBI is an idea whose time has finally come, but it has to be done right.

First, my big caveat: A UBI will do the good things I claim only if it replaces all other transfer payments and the bureaucracies that oversee them. If the guaranteed income is an add-on to the existing system, it will be as destructive as its critics fear.

Second, the system has to be designed with certain key features. In my version, every American citizen age 21 and older would get a $13,000 annual grant deposited electronically into a bank account in monthly installments. Three thousand dollars must be used for health insurance (a complicated provision I won’t try to explain here), leaving every adult with $10,000 in disposable annual income for the rest of their lives.•

Elon Musk has been on a Nick Bostrom bender for a while now, spending big money hoping to counter Homo sapiens-eradicating AI, after devouring the Oxford philosopher’s book Superintelligence. This week, the Mars-positive mogul contended humans are almost definitely merely characters in a more advanced civilization’s video game, something Bostrom has theorized for quite some time. Two excerpts follow: 1) The opening of John Tierney’s excellent 2007 NYT article, “Our Lives, Controlled From Some Guy’s Couch,” and 2) Ezra Klein’s Vox piece about Musk’s Sims-friendly statements.


From Tierney:

Until I talked to Nick Bostrom, a philosopher at Oxford University, it never occurred to me that our universe might be somebody else’s hobby. I hadn’t imagined that the omniscient, omnipotent creator of the heavens and earth could be an advanced version of a guy who spends his weekends building model railroads or overseeing video-game worlds like the Sims.

But now it seems quite possible. In fact, if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation.

This simulation would be similar to the one in The Matrix, in which most humans don’t realize that their lives and their world are just illusions created in their brains while their bodies are suspended in vats of liquid. But in Dr. Bostrom’s notion of reality, you wouldn’t even have a body made of flesh. Your brain would exist only as a network of computer circuits.

You couldn’t, as in The Matrix, unplug your brain and escape from your vat to see the physical world. You couldn’t see through the illusion except by using the sort of logic employed by Dr. Bostrom, the director of the Future of Humanity Institute at Oxford.

Dr. Bostrom assumes that technological advances could produce a computer with more processing power than all the brains in the world, and that advanced humans, or “posthumans,” could run “ancestor simulations” of their evolutionary history by creating virtual worlds inhabited by virtual people with fully developed virtual nervous systems.•


From Klein:

By far the best moment of Recode’s annual Code Conference was when Elon Musk took the stage and explained that though we think we’re flesh-and-blood participants in a physical world, we are almost certainly computer-generated entities living inside a more advanced civilization’s video game.

Don’t believe me? Here’s Musk’s argument in full: 

The strongest argument for us being in a simulation probably is the following. Forty years ago we had pong. Like, two rectangles and a dot. That was what games were.

Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it’s getting better every year. Soon we’ll have virtual reality, augmented reality.

If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let’s imagine it’s 10,000 years in the future, which is nothing on the evolutionary scale.

So given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we’re in base reality is one in billions.

Tell me what’s wrong with that argument. Is there a flaw in that argument?•
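The “one in billions” line is just a counting argument, easy to restate with made-up numbers: if simulated minds vastly outnumber biological ones, a randomly chosen observer is almost certainly simulated. The sketch below makes the assumptions explicit, which is also where skeptics push back.

```python
# The counting argument behind the "one in billions" line, with made-up numbers.
base_reality_minds = 1                 # observers living in the one physical world
simulations = 1_000_000_000            # hypothetical consoles/set-top boxes running sims
minds_per_simulation = 1               # even a single simulated mind apiece is enough
simulated_minds = simulations * minds_per_simulation

p_base_reality = base_reality_minds / (base_reality_minds + simulated_minds)
print(f"Chance a random observer is in base reality: {p_base_reality:.1e}")
# prints roughly 1.0e-09, i.e. "one in billions"; the conclusion is baked into the assumed counts.
```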

Currently marching to the end of everything else I’m reading so I can start Robin Hanson’s The Age of Em: Work, Love and Life when Robots Rule the Earth, which sounds like remarkable science fiction, though the author insists it’s science–or will be soon enough. 

“Em” refers to brain emulations, computer reproductions of top-notch human brains which will provide gray matter for robots. These ems will then grow that intelligence far beyond our abilities. It will be something like Moore’s Law for intellect. We can use this method to produce inexpensive armies of ems to handle all the work, with Hanson predicting the world economy could continually increase at a heretofore impossible pace. Or maybe the ems will grow resentful and harm us. Perhaps a little of both.

Here’s a fuller description from the book’s website:

Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer, and you have a robot brain, but recognizably human.

Train an em to do some job and copy it a million times: an army of workers is at your disposal. When they can be made cheaply, within perhaps a century, ems will displace humans in most jobs. In this new economic era, the world economy may double in size every few weeks.•

The writer has a timeframe of roughly a century for when his outré vision can be realized. You know me: I always bet the way, way over when it comes to such dizzying visions. 

Hanson just conducted an AMA at Reddit on this topic and others. A few exchanges follow.


Question:

I understand how brain emulations could make things cheaper by flooding labour markets, but they will still only be as smart as the brains they were emulated from. Won’t scientific progress still be constrained by the upper limits of human intellect? Is there any way for brain emulations to get smarter than humans? I am aware that they could think faster than humans because they run on computers.

In your talks about brain emulations, you say that biological humans will have to buy assets to make money. Since the economy will grow very quickly with lots of emulated workers, it won’t take very many assets to generate a decent income. You also say that brain emulations will not earn very much money because there will be so many of them that wages will fall to the cost of utilities. Why don’t brain emulations buy assets like humans are supposed to in this future economy, and where are humans supposed to get the wealth to buy assets from since they won’t be able to work?

Robin Hanson:

Eventually, ems will find ways to make their brains smarter. But I’m not sure that will make much difference.

Humans need to buy assets before they lose their ability to earn wages. After is too late.


Question:

If and when Em like entities come into existence do you think society will embrace them be against them and actively try to stop them or will it be a case of “ready or not here I come” and they will force themselves upon us as their emergence will be like evolution?

Robin Hanson:

Most places will probably try to go slow, with commissions, reports, small trials, etc. A few places will let ems go wild, perhaps just due to neglect. Those few places can quickly grow to dominate the world economy. This may induce conflict, but eventually places allowing ems will win. Ems may resent and even retaliate against the places that tried to prevent them or hold them back.


Question:

So in this new economy humans wont actually be getting anymore “stuff” as all the growth will come from demand created by these Em?

Robin Hanson:

Humans will own a big % of the em economy, and use it to buy lots of “stuff” from ems.


Question:

Will we live in a utopia in 100 years?

Robin Hanson:

I don’t think humans are capable of seeing any world, no matter how nice, as “utopia”. We raise our standards and compete for relative status.•

Bill Gates is concerned Artificial Intelligence might become evil, but could it ever be more evil than the young Bill Gates? No such algorithm exists!

The former Dear Leader of the authoritarian state known as Microsoft, now a sweater-clad, avuncular philanthropist, is excited about where machine intelligence is headed in the near-term future but knows both Weak AI (which is happening) and Strong AI (which theoretically could) pose challenges. 

From Ina Fried at Recode:

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge. 

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates on Wednesday at the Code Conference. “This is what it was all leading up to.”

However, as he said in an interview with Recode last year, such machine capabilities will pose two big problems.

The first is, it will eliminate a lot of existing types of jobs. Gates said that creates a need for a lot of retraining but notes that until schools have class sizes under 10 and people can retire at a reasonable age and take ample vacation, he isn’t worried about a lack of need for human labor.

The second issue is, of course, making sure humans remain in control of the machines.•

The Immortality Industrial Complex will not make you live eternally, but there still will likely be benefits to the research. We might get more bang for the buck if we were focused on incremental improvements rather than moonshots, sure, but the Silicon Valley megabucks privately funding the search for a “permanent cure” wouldn’t be available at all if it were not for the promise of people with stock options getting to live forever.

From Adam Piore’s “The Immortality Hype” at Nautilus:

The quest to extend longevity makes perfect sense in Silicon Valley, explains Lindy Fishburne, a longtime lieutenant of Thiel’s, in her stately office in San Francisco’s Presidio, a former military base that sits on a pictorial tip of the San Francisco Bay. “It’s the engineering culture which says we’ll build our way out of it, we’ll code our way out of it, there has to be a solution. I also think it’s coupled with a very unique optimism that is pervasive in Silicon Valley.”

The big goal of the Silicon Valley titans is not to extend longevity by beating back cancer, heart disease, Alzheimer’s disease, or any of the other diseases that most of us succumb to. Rather it’s to use molecular biology to decode the very mechanics behind the process that is the biggest single risk factor in all of these diseases—the process of aging itself—and to attempt to halt it in its tracks. In recent years, researchers have made undeniable strides in decoding the cellular processes that go awry as we age.

The mainstream press has amplified the research into the second coming of Ponce de Leon. “Can Google Solve Death?” read a Time magazine cover in 2013. Veteran aging scientists bristle at the invocation of the “I” word (immortality). Even the most leading-edge studies in molecular biology today, they point out, including those done by the top scientists recruited to work for Google’s Calico, do not promise aging—let alone death—can be solved or cured.

The hype “has a bad effect because it makes the field look like we’re focusing on something that is not achievable,” says Felipe Sierra, the director of the Division of Aging Biology at the National Institute on Aging, of the NIH. It also obscures the significant research that is being done to identify the mechanisms of aging. “The positive side is that people are starting to understand that our goal is to improve health for everybody and not the particular patient that has one disease. It’s a more holistic approach.”

Although it has a Magic 8 Ball vibe, the artificial hive mind UNU can’t offer vague retorts, so it’s a good thing the “brain of brains,” which operates on a swarm-intelligence principle, fared well with Oscar predictions and nailed the Kentucky Derby Superfecta. Turning its attention to the volatile realm of politics, UNU conducted a Reddit AMA, answering all things Trump, Hillary, Bernie and more. A few exchanges follow.


Question:

Where do you source your swarm intelligence from?

UNU:

UNU is built as an open platform, so anyone can create their own Swarm Intelligence and populate it with people. When UNU predicted the Kentucky Derby and got the Superfecta right, we put an ad on Reddit and asked for volunteers who know about horse racing. We also put ads out on other sources like Amazon.

That said, a totally different group predicted the Trifecta correctly for the Preakness, two weeks after the Kentucky Derby and that one was fielded by a reporter, herself (Hope Reese, TechRepublic). She pulled together her own swarm, made her own predictions, and they more than doubled their money on Preakness day.

So, there’s lots of ways to form a swarm. The one thing that seems to always be true – the swarm will out-perform the individual members. For both the Preakness and Kentucky Derby, for example, none of the individual participants got the prediction right on their own. Only as a swarm did they win.


Question:

How is this different from a real time poll?

UNU:

Since the system relies entirely on human knowledge and even instinct, it’s easy to think of it as a kind of crowdsourcing platform for opinions and intelligence. But according to Rosenberg, UNU doesn’t work like a poll or a survey that finds the average of the opinions in a group. Instead, it creates an artificial swarm that amplifies a group’s intelligence to create its own. For instance, when predicting the Derby winners, the group picked the first four horses accurately to win $11,000 in a grand bet called Superfecta. But individually, when asked to make the same predictions, none of the participants had more than one winning horse.


Question:

Hi UNU, I’ll ask the obvious question. Who will be the next President?

UNU:

UNU SAYS: “Hillary Clinton”

COMMENTARY: This was a difficult decision for UNU, with the swarm highly divided.


Question:

Which of the current running candidates have the best skills suited for president of the United States?

UNU:

UNU SAYS: “Bernie Sanders”

COMMENTARY: UNU was asked to pick among Trump, Clinton, and Sanders and had a preference for Sanders.


Question:

IF Bernie wins the nomination, how would he do against Trump?

UNU:

UNU SAYS: “WIN’S BIG”

COMMENTARY: UNU expressed strong conviction that Bernie Sanders would win big against TRUMP.


Question:

Voter turn out will be driven most by support for a candidate or dislike of a candidate?

UNU:

UNU SAYS: “DISLIKE OF A CANDIDATE”

COMMENTARY: UNU had VERY strong conviction on this point – 100% certainty.


Question:

What are the odds of campaign finance reform during a Clinton presidency (or any upcoming presidency for that matter)?

UNU:

UNU SAYS: 0% CHANCE

COMMENTARY: UNU has strong conviction on this point, expressing little faith that real campaign finance reform will occur.


Question:

Who will Donald Trump pick for Vice President?

UNU:

UNU SAYS: “Chris Christie”

COMMENTARY: UNU has high conviction at the present time, although it’s still very early to make such a pick.


Question:

How similar would Trump be to Ronald Reagan if he won the presidency?

UNU:

UNU SAYS: “NOT SIMILAR AT ALL”

COMMENTARY: UNU expressed high conviction, showing 90% certainty in his answer.•

In the 1990s, it was often said that Salon was the future of journalism. In the saddest possible way, that’s pretty much what happened.

If the rise of the Internet had meant lots of great traditional news publications being usurped by wonderful new online ones, something roughly equal to what was lost would have been gained. It’s possible we could even have come out ahead. As the mighty have been laid low, however, the great publications of tomorrow never arrived. Salon still publishes some talented writers, but its grand ambitions have long been buried under a mountain of debt, and the site now makes desperate attempts at clickbait. The world’s best publications, from the New York Times to the Guardian to the Financial Times, all soldier on wounded, hopefully not mortally, as digital revenue hasn’t come close to replacing vanished print advertising dollars.

Occasionally, these publications report on each other’s decline. The New York Times, which recently covered the chaos at the Las Vegas Review-Journal, is going through its own latest round of turbulence. The New York Post is routinely gleeful about the Times’ troubles, though even during greener times, Rupert Murdoch’s crummy tabloid lost tens of millions a year and has no business-related reason to exist. Vanity Fair also has a report on the latest turmoil at the Times, which will likely end in hundreds of layoffs. Of course, Conde Nast has gone through its own rounds of deep staff cuts and now seems to be pinning part of its future on video, which may be more false idol than savior.

It’s not so much a circular firing squad as a wake attended only by the walking dead.

Excerpts from two articles follow: 1) Kelsey Sutton and Peter Sterne’s Politico piece “The Fall of Salon.com,” and 2) Sarah Ellison’s Vanity Fair report “Can Anyone Save the New York Times From Itself?”


From Politico:

Eleven current and former staffers also said Daley assigned staffers to write repeatedly about certain subjects that he believed would drive high traffic. If traffic was too low, according to six former staffers, Daley would go into the CMS and write posts himself, often posting them under the byline “Salon Staff.” To improve the amount of traffic on Saturdays and Sundays, certain staffers were asked to work over the weekend and post short video clips from television programs like “The Daily Show.”

It became harder to find “high-quality” work amid all the clutter. Twelve current and former employees said they were discouraged from doing original journalism out of a concern that time spent reporting could be better spent writing commentary and aggregate stories. Even the site’s marquee names, like Walsh and Miller, were expected to produce quick hits and commentary on trending topics, staffers said.

The strategy alienated some of Salon’s longtime journalists.

“The low point arrived when my editor G-chatted me with the observation that our traffic figures were lagging that day and ordered me to ‘publish something within the hour,’” Andrew Leonard, who left Salon in 2014, recalled in a post. “Which, translated into my new reality, meant ‘Go troll Twitter for something to get mad about — Uber, or Mark Zuckerberg, or Tea Party Republicans — and then produce a rant about it.’ … I performed my duty, but not without thinking, ‘Is this what 25 years as a dedicated reporter have led to?’ That’s when it dawned on me: I was no longer inventing the future. I was a victim of it. So I quit my job to keep my sanity.”•


From Vanity Fair:

As such, in February, former economics reporter and Washington bureau chief David Leonhardt was tasked with re-examining the entire structure of the newsroom. In the past, layoffs have been treated as a numbers game. Now, larger questions are being asked about the existence of sections and the traditional desk structure. There’s also much more pressure to toe the business line. In his announcement of Leonhardt’s role, Baquet referenced “cost” twice. “It’s made everyone uneasy,” one editor told me. Soon after Leonhardt’s appointment, the New York Post reported that the Times was preparing to lay off a “few hundred staffers.”

Times spokespeople dismissed the report, but a few days later the company announced the dismissal of 70 employees in its Paris editing and production facility alone. In May, the company announced a new round of buyouts, a move largely seen as a precursor to at least 200 newsroom layoffs early next year, according to three Times staffers. “Every time this happens,” one former editor told me, “it’s a dark cloud that hangs over the newsroom for months.” Prior to the buyout announcement, Baquet put out a memo explaining that the newsroom “will have to change significantly—swiftly and fearlessly.” When I asked him about the “at least 200” figure, he said, “I’ve said there will be cuts, but I don’t know what the right size is at this point.”

It is impossible to imagine a world without The New York Times. But it is also increasingly impossible to imagine how The New York Times, as it is currently configured, continues to exist in the modern media world.•

Just one last thing I wanted to mention about John Markoff’s Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, which I read earlier this year and enjoyed, even though I have a sharp disagreement with the book’s underlying principle.

The writer is concerned that as Artificial Intelligence and Intelligence Augmentation battle for our research dollars, we may ultimately head down a path that sees humans replaced rather than fortified. It’s noble that Markoff wants us to question the technologists of today about tomorrow’s machines, but believing we can coolly and soberly choose between these two outcomes seems farfetched to me. Humans consistently make perplexing choices, as exemplified by our glacial transition from fossil fuels when the large majority of us accept that their use could doom us.

Three points:

  1. Competition for machine dominance doesn’t occur in a vacuum, and the race for the future will occur within companies and among companies, within countries and among countries. If China or the U.S. or some other state develops an A.I. which would give it a sizable edge economically or militarily, other players would try to replicate it.
  2. You can’t discount the human need to discover answers, to work a puzzle to completion, even one that results in an endgame for us. In our search for greater intelligence, it’s possible we’re clever enough to finish ourselves. Humans are commanded by many non-rational forces.
  3. Negatives aren’t always known at the outset. When the internal-combustion engine made electric- and steam-powered vehicles obsolete, nobody thought someday a remarkably useful conveyance being powered by fossil fuels might doom humanity. We won’t always know about the next unintended consequences when working on AI and IA.

To the book’s end, Markoff maintains these decisions will be conscious ones, though a late passage asks a confounding question that (somewhat) undermines his theory. The excerpt:

In 2013, when Google acquired DeepMind, a British artificial intelligence firm that specializes in machine learning, popular belief held that roboticists were close to building completely autonomous robots. The tiny start-up had produced a demonstration that showed its software playing video games, in some cases better than human players. Reports of the acquisition were also accompanied by the claim that Google would set up an “ethics panel” because of concerns about potential uses and abuses of the technology. Shane Legg, one of the cofounders of DeepMind, acknowledged that the technology would ultimately have dark consequences for the human race. “Eventually, I think human extinction will probably occur, and technology will likely play a part in this.” For an artificial intelligence researcher who had just reaped hundreds of millions of dollars, it was an odd position to take. If someone believes that technology will likely evolve to destroy humankind, what could motivate them to continue developing that same technology?•

Tags:


Cities have been pretty much cities throughout history, and tomorrow's urban centers won't differ greatly from today's in the more obvious physical ways. There will probably be some new infrastructure to deal with rising sea levels, and fixtures like phone booths will occasionally come and go, but the buildings will still look like buildings.

The real changes will be more subtle, so quiet you won't even hear a hum. In the same way driverless cars will carry on "conversations" with one another and with all types of gadgets in the cloud, the Internet of Things will allow a city's skyscrapers and furniture to communicate with its inhabitants and collect endless information about them. Much of that new reality will be beneficial, helping to ease traffic and lower crime, but it will also place all of us inside a machine with no opt-out button.

In a Curbed interview conducted by Patrick Sisson, MIT's Carlo Ratti, author of The City of Tomorrow, discusses smart buildings, among other topics. An excerpt:

Question:

One of the topics you discuss in your book is this idea of buildings being more reactive and smart. How interactive will architecture get, and how will it change the look of our cities?

Carlo Ratti:

I think it'll be very interactive. But overall, the interaction will happen through people; our lives will change a lot, but public space won't. A city from Roman times doesn't look terribly different from a city today. The shift is more about how our human life and interactions in the city will change, not the shapes of buildings. That's where we'll see a lot of transformation.

Question:

It’s not really as much about infrastructure changes, but how we interact with the infrastructure.

Carlo Ratti:

Yes. The city will talk to us more. We’ll have new buildings, new materials, and more interactive facades, but overall, the key components will remain the same. Buildings are about horizontal floors for living, vertical walls for partitions, facades that protect us from the outside, and windows that give us a view of the outside. They were like that a hundred years ago, and they’ll be there tomorrow and in the future.

Question:

What are some great examples of these new types of buildings and architecture?

Carlo Ratti:

The project we did at the World Expo in Zaragoza, Spain, the Digital Water Pavilion, offered a vision of digital, fluid architecture. Think about a park; there are so many things you can do, between interactive lights and more responsive technology. This coming technological change is like the internet. That transformed so many parts of our lives, and the upcoming Internet of Things will do the same to our environment and cities. For instance, the city of Melbourne successfully developed an "internet of trees," which allows residents to visualize and map urban forests. It's a platform, like an open street map for trees, that will help them grow, monitor, and measure, and help people take care of their parks, and compare them against those of other cities.•

Tags:


Yuval Noah Harari's Sapiens, a book of history and speculation, was my favorite read of 2015. He has a follow-up coming later this year, Homo Deus: A Brief History of Tomorrow, which extends the forecasting element of the earlier book, probably its most debated aspect. The hopeful cover line, "What made us sapiens will make us gods," is offset by dire predictions that AI and automation will lead to a class of people "useless" politically and economically. Harari thinks solutions will have to be found in policy, something that's true if even part of his prognostications pans out, but in America we're currently not great at bipartisan problem solving.

From Ian Sample at the Guardian:

AIs do not need more intelligence than humans to transform the job market. They need only enough to do the task well. And that is not far off, Harari says. “Children alive today will face the consequences. Most of what people learn in school or in college will probably be irrelevant by the time they are 40 or 50. If they want to continue to have a job, and to understand the world, and be relevant to what is happening, people will have to reinvent themselves again and again, and faster and faster.”

Even so, jobless humans are not useless humans. In the US alone, 93 million people do not have jobs, but they are still valued. Harari, it turns out, has a specific definition of useless. “I choose this very upsetting term, useless, to highlight the fact that we are talking about useless from the viewpoint of the economic and political system, not from a moral viewpoint,” he says. Modern political and economic structures were built on humans being useful to the state: most notably as workers and soldiers, Harari argues. With those roles taken on by machines, our political and economic systems will simply stop attaching much value to humans, he argues.

None of this puts us in the realm of the gods. In fact, it leads Harari to even more bleak predictions. Though the people may no longer provide for the state, the state may still provide for them. “What might be far more difficult is to provide people with meaning, a reason to get up in the morning,” Harari says. For those who don’t cheer at the prospect of a post-work world, satisfaction will be a commodity to pay for: our moods and happiness controlled by drugs; our excitement and emotional attachments found not in the world outside, but in immersive VR.

All of which leads to the question: what should we do?•

Tags:


In our time, the wrong-minded and dangerous anti-vaccination movement has frustrated efforts to control and eradicate a variety of devastating diseases. Historically there have been numerous flies in the ointment that have similarly inhibited efforts to control contagions, from the rise and fall of religions to global exploration to government malfeasance to economic shifts. An interesting passage on the topic from Annie Sparrow's New York Review of Books piece on Sonia Shah's Pandemic: Tracking Contagions, from Cholera to Ebola and Beyond:

Shah describes those conditions in “Filth,” a chapter devoted to human excrement. She attributes the decline in sanitation in the Middle Ages to the rise of Christianity. Hindus, Buddhists, Muslims, and Jews all have built hygiene into their daily rituals, but Christianity is remarkable for its lack of prescribed sanitary practices. Jesus didn’t wash his hands before sitting down to the Last Supper, setting a bad example for centuries of followers. Christians wrongly blamed plague on water, leading to bans on bathhouses and steam-rooms. Sharing homes with livestock was normal and dung disposal a low priority. Toilets took the form of buckets or open defecation. The perfume industry, covering the stink, thrived.

During the seventeenth century, these medieval practices were exported to Manhattan, where wells for drinking water were only thirty feet deep, easily contaminated by the nightly dump of human waste. Nineteenth-century New Yorkers tried to make their water palatable by boiling it into tea and coffee, which killed cholera. But the arrival of tens of thousands of immigrants overwhelmed these weak defenses, and the city succumbed to two devastating cholera epidemics.

Corrupt economic gain, a recurrent theme in the history of cholera, is illustrated by the story of how a powerful Manhattan company—the future JPMorgan again—was established by diverting money from public waterworks to 40 Wall Street. This resulted in half a century of unsafe drinking water as the city abandoned plans to pump clean water from the Bronx and substituted well water from lower Manhattan slums. In a more recent case, the 2008 subprime mortgage collapse fostered by JPMorgan Chase and others in the banking industry left thousands of homes abandoned in South Florida. Their swimming pools of stagnant water provided ideal breeding grounds when Aedes mosquitoes arrived in 2009 carrying dengue fever. In part as a result, this tropical disease is now reestablished in Florida and Texas, transmitted by the same mosquito that carries yellow fever, West Nile, and Zika virus.•

Tags:

If and when 3D printers become excellent and cheap and ubiquitous (3D printers printing out more 3D printers), it will be fascinating to see the effect that tool has on manufacturing. Will small start-up car companies become a possibility? Will brands be besieged? Will large corporations be usurped?

It’s worth remembering the rise of the personal computer did not lead to everyone writing their own software. Individuals still deferred to experts. It just made room for new companies to elbow aside yesterday’s giants. 3D printers could operate along the same lines, but my guess is they’ll have a more destabilizing effect. Maybe not for companies that traffic primarily in information but for those that deal in physical products.

In a Singularity Hub article by Jason Dorrier, Deloitte’s John Hagel looks at a couple of possible business scenarios of tomorrow. He believes companies will have to pivot quickly when threatened and individual workers will soon have “freedom and flexibility,” which sound (unintentionally) like Gig Economy euphemisms. An excerpt:

Speaking at Singularity University’s Exponential Manufacturing conference in Boston, Hagel outlined a powerful, decades-long economic trend his group calls the “big shift.”

Hagel believes understanding the big shift is key to navigating an increasingly uncertain economy driven by digital technology, liberalization, and globalization. The question is less about whether the big shift is on and more about where it’s taking us. And according to Hagel, two competing visions vie for our economic future.

“There’s one side of the debate which argues that the impact of all this digital technology is to fragment everything,” Hagel says. “We’re all going to become free agents—independent contractors will loosely affiliate when we need to around specific projects. But basically, companies are dinosaurs. We’re going to fragment down to the individual. The gig economy to the max. That’s one side.”

Another view, Hagel says, suggests we’re moving toward a winner-take-all economy in which network effects enable a few organizations—the Googles or Facebooks of the world—to capture most of the wealth while everyone else is marginalized.

“You couldn’t have two more extreme positions,” Hagel said. “Which one is right?”•

Tags:


In an age of small, endless choices and a few spectacles, the fast-paced violence of the NFL has come to dominate television in the U.S. Key to the adrenaline rush is, of course, gambling in its many forms, ubiquitous in our decentralized age. Jimmy “the Greek” Snyder, the ego-driven Vegas oddsmaker, did as much as anyone in the pre-Internet Era to legitimize gambling in America, to prep us for what was to come. The point-spread playa lived for decades on the edge before going over it, crapping out thanks to jaw-dropping bigoted comments. Come to think of it, not only has his yen for wagering reached its fullest expression in our time, but his disqualifying ethnic remarks have sadly entered into our mainstream politics.

In addition to his casino and TV work, “the Greek” did public relations for the eccentric billionaire Howard Hughes, who essentially buried himself alive. From a 1974 People article:

People:

What do you do for a living?

Jimmy the Greek:

Basically, I’m a PR man. I have a firm called Jimmy the Greek’s Public Relations, Inc. We have offices in Las Vegas and Miami, 19 people on the staff, and we gross about $800,000 a year, representing companies like National Biscuit Company—the candy division—and Aurora Toys. For three-and-a-half years, I handled PR for Howard Hughes.

People:

What did you do for Hughes?

Jimmy the Greek:

Different things. Hughes was opposed to atomic testing so close to Las Vegas. Every time there was a megaton-plus test, the windows of the hotel shook, and there were already cracks in some of the buildings. He didn’t want the people he brought to Vegas hurt. Mostly, he was afraid of the radiation. Mr. Maheu, his manager, would call and say, ‘Mr. Hughes is against megaton-plus testing, Jimmy.’ And I’d say, ‘Well, what else?’ And he’d say, ‘That’s it, Jimmy.’ And you were on your own from there on. I was very happy working for him. And $175,000 a year isn’t hay.•


“We are saddened that our 12-year association with him ended this way.”


It's been said by some economists that widening wealth inequality is unimportant if everyone is getting somewhat richer. I've never agreed. That much money concentrated at the very top of a society will come back to haunt it, in the form of undue political power or in other ways. The Libertarian billionaire Peter Thiel insinuating himself into the Gawker-Hulk Hogan trial is a case in point.

If you’d told me five years ago that Gawker might go under because it needlessly published a Hulk Hogan sex tape, I would have thought, Yes, that sounds about right. A dicey if occasionally righteous publication from the start, the site had come to house a few too many immature, prurient, destructive employees. They chose their fights stupidly, myopically, maybe fatally.

That seemed to be the end of the story: Hulk Hogan is dumb, and Gawker even dumber, somehow making him seem sympathetic. That’s not how it had to be. If the site had leaked just the part of the video in which the former professional wrestler made his ugly racist remarks, the company would have been widely supported. But Gawker being Gawker, it pointlessly aimed for the crotch. End of story, it seemed.

But then it was revealed that Thiel had been quietly bankrolling the Hogan suit, trying to use his endless cash to put the publication out of business as part of a personal vendetta. It’s a chilling action, one that creates a template for the megarich to cow our press, a bloodless analogue to Russian plutocrats “relieving” journalists of their duties. It’s even worse behavior than Thiel being a delegate for a bigoted, xenophobic horror like Donald Trump, who has himself threatened to curb the powers of the press should he become President. If the country is guided by the thin skin of the super-rich rather than the parchment of the Constitution, we’re not exactly America.

In the Washington Post, Vivek Wadhwa writes of Thiel’s wrongheaded gambit and Silicon Valley’s general resistance to media scrutiny. The opening:

Gawker infringes on privacy and publishes tabloid-like stories that damage reputations. It is one of the most sensationalist and objectionable media outlets in the country. It also has not been kind to me. So it’s not a company that I would expect to be defending. But I worry that the battle that billionaire Peter Thiel has clandestinely been waging against it will be damaging to Silicon Valley by furthering distrust of its motives.

For better or worse, Gawker is entitled to the same freedom as any other news outlet. If it crosses the line, as it likely did with wrestler Hulk Hogan, the courts should deal with it. Silicon Valley’s power brokers should not get involved because they have access to resources that rival those of governments. They can outspend any other entity and manipulate public opinion.

Silicon Valley has more than an unfair advantage; its technologies exceed anything that the titans of the industrial age had. These technologies were built on the trust of the public — and that is needed for an industry that asks customers to share with them literally every part of their lives. This enormous influence should come with restraint and an understanding that those with power will be scrutinized — sometimes unfairly and unjustly.•

Tags:
