Science/Tech


The endless fetishization of food is mind-numbing, but Alice Driver’s story at Vice “Munchies” takes a smart, offbeat approach to the topic, wondering about the future of nutrition and how we’re going to feed a growing population without further imperiling the environment. She does so while in Mexico, trailing outré chef Andrew Zimmern, who fears he will be viewed as the “fat white guy [who] goes around world eats fermented dolphin anus, comes home.” Zimmern thinks Soylent–or perhaps something else we can’t even yet visualize–will eventually be the meal of the poor. I would think lab-grown food will play a significant role.

An excerpt:

I was skeptical of the argument that Soylent was simply a McDonald’s alternative, but I found Zimmern’s second point—that Soylent would be the food of the future for the poor—more compelling. He explained, “I’m at this strange intersection where I’m talking to all these different people about it. You can’t tell me when you’re turning crickets into cricket flour to put in a protein bar and masking it with ground up cranberries and nuts—you can’t tell me that that’s eating crickets or grasshoppers. It’s not. You’re eating a ground-up natural protein source. I would think that solving hunger problems in poverty-stricken areas, it’s probably better to give people a healthy nutri-shake or something once a day. What drives a lot of investigation of alternative foods is hunger and poverty. Ten years ago I told everybody, ‘Yes, it’s going to be bugs. It’s going to be crickets.’ Today, I think it’s going to be something else that we just don’t know yet because you’re talking about 50 years from now or 20 or ten years from now—who knows what we’re going to have invented by then?”

I tried to imagine the world’s poor subsisting off of Soylent, but I couldn’t help feel that there was something perverse about that solution to world hunger.

Meanwhile, in Oaxaca, Zimmern focused on learning about traditional pre-Hispanic cuisine, in which insects played a prominent role.•


There is a fascinating premise underpinning Steven Levy’s Backchannel interview with Jerry Kaplan, provocatively titled “Can You Rape a Robot?”: AI won’t need to become conscious for us to treat it as such, or for the new machines to require a very evolved sense of morality. Kaplan, the author of Humans Need Not Apply, believes that autonomous machines will be granted agency if they can merely mirror our behaviors. Simulacra at an advanced level will be enough. The author thinks AI can vastly improve the world, but only if we’re careful to make morality part of the programming.

An exchange:

Steven Levy:

Well by the end of your book, you’re pretty much saying we will have robot overlords — call them “mechanical minders.”

Jerry Kaplan:

It is plausible that certain things can [happen]… the consequences are very real. Allowing robots to own assets has severe consequences and I stand by that and I will back it up. Do I have the thing about your daughter marrying a robot in there?

Steven Levy:

No.

Jerry Kaplan:

That’s a different book. [Kaplan has a sequel ready.] I’m out in the far future here, but it’s plausible that people will have a different attitude about these things because it’s very difficult to not have an emotional reaction to these things. As they become more a part of our lives people may very well start to inappropriately imbue them with certain points of view.•


It’s not certain that this time will be different, that automation will lead to technological unemployment on a large scale, but all the ingredients are in place. Such a shift would make us richer in the aggregate, but how do we extend the new wealth beyond the owners of capital? 3-D printers may become ubiquitous and make manufacturing much less expensive, leading to cheaper prices and abundance. But those no longer employable in the new arrangement will still need food, shelter and other basics.

Sure, it’s possible as-yet-unimagined fields will bloom in which humans won’t be “redundancies,” but what if they don’t, or if there aren’t enough of them? What then?

In an FT Alphaville post, Izabella Kaminska writes of Citi’s latest “Disruptive Innovations” report, which suggests, among other remedies, universal basic income. An excerpt:

Could this time be different, in that where previous manifestations of “robot angst” created new and usually better jobs and sectors to replace those lost, this time there is no automatism for better job creation once existing jobs become redundant?

If that’s the case, Citi says there may indeed be some feedback between weak aggregate demand and growing polarisation of productivity across workers and firms. And this inevitably leads to larger inequalities in income and wealth.

So what’s to be done?

According to Citi a list of potentially desirable policy measures includes:

a) improve and adapt education and training to better align workers’ skills with the demands of firms and technologies,
b) reduce barriers to reallocating resources, including by reducing barriers to labour mobility and simplifying bankruptcy procedures,
c) increase openness to trade and FDI to facilitate knowledge transfers,
d) increase support for entrepreneurship,
e) improve access to credit for restructuring and retraining, and
f) use the tax-transfer mechanism (e.g. through a guaranteed minimum income for all, or an ambitious negative income tax, public funding of health care and long-term care etc.) to support those left behind by technological advances.

Note with particular attention that last policy recommendation: a basic income for one and all to help society adjust to the new hyper technological environment, in a way that encourages competition and productivity in laggard firms, and dilutes the power of the winner-takes-all corporates.•


More than anything, Steve Jobs was a salesman, maybe the greatest one ever, with a taste for auto-hagiography. Sure, that’s not the total picture. While he had absolutely nothing to do with the creation of Apple I and Apple II, he did ultimately (twice) become the company’s Nudge-in-Chief, hectoring his teams toward perfection the way Ahab urged his crew toward the great white whale.

I can’t wait to see Alex Gibney’s new doc, Steve Jobs: The Man in the Machine, which wonders why the late Apple founder was mourned deeply in office parks as well as Zuccotti Park. In an L.A. Weekly piece, Amy Nicholson sees Gibney’s latest as almost a sequel to his last work, Going Clear, the Cult of Mac being analogous in some ways to Scientology. An excerpt: 

Both Scientology and Apple were founded by now-dead gurus who commanded devotion. Both are corporations that claim to stand for something purer than greed. Neither pays fair taxes. And neither functions openly, speaks freely or tolerates critics.

Where the two films differ is us. Dismantle Scientology, and audiences will cheer. Chink away at the cult of Apple, and we all feel accused. I imagine that people will slink out of Steve Jobs keeping their iPhones guiltily stashed. When they make it a safe distance from the theater, they’ll glide their smartphones in front of their faces, swipe the black monoliths awake and disappear into the dream machines of their own desires: where they want to visit, what they want to hear and who they want to reach. As MIT professor Sherry Turkle describes it, the iPhone that was meant to connect the globe instead made us “alone together.” In the future, will historians wondering how society fractured look to Jobs’ Apple as the original sin?

We love our smartphones. In the eight years since the iPhone 1, they’ve become necessities — almost a human right. Though they’re made of circuits and wires, our attachment to these external brains is personal. They keep us company, and in turn we fondle them, sleep with them, flip out when they break. Which is why we have this documentary about their creator and not docs about the inventors of the subway, the shower, the fridge. Gibney’s film asks “Why did Jobs’ death make us mourn?”•


Not all knowledge can be reduced to pure information–not yet, anyway.

Machines may eventually rise to knowledge, or perhaps humans will be reduced to mere information. The first outcome poses challenges, while the second is the triumph of a new sort of fascism.

In an NYRB piece that argues specifically against MOOCs and more broadly against humans being replaced by machines or encouraged to be more machine-like, David Bromwich is convinced that virtual education is a scary step toward the mechanization of people.

I’m not so dour about MOOCs, especially since not everyone has the privilege of a high-quality classroom situation. Their offerings seem to me an extension of the mission of public libraries: Make the tools of knowledge available to everyone. The presence of both online education and physical colleges simultaneously is the best-case scenario; having one without the other is far worse. Bromwich’s fear, a realistic one, is that traditional higher education will be seriously disrupted by the new order.

From Bromwich:

American society is still on the near side of robotification. People who can’t conjure up the relevant sympathy in the presence of other people are still felt to need various kinds of remedial help: they are autistic or sociopathic, it may be said—those are two of a range of clinical terms. Less clinically we may say that such people lack a certain affective range. However efficiently they perform their tasks, we don’t yet think well of those who in their everyday lives maximize efficiency and minimize considerate, responsive, and unrehearsed interaction, whether they neglect such things from physiological incapacity or a prudential fear of squandering their energy on emotions that are not formally necessary.

This prejudice continues to be widely shared. But the consensus is visibly weaker than it was a decade ago. As people are replaced by machines—in Britain, they call such people “redundant”—the survivors who remain in prosperous employment are being asked to become more machinelike. This fits with the idea that all the valuable human skills and varieties of knowledge are things that can be assimilated in a machinelike way. We can know the quantity of information involved, and we can program it to be poured into the receiving persons as a kind of “input” that eventually yields the desired “product.” Even in this short summary, however, I have introduced an assumption that you may want to stand back and question. Is it really the case that all knowledge is a form of information? Are there some kinds of learning or mental activity that are not connected with, or properly describable as, knowledge?•

 


Not all fast-casual dining is likely to be automated, nor will restaurants with human staff soon become an overwhelming minority. The sector won’t, in the near future, resemble shoemaking, in which a few pairs are still made by hand while almost all are manufactured by machines. I don’t think the change happens that quickly or that absolutely.

But not all (or almost all) of these jobs have to disappear for the sector’s workers to be devastated. In most places, anything out of sight in the kitchen that can be robotized will be, and some visible positions will as well. Of course, some restaurants and hotels and other corners of the hospitality industry will go all in and completely disappear the human element.

I’m not suggesting we dash robot heads with rocks, but we probably need to have some political solutions at hand, should, say, popular dining and the trucking and taxi industries no longer be there to employ tens of millions of Americans. A Plan B would be handy then.

One of the trailblazers in disappearing visible workers is the new digital automat known as Eatsa, the San Francisco cafe I blogged about a couple of days ago. In a smart Atlantic piece, Megan Garber looks at the underlying meaning of this nouveau restaurant beyond its threat of technological unemployment, how it’s selling not just meals but social withdrawal. An excerpt:

The core premise here, though, is that at Eatsa, you will interact with no human save the one(s) you are intentionally dining with. The efficiencies are maximized; the serendipities are minimized. You are, as it were, bowl-ing alone.

That in itself, is noteworthy, no matter how Eatsa does as a business—another branch is slated to open in Los Angeles later this year. If fast food’s core value was speed, and fast casual’s core value was speed-plus-freshness, Eatsa’s is speed-plus-freshness-plus-a lack of human interaction. It’s attempting an automat-renaissance during the age of Amazon and Uber, during a time when the efficiency of solitude has come to be seen, to a large extent, as the ultimate luxury good. Which is to say that it has a very good chance of success.•


Industrial robots are built to be great (perfect, hopefully) at limited, repetitive tasks. But in Deep Learning experiments, the machines aren’t programmed for chores; rather, they teach themselves to master tasks through experience. Since not every situation in life can be anticipated and pre-coded, truly versatile AI needs to autonomously conquer obstacles as they arise. In these trials, the journey has as much meaning as the destination–more, really.

Of course, not everyone would agree that humans are operating from such a blank slate, that we don’t already have some template for many behaviors woven into our neurons–a collective unconscious of some sort. Even if that’s so, I’d think there’ll soon be a way for robots to transfer such knowledge across generations.

One current Deep Learning project: Berkeley’s Brett robot, designed to learn like a small child, though it’s a growing boy. The name stands for “Berkeley Robot for the Elimination of Tedious Tasks,” and you might be tempted to ask how many of them it would take to screw in a light bulb, but it’s already far beyond the joke stage. As usual with this tricky field, the emergence of such highly functional machines may take longer than we’d like, but perhaps not as long as we’d expect.

Jack Clark of Bloomberg visited the motherless “child” at Berkeley and writes of it and some of the other current bright, young things. An excerpt from his report:

What makes Brett’s brain tick is a combination of two technologies that have each become fundamental to the AI field: deep learning and reinforcement learning. Deep learning helps the robot perceive the world and its mechanical limbs using a technology called a neural network. Reinforcement learning trains the robot to improve its approach to tasks through repeated attempts. Both techniques have been used for many years; the former powers Google and other companies’ image and speech recognition systems, and the latter is used in many factory robots. While combinations of the two have been tried in software before, the two areas have never been fused so tightly into a single robot, according to AI researchers familiar with the Berkeley project. “That’s been the holy grail of robotics,” says Carlos Guestrin, the chief executive officer at AI startup Dato and a professor of machine learning at the University of Washington.

After years of AI and robotics research, Berkeley aims to devise a system with the intelligence and flexibility of Rosie from The Jetsons. The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems—and a roomful of toys—to a robot. Since then, the team has published a series of papers that outline a software approach to let any robot learn new tasks faster than traditional industrial machines while being able to develop the sorts of broad knowhow for solving problems that we associate with people. These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.
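That pairing is simple enough to sketch in miniature. The toy below is my own illustration, not the Berkeley team’s software: a few lines of Python in which a small neural network (“perception”) feeds a policy that improves through trial and error. The task, network sizes and learning rate are all invented for the example.

```python
# Toy sketch, not the Berkeley code: a tiny "perception" network turns a raw
# observation into features (the deep-learning half), and a REINFORCE-style
# update improves the action choice over repeated attempts (the
# reinforcement-learning half). Task, sizes and learning rate are invented.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, N_ACTIONS = 8, 16, 3

W1 = rng.normal(0, 0.1, (OBS_DIM, HIDDEN))    # perception: observation -> features
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))  # policy head: features -> action logits

def forward(obs):
    h = np.tanh(obs @ W1)                 # learned features
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                 # action probabilities

def toy_reward(obs, action):
    # Made-up task: reward 1 if the action matches a pattern hidden in the observation.
    return 1.0 if action == int(np.abs(obs[:N_ACTIONS]).argmax()) else 0.0

lr = 0.05
for attempt in range(2000):
    obs = rng.normal(size=OBS_DIM)
    h, probs = forward(obs)
    action = rng.choice(N_ACTIONS, p=probs)
    reward = toy_reward(obs, action)

    # REINFORCE: push the policy toward actions that earned reward...
    grad_logits = -probs
    grad_logits[action] += 1.0            # d log pi(action|obs) / d logits
    grad_h = (W2 @ grad_logits) * (1 - h ** 2)
    W2 += lr * reward * np.outer(h, grad_logits)
    # ...and backpropagate the same signal into the perception weights.
    W1 += lr * reward * np.outer(obs, grad_h)
```

The real robot obviously works at far greater scale–camera images in, motor torques out–but the division of labor is the same one the excerpt describes.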

 


John Lanchester, who wrote one of my favorite articles of the year, “The Robots Are Coming,” in the London Review of Books, returns to that same publication to think about more tinkerers and their machines, namely the Wright brothers and Elon Musk.

The occasion is a dual review of David McCullough’s new work about the former and Ashlee Vance’s of the latter. As the piece notes, the aviation pioneer Wrights were ignored, disbelieved and mocked during their first couple of successful flights, the press too skeptical to accept what was clear as the sky if only they would open their eyes.

Puzzlingly, Lanchester is of the opinion that SpaceX founder Elon Musk is less than a household name, a curious stance since the Iron Man avatar is one of the most famous people on Earth, receiving the kind of wide acclaim before coming anywhere near Mars that was denied the Wrights even after they successfully took flight at Kitty Hawk. Just strange.

Otherwise it’s a very well-written piece, and one that astutely points out that tinkerers today who want to do more than merely create apps often need a planeload of cash, something the Wrights didn’t require. Perhaps 3-D printers will change that?

A passage in which Lanchester compares the siblings to their spiritual descendant:

When David McCullough’s book came out, it went straight to the top of the US bestseller list, taking up a position right next to Ashlee Vance’s biography of Elon Musk. At which point you may well be asking, who he? The answer is that Musk is the South African-born entrepreneur who runs three of the most interesting companies in America, in the fields of clean energy and interplanetary exploration: SolarCity (solar batteries), Tesla (electric cars), and SpaceX (commercial spaceflight). It’s the third of these companies which is the maddest and most entertaining. Where most corporate mission statements are so numbing they’d be useful as a form of medical anaesthesia, SpaceX’s is ‘creating the technology needed to establish life on Mars’. ‘I would like to die thinking that humanity has a bright future,’ Musk explained to Vance. ‘“If we can solve sustainable energy and be well on our way to becoming a multiplanetary species with a self-sustaining civilisation on another planet – to cope with a worst-case scenario happening and extinguishing human consciousness – then,” and here he paused for a moment, “I think that would be really good.”’

There are a number of suggestive parallels between Musk and the Wrights, beyond the obvious ones to do with an interest in flight. The bishop had very high standards and set no limits on the intellectual curiosity he encouraged in his children; Musk’s father had the same standards and the same insistence on no limits, but was (is) a tortured and difficult presence, ‘good at making life miserable’, in Musk’s words: ‘He can take any situation no matter how good it is and make it bad.’ The Wrights were poorish, the Musks affluentish, but both grew up with an emphasis on learning things first-hand. ‘It is remarkable how many different things you can get to explode,’ Musk says about his childhood experiments. ‘I’m lucky I have all my fingers.’ One very odd thing is a parallel to do with bullies: Musk was set on and beaten half to death by a gang of thugs at his school in Johannesburg; Wilbur Wright was attacked so badly at the age of 18 – beaten with a hockey stick – that he took years to recover from his injuries and missed a college education as a result. His assailant, Oliver Crook Haugh, went on to become a notorious serial killer. Something about these very bright young men set off the bullies’ hatred for difference.

The Wrights took calculated risks. Musk does the same.•


If Donald Trump grew a small, square mustache above his lip, would his poll numbers increase yet again? For a candidate running almost purely on attention, can any shock really be deleterious?

Howard Dean was the first Internet candidate and Barack Obama the initial one to ride those new rules to success. But things have already markedly changed: That was a time of bulky machines on your lap, and the new political reality rests lightly in your pocket. A smartphone’s messages are brief and light on details, and its buzzing is more important than anything it delivers.

The diffusion of media was supposed to make it impossible for a likable incompetent like George W. Bush to rise. How could such a person survive the scrutiny of millions of “citizen journalists” like us? If anything, it’s made it easier, even for someone who’s unlikable and incompetent. For a celeb with a Reality TV willingness to be ALL CAPS all the time, facts get lost in the noise, at least for a while.

That doesn’t mean Donald Trump, an adult baby with an attention span that falls somewhere far south of 15 months, will be our next President, but it does indicate that someone ridiculously unqualified and hugely bigoted gets to be on the national stage and inform our political discourse. The same way Jenny McCarthy used her platform to play doctor and spearhead the anti-vaccination movement, Trump gets to be a make-believe Commander-in-Chief for a time.

Unsurprisingly, Nicholas Carr has written the best piece on the dubious democracy the new tools have delivered, a Politico Magazine article that analyzes election season in a time that favors a provocative troll, a “snapchat personality,” as he terms it. The opening:

Our political discourse is shrinking to fit our smartphone screens. The latest evidence came on Monday night, when Barack Obama turned himself into the country’s Instagrammer-in-Chief. While en route to Alaska to promote his climate agenda, the president took a photograph of a mountain range from a window on Air Force One and posted the shot on the popular picture-sharing network. “Hey everyone, it’s Barack,” the caption read. “I’ll be spending the next few days touring this beautiful state and meeting with Alaskans about what’s going on in their lives. Looking forward to sharing it with you.” The photo quickly racked up thousands of likes.

Ever since the so-called Facebook election of 2008, Obama has been a pacesetter in using social media to connect with the public. But he has nothing on this year’s field of candidates. Ted Cruz live-streams his appearances on Periscope. Marco Rubio broadcasts “Snapchat Stories” at stops along the trail. Hillary Clinton and Jeb Bush spar over student debt on Twitter. Rand Paul and Lindsey Graham produce goofy YouTube videos. Even grumpy old Bernie Sanders has attracted nearly two million likers on Facebook, leading the New York Times to dub him “a king of social media.”

And then there’s Donald Trump. If Sanders is a king, Trump is a god. A natural-born troll, adept at issuing inflammatory bulletins at opportune moments, he’s the first candidate optimized for the Google News algorithm.•


There’s good news about life on Earth after climate change, but first the bad news: Death, massive amounts of death.

As Lizzie Wade states in her smart Wired article, we’ll likely be around to see the disaster we’ve created, but we don’t have a great shot at waiting out the recovery. That will take eons. The positive side doesn’t involve us, but rather the creatures that may thrive and replenish the landscape after we’re gone. But first they’ll have to survive us. Godspeed to them. An excerpt:

The flip side of mass extinction, however, is rapid evolution. And if you’re willing to take the long view—like, the million-year long view—there’s a ray of hope to be found in today’s rare species. The Amazon, in particular, is packed with plant species that pop up few and far between and don’t even come close to playing a dominant role in the forest. But they might have treasure buried in their genes.

Rare species—especially those that are only distantly related to today’s common ones—“have all kind of traits that we don’t even know about,” says [evolutionary geneticist Christopher] Dick. Perhaps one will prove to thrive in drought, and another will effortlessly resist new pests that decimate other trees. “These are the species that have all the possibilities for becoming the next sets of dominant, important species after the climate has changed,” Dick says.

That’s why humans can’t cut them all down first, he argues. If rainforests are going to have a fighting chance of recovering their biodiversity and ecological complexity, those rare species and their priceless genes need to be ready and able to step into the spotlight. It might be too late to save the world humanity knows and loves. But it can still do its best to make sure the new one is just as good—someday.•


A digitized Automat with no visible workers roughly describes Eatsa, a San Francisco fast-casual eatery for tomorrow that exists today. Tamara Palmer of Vice visited the restaurant and found it “much more reminiscent of an Apple store than a fast food franchise.” Its design may be too cool to work everywhere in America, but I bet some variation of it will. Sooner or later, Labor in the sector will be noticeably dinged by technological unemployment. The opening:

People often muse on a future controlled by machines, but that is already well in motion here in the Bay Area, where hotels are employing robot butlers, Google and Tesla are putting driverless vehicles on the road, and apps that live every aspect of your life for you continue to proliferate. The rush to put an end to human contact is at a fever pitch around these parts, where a monied tech elite has the deep pockets to support increasingly absurd services.

Right on trend, this week marks the debut of Eatsa, a quick-service quinoa bowl “unit” (as one owner called it) billing itself as San Francisco’s premiere “automated cafe.”

I attended a media preview lunch at Eatsa last week to test out the concept before the doors officially opened. Pushing a button to summon an Uber ride to my door, I wondered how good automated food might be.

I realized it doesn’t really matter, because as California inches towards a $15 per hour minimum wage, that’s the direction we’re headed in, starting with a people-free fast food world.•


I’m mixing my 20th-century sci-fi authors, but like Billy Pilgrim naked in a Tralfamadore zoo, we may be kept as pets by intelligent machines. That’s what the Philip K. Dick android, which can learn new words in real time, promises its NOVA interlocutors.

Or perhaps they’ll eliminate us. Or maybe by the time they exist, we will be very different. We might become those conscious machines we so fear. We might be them. Nobody knows.

My first Virtual Reality experience was during the 1990s while working in a non-profit media place that had a clunky VR helmet for visitors to experience. One guest was rock icon Lou Reed, who sat in a chair and pulled the device over his head. He paused a moment, and then said to the woman who was assisting, “What happens now? Does someone pull on my cock?”

Perhaps because it didn’t come with free tug jobs or maybe because the technology was still lacking, Virtual Reality was a bomb two decades ago. Those who’ve tested the latest models are awed by what years of development and greater computing power have wrought. The tool certainly could be a tremendous boon to education, but you could say the same of gaming, and that’s never been leveraged correctly.

The opening of “Grand Illusions,” an Economist report:

YOUR correspondent stands, in a pleasingly impossible way, in orbit. The Earth is spread out beneath. A turn of the head reveals the blackness of deep space behind and above. In front is a table full of toys and brightly coloured building blocks, all of which are resolutely refusing to float away—for, despite his being in orbit, gravity’s pull does not seem to have vanished. A step towards the table brings that piece of furniture closer. A disembodied head appears, and a pair of hands offer a toy ray-gun. “Go on, shoot me with it,” says the head, encouragingly. Squeezing the trigger produces a flash of light, and the head is suddenly a fraction of its former size, speaking in a comic Mickey-Mouse voice (despite the lack of air in low-Earth orbit) as the planet rotates majestically below.

It is, of course, an illusion, generated by a virtual-reality (VR) company called Oculus. The non-virtual reality is a journalist wearing a goofy-looking headset and clutching a pair of controllers in a black, soundproofed room at a video-gaming trade fair in Germany. But from the inside, it is strikingly convincing. The virtual world surrounds the user. A turn of the head shifts the view exactly as it should. Move the controllers and, in the simulation, a pair of virtual arms and hands moves with them. The disembodied head belongs to an Oculus employee in another room, who is sharing the same computer-generated environment. The blocks on the table obey the laws of physics, and can be stacked up and knocked down just like their real-world counterparts. The effect, in the words of one VR enthusiast, is “like sticking your head into a wormhole that leads to some entirely different place.”

Matrix algebra

The idea of virtual reality—of building a convincing computer-generated world to replace the boring old real one—has fuelled science fiction’s novels and movies since the 1950s. In the 1990s, as computers became commonplace, several big firms tried to build headsets as a first attempt to realise the idea. They failed. The feeble computers of the time could not produce a convincing experience. Users suffered from nausea and headaches, and the kit was expensive and bulky. Although VR found applications in a few bits of engineering and science, the consumer version was little more than a passing fad in the world’s video-game arcades. But now a string of companies are betting that information technology, both hardware and software, has advanced enough to have another go. They are convinced that their new, improved virtual reality will shake up everything from video-gaming to social media, and from films to education.•

 

Terrible products that fail miserably delight us not only because of the time-tested humor of a spectacular pratfall, but because it’s satisfying to feel now and then that we’re not just a pack of Pavlovian dogs prepared to lap up whatever is fed us, especially if it’s a Colgate Ready Meal and a Crystal Pepsi.

In a really smart Financial Times column, Tim Harford takes a counterintuitive look at how companies can avoid launching surefire duds. The usual method has been to find out which products representative people want, but he writes of an alternative strategy: Discover what consumers with horrible taste embrace, and then bury those products deep in a New Mexico desert alongside Atari’s E.T. video games. Of course, it does say something that companies can’t just identify what’s awful. Why do almost all businesses become echo chambers?

An excerpt:

If savvy influential consumers can help predict a product’s success, might it not be that there are consumers whose clammy embrace spells death for a product? It’s a counter-intuitive idea at first but, on further reflection, there’s a touch of genius about it.

Let’s say that some chap — let’s call him “Herb Inger” — simply adored Clairol’s Touch of Yogurt shampoo. He couldn’t get enough of Frito-Lay’s lemonade (nothing says “thirst-quenching” like salty potato chips, after all). He snapped up Bic’s range of disposable underpants. Knowing this, you get hold of Herb and you let him try out your new product, a zesty Cayenne Pepper eyewash. He loves it. Now you know all you need to know. The product is doomed, and you can quietly kill it while it is still small enough to drown in your bathtub.

A cute idea in theory — does it work in practice? Apparently so. Management professors Eric Anderson, Song Lin, Duncan Simester and Catherine Tucker have studied people, such as Herb, whom they call “Harbingers of Failure.” (Their paper by that name is forthcoming in the Journal of Marketing Research.) They used a data set from a chain of more than 100 convenience stores. The data covered more than 100,000 customers with loyalty cards, more than 10 million transactions and nearly 10,000 new products. Forty per cent of those products were no longer stocked after three years, and were defined as “flops.”•
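The underlying bookkeeping is easy to mock up. Here is a toy sketch of my own–invented shoppers and products, not the authors’ data or methods–that scores each loyalty-card customer by how often the new products they bought went on to flop, then uses those scores as an early-warning signal for a product still on the shelves.

```python
# Toy "harbingers of failure" sketch with invented data: score customers by the
# flop rate of the new products they bought, then rate a current product by the
# average score of its buyers. Not the researchers' data set or code.
from collections import defaultdict

purchases = [                      # (customer, new product they bought)
    ("herb", "yogurt_shampoo"), ("herb", "lemonade_chips"), ("herb", "pepper_eyewash"),
    ("ana", "yogurt_shampoo"), ("ana", "cola"), ("ana", "granola"),
    ("raj", "cola"), ("raj", "granola"), ("raj", "pepper_eyewash"),
]
flopped = {"yogurt_shampoo": True, "lemonade_chips": True, "cola": False, "granola": False}

# 1. Harbinger score: share of a customer's already-resolved purchases that flopped.
bought = defaultdict(list)
for customer, product in purchases:
    bought[customer].append(product)

harbinger_score = {}
for customer, items in bought.items():
    resolved = [p for p in items if p in flopped]
    harbinger_score[customer] = sum(flopped[p] for p in resolved) / len(resolved)

# 2. Risk signal for a product whose fate isn't known yet.
def flop_risk(product):
    buyers = [c for c, p in purchases if p == product]
    return sum(harbinger_score[c] for c in buyers) / len(buyers)

print(harbinger_score)              # herb: 1.0, ana: ~0.33, raj: 0.0
print(flop_risk("pepper_eyewash"))  # 0.5 -- its early fans skew toward flop-lovers
```

On this accounting, Herb’s enthusiasm for the cayenne eyewash is bad news for the eyewash.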


The future seldom arrives in a hurry, which is usually a good thing from a practical standpoint. Today and tomorrow don’t always mix so well.

In an opinion piece at The Conversation, David Glance of the University of Western Australia argues that fears of near-term technological unemployment are overstated. He may be right in the big picture, but if just one significant area is automated in short order, defying business-as-usual stasis–driverless cars is the most obvious example–a large swath of Labor will be blown sideways.

From Glance:

The trouble with predicting the future is that the more dramatic the prediction the more likely the media will pick it up and amplify it in the social media-fed echo chamber. What is far less likely to be reported are the predictions that emphasise that it is unlikely that things will change that radically because of the massive inertia that is built into industry, governments and the general workers’ appetite for change.

Economists at the OECD may have another explanation for why it is unwise to equate the fact that something “could” be done with the fact that it “will” be done. In a report on the future of productivity, the authors detail how it is only a small number of “frontier companies” that have managed to implement changes to achieve high levels of productivity growth. The companies that haven’t achieved anywhere near the same productivity growth are the “non-frontier companies” or simply “laggards.” The reasons for this are probably many but lack of leadership, vision, skills or ability may factor into it.

The point is that since 2000 many companies didn’t adopt technology and change their business processes to see improvements in productivity even though they clearly “could” have done.•


Some people don’t know how to accept a gift. America has many such people in its government, as apparently do numerous other developed nations.

One of the few upsides to the colossal downside of the 2008 economic collapse is the rock-bottom interest rates that offer countries the opportunity to rebuild their infrastructure at virtually no added cost. It’s a tremendous immediate stimulus that also pays long-term dividends. But deficit hawks have made it impossible for President Obama to take advantage of this rare and relatively short-term opportunity. While some of it is certainly partisanship, it does seem like a large number of elected officials have pretty much no idea of basic economics.

From the Economist:

IT IS hard to exaggerate the decrepitude of infrastructure in much of the rich world. One in three railway bridges in Germany is over 100 years old, as are half of London’s water mains. In America the average bridge is 42 years old and the average dam 52. The American Society of Civil Engineers rates around 14,000 of the country’s dams as “high hazard” and 151,238 of its bridges as “deficient”. This crumbling infrastructure is both dangerous and expensive: traffic jams on urban highways cost America over $100 billion in wasted time and fuel each year; congestion at airports costs $22 billion and another $150 billion is lost to power outages.

The B20, the business arm of the G20, a club of big economies, estimates that the global backlog of spending needed to bring infrastructure up to scratch will reach $15 trillion-20 trillion by 2030. McKinsey, a consultancy, reckons that in 2007-12 investment in infrastructure in rich countries was about 2.5% of GDP a year when it should have been 3.5%. If anything, the problem is becoming more acute as some governments whose finances have been racked by the crisis cut back. In 2013 in the euro zone, general government investment—of which infrastructure constitutes a large part—was around 15% below its pre-crisis peak of €3 trillion ($4 trillion), according to the European Commission, with drops as high as 25% in Italy, 39% in Ireland and 64% in Greece. In the same year government spending on infrastructure in America, at 1.7% of GDP, was at a 20-year low.

This is a missed opportunity. Over the past six years, the cost of repairing old infrastructure or building new projects has been much cheaper than normal, thanks both to rock-bottom interest rates and ample spare capacity in the construction industry.•

Sad to hear of the passing of Dr. Oliver Sacks, the neurologist and writer, who made clear in his case studies that the human brain, a friend and a stranger, was as surprising as any terrain we could ever explore. It feels like we’ve not only lost a great person, but one who was uniquely so. He became hugely famous with the publication of his 1985 collection, The Man Who Mistook His Wife For A Hat, which built upon the template of A.R. Luria’s work with better writing and a wider array of investigations. Two years prior, he published an essay in the London Review of Books that became the title piece. An excerpt:

I stilled my disquiet, his perhaps too, in the soothing routine of a neurological exam – muscle strength, co-ordination, reflexes, tone. It was while examining his reflexes – a trifle abnormal on the left side – that the first bizarre experience occurred. I had taken off his left shoe and scratched the sole of his foot with a key – a frivolous-seeming but essential test of a reflex – and then, excusing myself to screw my ophthalmoscope together, left him to put on the shoe himself. To my surprise, a minute later, he had not done this.

‘Can I help?’ I asked.

‘Help what? Help whom?’

‘Help you put on your shoe.’

‘Ach,’ he said, ‘I had forgotten the shoe,’ adding, sotto voce: ‘The shoe! The shoe?’ He seemed baffled.

‘Your shoe,’ I repeated. ‘Perhaps you’d put it on.’

He continued to look downwards, though not at the shoe, with an intense but misplaced concentration. Finally his gaze settled on his foot: ‘That is my shoe, yes?’

Did I mishear? Did he mis-see? ‘My eyes,’ he explained, and put a hand to his foot. ‘This is my shoe, no?’

‘No, it is not. That is your foot. There is your shoe.’

‘Ah! I thought that was my foot.’

Was he joking? Was he mad? Was he blind? If this was one of his ‘strange mistakes’, it was the strangest mistake I had ever come across.

I helped him on with his shoe (his foot), to avoid further complication. Dr P. himself seemed untroubled, indifferent, maybe amused. I resumed my examination. His visual acuity was good: he had no difficulty seeing a pin on the floor, though sometimes he missed it if it was placed to his left.

He saw all right, but what did he see? I opened out a copy of the National Geographic Magazine, and asked him to describe some pictures in it. His eyes darted from one thing to another, picking up tiny features, as he had picked up the pin. A brightness, a colour, a shape would arrest his attention and elicit comment, but it was always details that he saw – never the whole. And these details he ‘spotted’, as one might spot blips on a radar-screen. He had no sense of a landscape or a scene.

I showed him the cover, an unbroken expanse of Sahara dunes.

‘What do you see here?’ I asked.

‘I see a river,’ he said. ‘And a little guesthouse with its terrace on the water. People are dining out on the terrace. I see coloured parasols here and there.’ He was looking, if it was ‘looking’, right off the cover, into mid-air, and confabulating non-existent features, as if the absence of features in the actual picture had driven him to imagine the river and the terrace and the coloured parasols.

I must have looked aghast, but he seemed to think he had done rather well. There was a hint of a smile on his face. He also appeared to have decided the examination was over, and started to look round for his hat. He reached out his hand, and took hold of his wife’s head, tried to lift it off, to put it on. He had apparently mistaken his wife for a hat!•


Wernher von Braun wasn’t worried about helping to murder thousands of people, but he was concerned about the solitude of astronauts during space travel. Odd priorities.

The philosophical spelunker Michel Siffre went so far as to embed himself in caves and glaciers for months at a time in the 1960s and 1970s to understand prolonged isolation. Time stopped having meaning for him. The pristine terrain he ultimately explored was inside his own head.

It’s perplexing in this age of robotics that extended space trips to Mars and the like need to include humans at all. Robot-only missions are far cheaper and can collect the same information. While colonization is the ultimate goal, it needn’t be the immediate one.

But we’re likely going up sooner rather than later, since peopled space flights are an easier sell. They flatter us, remind us of ourselves. The loneliness of the long-distance “runner” is therefore a complicated problem for NASA and private programs. The longest such experiment testing human endurance in seclusion has just begun.

From the BBC:

A team of NASA recruits has begun living in a dome near a barren volcano in Hawaii to simulate what life would be like on Mars.

The isolation experience, which will last a year starting on Friday, will be the longest of its type attempted.

Experts estimate that a human mission to the Red Planet could take between one and three years.

The six-strong team will live in close quarters under the dome, without fresh air, fresh food or privacy.

They closed themselves away at 15:00 local time on Friday (01:00 GMT Saturday).

A journey outside the dome – which measures only 36ft (11m) in diameter and is 20ft (6m) tall – will require a spacesuit.

A French astrobiologist, a German physicist and four Americans – a pilot, an architect, a journalist and a soil scientist – make up the NASA team.

The men and women will each have a small sleeping cot and a desk inside their rooms. Provisions include powdered cheese and canned tuna.•


Back when people were impressed by those who possessed lots of fairly useless facts, I was always good at trivia, and it never once made me feel smart or satisfied. Because it was just a parlor trick, really. Read a lot and in an irregular pattern and you too can be crammed with minutiae. Now that everyone can look up every last thing on their phones in just seconds, all of life has become an open-book test. Trivial knowledge is (thankfully) no longer valued.

From Douglas Coupland’s FT column about his participation in a Trivia Night contest:

The larger question for me during the trivia contest evening was, “Wait — we used to have all of this stuff stored in our heads but now, it would appear, we don’t. What happened?” The answer is that all of this crap is still inside our heads — in fact, there’s probably more crap than ever inside our heads — it’s just that we view it differently now. It’s been reclassified. It’s not trivia any more: it’s called the internet and it lives, at least for the foreseeable future, outside of us. The other thing that happened during the trivia contest is the realisation that we once had a thing called a-larger-attention-span-than-the-one-we-now-have. Combine these two factors together and we have a reasonably good reason to explain why a game of trivia in 2015 almost feels like torture. I sat there with four other reasonably bright people, not necessarily knowing the answers to all of the questions, but knowing that the answers, no matter how obtuse, could be had in a few seconds without judgment on my iPhone 6 Plus. But then I decided the evening was also a good reminder of how far things have come since the early 1980s heyday of the board game Trivial Pursuit.

Q: What country is north, east, south and west of Finland?

A: Norway.

Q: Clean, Jerk and Snatch are terms used in which sport?

A: Weightlifting.

Q: Why was trivia such a big thing in the late 20th century?

A: Because society was generating far more information than it was generating systems with which to access that information. People were left with constellations of disconnected, randomly stored facts that could leave one feeling overwhelmed. Trivia games flattered 20th-century trivia players by making them feel that there was both value to having billions of facts in one’s head, and that they were actually easily retrieved. But here in 2015 we know that facts are simply facts. We know where they’re stored and we know how to access them. If anything, we’re a bit ungrateful, given that we know the answer to just about everything.•


It’s logical if not desirable that war becomes more automated, since it only takes one nation pursuing the dream of a robot army to detonate a new arms race. I’ve thought more about weapons systems discrete from human beings than I have about enhanced soldiers, but the U.S. Army Research Laboratory has already given great consideration to the latter. The recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” imagines fewer of us going into battle, but those that do being “super humans” augmented by exoskeletons, implants and internal sensors. It certainly ranges into what currently would be considered sci-fi territory.

From Patrick Tucker at Defense One:

People, too, will be getting a technological upgrade. “The battlefield of the future will be populated by fewer humans, but these humans would be physically and mentally augmented with enhanced capabilities that improve their ability to sense their environment, make sense of their environment, and interact with one another, as well as with ‘unenhanced humans,’ automated processes, and machines of various kinds,” says the report.

What exactly constitutes an enhanced human is a matter of technical dispute. After all, night-vision goggles represent a type of enhancement, as does armor. The military has no problem discussing future plans in those areas, but what the workshop participants anticipate goes well beyond flak jackets and gear. …

The report envisions enhancement taking several robotic steps forward. “To enable humans to partner effectively with robots, human team members will be enhanced in a variety of ways. These super humans will feature exoskeletons, possess a variety of implants, and have seamless access to sensing and cognitive enhancements. They may also be the result of genetic engineering. The net result is that they will have enhanced physical capabilities, senses, and cognitive powers. The presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution,” says the report.•


Attempting to reverse aging–even defeat death–seems like science fiction to most, but it’s just science to big-picture gerontologist Aubrey de Grey, who considers himself a practical person. Given enough time, it certainly makes sense that radical life-extension will be realized, but the researcher is betting the march toward a-mortality will begin much sooner than expected. It frustrates him to no end that governments and individuals alike usually don’t accept death as a sickness to be treated. Some of those feelings boiled over when he was interviewed by The Insight. An excerpt:

The Insight:

I’m interested in the psychology of people, I guess you can put them into two camps: one doesn’t have an inherent understanding of what you’re doing or saying, and the other camp willingly resign themselves to living a relatively short life.

You’ve talked to a whole wealth of people and come across many counter-opinions, have any of them had any merit to you, have any of them made you take a step back and question your approach?

Aubrey de Grey:

Really, no. It’s quite depressing. At first, really, I was my own only affective critic for the feasibility – certainly never a case or example of an opinion that amounted to a good argument against the desirability of any of this work; that was always 100% clear to me, that it would be crazy to consider this to be a bad idea. It was just a question of how to go about it. All of the stupid things that people say, like, “Where would we put all the people?” or, “How would we pay the pensions?” or, “Is it only for the rich?” or, “Wont dictators live forever?” and so on, all of these things… it’s just painful. Especially since most of these things have been perfectly well answered by other people well before I even came along. So, it’s extraordinarily frustrating that people are so wedded to the process of putting this out of their minds, by however embarrassing their means; coming up with the most pathetic arguments, immediately switching their brains off before realising their arguments might indeed be pathetic.

The Insight:

It might be a very obvious question, but it just sprung to mind – maybe you’ve been asked this before, it’s extremely philosophical and speculative – what do you think happens when you die?

Aubrey de Grey:

Oh, fuck off. I don’t give a damn. I’m a practical kind of guy – I’m not intending to be that experiment.•


The main difference between rich people and poor people is that rich people have more money. 

That’s it, really. Those with wealth are just as likely to form addictions, get divorces and engage in behaviors we deem responsible for poverty. They simply have more resources to fall back on. People without that cushion often land violently, land on the streets. Perhaps they should be extra careful since they’re in a more precarious position, but human beings are human beings: flawed. 

In the same ridiculously simple sense, homeless people are in that condition because they don’t have homes. A lot of actions and circumstances may have contributed to that situation, but the home part is the piece of the equation we can actually change. The Housing First initiative has proven thus far that it’s good policy to simply provide homes to people who have none. It makes sense in both human and economic terms. But it’s unpopular in the U.S. because it falls under the “free lunch” rubric, despite having its roots in the second Bush Administration. Further complicating matters is the shortage of urban housing in general.

In a smart Aeon essay, Susie Cagle looks at the movement, which has notably taken root in the conservative bastion of Utah, a state which has reduced homelessness by more than 90% in just ten years. An excerpt:

A new optimistic ideology has taken hold in a few US cities – a philosophy that seeks not just to directly address homelessness, but to solve it. During the past quarter-century, the so-called Housing First doctrine has trickled up from social workers to academics and finally to government. And it is working. On the whole, homelessness is finally trending down.

The Housing First philosophy was first piloted in Los Angeles in 1988 by the social worker Tanya Tull, and later tested and codified by the psychiatrist Sam Tsemberis of New York University. It is predicated on a radical and deeply un-American notion that housing is a right. Instead of first demanding that they get jobs and enroll in treatment programmes, or that they live in a shelter before they can apply for their own apartments, government and aid groups simply give the homeless homes.

Homelessness has always been more a crisis of empathy and imagination than one of sheer economics. Governments spend millions each year on shelters, health care and other forms of triage for the homeless, but simply giving people homes turns out to be far cheaper, according to research from the University of Washington in 2009. Preventing a fire always requires less water than extinguishing it once it’s burning.

By all accounts, Housing First is an unusually good policy. It is economical and achievable.•


The square-jawed hero astronauts of 1960s NASA went through marriages at a pretty ferocious clip, as you might expect from careerist monomaniacs, but none has had a more colorful, complicated life than Buzz Aldrin, who successfully walked on the moon but failed at selling used cars after he fell to Earth with a thud. Dr. Aldrin, as he prefers to be called, is now spearheading a plan to build Mars colonies. 

From Marcia Dunn at the AP:

MELBOURNE, Fla. (AP) — Buzz Aldrin is teaming up with Florida Institute of Technology to develop “a master plan” for colonizing Mars within 25 years.

The second man to walk on the moon took part in a signing ceremony Thursday at the university, less than an hour’s drive from NASA’s Kennedy Space Center. The Buzz Aldrin Space Institute is set to open this fall.

The 85-year-old Aldrin, who followed Neil Armstrong onto the moon’s surface on July 20, 1969, will serve as a research professor of aeronautics as well as a senior faculty adviser for the institute.

He said he hopes his “master plan” is accepted by NASA and the country, with international input. NASA already is working on the spacecraft and rockets to get astronauts to Mars by the mid-2030s.

Aldrin is pushing for a Mars settlement by approximately 2040. More specifically, he’s shooting for 2039, the 70th anniversary of his own Apollo 11 moon landing, although he admits the schedule is “adjustable.”

He envisions using Mars’ moons, Phobos and Deimos, as preliminary stepping stones for astronauts. He said he dislikes the label “one-way” and imagines tours of duty lasting 10 years.•


There are many reasons, some more valid than others, that people are wary of so-called free Internet services like Facebook and Google, those companies of great utility which make money not through direct fees but by collecting our information and encouraging us to create content we’re not paid for.

Foremost, there are fears about surveillance, which I think are very valid. Hacks have already demonstrated how porous the world is now and what’s to come. More worrisome, beyond the work of rogue agents it’s clear the companies themselves cooperated in myriad ways with the NSA in handing over intel, some of which may have been necessary and most of which is troubling. Larry Page has said that we should trust the “good companies” with our information, but we shouldn’t trust any of them. Of course, there’s almost no alternative but to allow them into our lives and play by their rules.

Of course, the government isn’t alone in desiring to learn more about us. Advertisers certainly want to and long have, but there’s never before been this level of accessibility, this collective brain to be picked. These companies aren’t just looking to peddle information but also procure it. Their very existence depends on coming up with better and subtler ways of quantifying us.

I think another reason these businesses give us pause isn’t something they actually do but what they remind us of: Our anxieties about the near-term future of Labor. By getting us to “work” for nothing and create content, they tell us that even information positions have been reduced, discounted. The dwindling of good-paying jobs, the Gig Economy and the fall of the middle class all seem to be encapsulated in this new arrangement. When a non-profit like Wikipedia does it, it can be destabilizing but doesn’t seem sinister. The same can’t be said for Silicon Valley giants.

In his latest Financial Times blog post, Andrew McAfee makes an argument in favor of these zero-cost services, which no doubt offer value, though I believe he gives short shrift to privacy concerns.

An excerpt:

Web ads are much more precisely targeted at me because Google and Facebook have a lot of information about me. This thrills advertisers, and it’s also OK with me; once in a while I actually see something interesting. Yes, we are “the product” in ad-supported businesses. Only the smallest children are unaware of this.

The hypothetical version of the we’re-being-scammed argument is that the giant tech companies are doing or planning something opaque and sinister with all that data that we’re giving them. As law professor Tim Wu wrote recently about Facebook: “[T]he data is also an asset. The two-hundred-and-seventy-billion-dollar valuation of Facebook, which made a profit of three billion dollars last year, is based on some faith that piling up all of that data has value in and of itself… One reason Mark Zuckerberg is so rich is that the stock market assumes that, at some point, he’ll figure out a new way to extract profit from all the data he’s accumulated about us.”

It’s true that all the information about me and my social network that these companies have could be used to help insurers and credit-card companies pick customers and price discriminate among them. But they already do that, and do it within the confines of a lot of regulation and consumer protection. I’m just not sure how much “worse” it would get if Google, Facebook and others started piping them our data.•

 


Impresario is what they used to call those like Steve Ross of Warner Communications, whose mania for mergers gave him a hand in a large number of media and entertainment ventures, making him boss and handler at different times to the Rolling Stones, Pelé and Dustin Hoffman. One of the businesses the erstwhile funeral-parlor entrepreneur became involved with was Qube, an interactive cable-TV project that was a harbinger, if also a money-loser. That enterprise and many others are covered in a brief 1977 People profile. The opening:

In our times, the courtships and marriages that make the earth tremble are no longer romantic but corporate. The most legendary (or lurid) figures are not the Casanovas today. They are the conglomerateurs, and for sheer seismic impact on the popular culture, none approaches Steven J. Ross, 50, the former slacks salesman who married into a mortuary chain business that he parlayed 17 years later into Warner Communications Inc. (WCI). In founder-chairman Ross’s multitentacled clutch are perhaps the world’s predominant record division (with artists like the Eagles, Fleetwood Mac, the Rolling Stones, Led Zeppelin and Joni Mitchell); one of the Big Three movie studios (its hot fall releases include Oh, God! and The Goodbye Girl); a publishing operation (the paperback version of All the President’s Men, which was also a Warner Bros, film); the Atari line of video games like Pong, which inadvertently competes with Warner’s own TV producing arm, whose credits include Roots, no less. The conglomerate is furthermore not without free-enterprising social consciousness (WCI put up $1 million and owns 25 percent of Ms. magazine) or a redeeming sense of humor (it disseminates Mad).

Warner’s latest venturesome effort is bringing the blue-sky of two-way cable TV down to earth in a limited experiment in Columbus, Ohio. There, subscribers are able to talk back to their TV sets (choosing the movie they want to see or kibitzing the quarterback on his third-down call). An earlier Ross vision—an estimated $4.5 million investment in Pelé by Warner’s New York Cosmos—was, arguably, responsible for soccer’s belated breakthrough in the U.S. this year after decades of spectator indifference. Steve is obviously in a high-rolling business—Warners’ estimated annual gross is approaching a billion—and so the boss is taking his. Financial writer Dan Dorfman pegs Ross’s personal ’77 earnings at up to $5 million. That counts executive bonuses but not corporate indulgences. On a recent official trip to Europe in the Warner jet, Steve brought along his own barber for the ride.

En route to that altitude back in the days of his in-laws’ funeral parlor operation, Ross expanded into auto rentals (because he observed that their limos were unprofitably idle at night) and then into Kinney parking lots. “The funeral business is a great training ground because it teaches you service,” he notes, though adding: “It takes as much time to talk a small deal as a big deal.” So, come the ’70s, Ross dealt away the mortuary for the more glamorous show world. Alas, too, he separated from his wife. •
