Science/Tech


Most scenarios of AI dominance end, for humans, with extinction, but Steve Wozniak no longer feels that way, believing we can lose the war but be happy captives–pets, even. His scenario seems unlikely. From Samuel Gibbs at the Guardian:

Apple’s early-adopting, outspoken co-founder Steve Wozniak thinks humans will be fine if robots take over the world because we’ll just become their pets.

After previously stating that a robotic future powered by artificial intelligence (AI) would be “scary and very bad for people” and that robots would “get rid of the slow humans,” Wozniak has staged a U-turn and says he now thinks robots taking over would be good for the human race.

“They’re going to be smarter than us and if they’re smarter than us then they’ll realise they need us,” Wozniak said at the Freescale technology forum in Austin. “We want to be the family pet and be taken care of all the time.” …

For Wozniak, it will be “hundreds of years” before AI is capable of taking over, but that by the time it does it will no longer be a threat to our existence: “They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally.”•


In a recent episode of EconTalk, host Russ Roberts invited journalist Adam Davidson of the New York Times to discuss, among other things, his recent article, “What Hollywood Can Teach Us About the Future of Work.” In this “On Money” column, Davidson argues that short-term Hollywood projects–a freelance, piecemeal model–may be a wave of the future. The writer contends that this is better for highly talented workers and worrisome for the great middle. I’ll agree with the latter, though I don’t think the former is as uniformly true as Davidson believes. In life, stuff happens that talent cannot save you from, that the market will not provide for.

What really perplexed me about the program was the exchange at the end, when the pair acknowledges being baffled by Uber’s many critics. I sort of get it with Roberts. He’s a Libertarian who loves the unbridled nature of the so-called Peer Economy, luxuriating in a free-market fantasy that most won’t be able to enjoy. I’m more surprised by Davidson calling Uber a “solution” to the crisis of modern work, in which contingent positions have replaced FT posts in the aftermath of the 2008 financial collapse. You mean it’s a solution to a problem it’s contributed to? It seems a strange assertion given that Davidson has clearly demonstrated his concern about the free fall of the middle class in a world in which rising profits have been uncoupled from hiring.

The reason why Uber is considered an enemy of Labor is because Uber is an enemy of Labor. Not only are medallion owners and licensed taxi drivers (whose rate is guaranteed) hurt by ridesharing, but Uber’s union-less drivers are prone to pay decreases at the whim of the company (which may be why about half the drivers became “inactive”–quit–within a year). And the workers couldn’t be heartened by CEO Travis Kalanick giddily expressing his desire to be rid of all of them before criticism intruded on his obliviousness, and he began to pretend to be their champion for PR purposes.

The Sharing Economy (another poor name for it) is probably inevitable and Uber and driverless cars are good in many ways, but they’re not good for Labor. If Roberts wants to tell small-sample-size stories about drivers he’s met who work for Uber just until their start-ups receive seed money and pretend that they’re the average, so be it. The rest of us need to be honest about what’s happening so we can reach some solutions to what might become a widespread problem. If America’s middle class is to be Uberized, to become just a bunch of rabbits to be tasked, no one should be satisfied with the new normal.

From EconTalk:

Russ Roberts:

A lot of people are critical of the rise of companies like Uber, where their workforce is essentially piece workers. Workers who don’t earn an annual salary. They’re paid a commission if they can get a passenger, if they can take someone somewhere, and they don’t have long-term promises about, necessarily, benefits. They have to pay for their own car, provide their own insurance, and a lot of people are critical of that, and my answer is, Why do people do it if it’s so awful? That’s really important. But I want to say something slightly more optimistic about it which is a lot of people like Uber, working for Uber or working for a Hollywood project for six months, because when it’s over they can take a month off or a week off. A lot of the people I talk to who drive for Uber are entrepreneurs, they’re waiting for their funding to come through, they’re waiting for something to happen, and they might work 80 hours a week while they’re waiting and when the money comes through or when their idea starts to click, they’re gonna work five hours a week, and then they’ll stop, and they don’t owe any loyalty to anyone, they can move in and out of work as they choose. I think there’s a large group of people who really love that. And that’s a feature for many people, not a bug. What matters is–beside your satisfaction and how rewarding your life is emotionally in that world–your financial part of it depends on what you make while you’re working. It’s true it’s only sort of part-time, but if you make enough, and evidently many Uber drivers are former taxi drivers who make more money with Uber for example, if you make enough, it’s great, so it seems to me that if we move to a world where people are essentially their own company, their own brand, the captain of their own ship rather than an employee, there are many good things about that as long as they have the skills that are in demand that people are willing to pay for. Many people unfortunately will not have those skills. It’s a serious issue, but for many people those are enormous pluses, not minuses.

Adam Davidson:

Yes, I agree with you. Thinking of life as an Uber driver with that as your only possible source of income, I would guess that might be tough. Price competition is not gonna be your friend. Thinking about a world where you have a whole bunch of options, including Task Rabbit, and who knows what else, Airbnb, to earn money in a variety of ways, that’s at various times and at various levels of intensity, that strikes me as only good. If we could shove that into the 1950s, I think you would have seen a lot more people leaving that corporate model and starting their own businesses or spending more time doing more creative endeavors. That all strikes me as a helpful tool. It does sound like some of the people who work at Uber have kind of been jerks, but it does seem strange to me that some people are mad at the company that’s providing this opportunity. It is tough that lots of Americans are underemployed and aren’t earning enough. That’s a bad situation, but it is confusing to me that we get mad at companies that are providing a solution.•


Despite what some narratives say, Bill Gates was completely right about the Internet and mobile. That doesn’t mean he’ll be correct about every seismic shift, but I think his intuition about autonomous cars is almost definitely accurate: Driverless functions will be useful if only partially realized and a societal game-changer if perfected. Just helpful or a total avalanche. In an interview conducted by Financial Times Deputy Editor John Thornhill, Gates discussed these matters, among many others. An excerpt from Shane Ferro’s article at Business Insider (which relies on Izabella Kaminska’s tweets from the event):

With regards to robots, the economy, and logistics, the takeaway seems to be that Gates thinks we’re in the fastest period of innovation ever, and it’s still unclear how that will affect the economy.

But there’s still quite a way to go. Robots “will be benign for quite some time,” Gates said. The future of work is not in immediate danger — although the outlook is not good for those who have a high school degree or less. 

Gates was also asked about Uber. He seems to think the real disruption to the driving and logistics industry is not going to come until we have fully driverless cars. That’s the “rubicon,” he says.

Kaminska relays that currently, Gates thinks that Uber “is just a reorganization of labour into a more dynamic form.” However, and this is big, Uber does have the biggest research and development budget out there on the driverless vehicle front. And that’s to its advantage.•


“We face a future in which robots will test the boundaries of our ethical and legal frameworks with increasing audacity,” writes Illah Reza Nourbakhsh in his Foreign Affairs article “The Coming Robot Dystopia,” and it’s difficult to envision a scenario in which the pace doesn’t just keep getting faster, cheaper and at least somewhat out of control.

We live in a strange duality now: On one hand, citizens worry that government has too much access to their information–and that’s true–but government is likely tightening its grip just as it’s losing it. Technology easily outpaces legislation, and it’s possible that at some point in the near future even those who espoused hatred of government may be wistful for a stable center. 

From Nourbakhsh:

Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

This is a familiar warning when it comes to technological innovations of all kinds. But there is a crucial distinction between what’s happening now and the last great breakthrough in robotic technology, when manufacturing automatons began to appear on factory floors during the late twentieth century. Back then, clear boundaries separated industrial robots from humans: protective fences isolated robot workspaces, ensuring minimal contact between man and machine, and humans and robots performed wholly distinct tasks without interacting.

Such barriers have been breached, not only in the workplace but also in the wider society: robots now share the formerly human-only commons, and humans will increasingly interact socially with a diverse ecosystem of robots.•


When I put up a post three days ago about the automated grocery store in Iowa, it brought to mind the first attempt at such a store, the Keedoozle, one of Clarence Saunders’ attempts at a resurgence in the aftermath of the Wall Street bath the Memphis-based Piggly Wiggly founder took while attempting, and failing spectacularly at, a corner in his company’s stock. In his 1959 New Yorker piece about the Saunders Affair, John Brooks described the Keedoozle:

His hopes were pinned on the Keedoozle, an electrically operated grocery store, and he spent the better part of the last twenty years of his life trying to perfect it. In a Keedoozle store, the merchandise was displayed behind glass panels, each with a slot beside it, like the food in an Automat. There the similarity ended, for, instead of inserting coins in the slot to open a panel and lift out a purchase, Keedoozle customers inserted a key that they were given on entering the store. Moreover, Saunders’ thinking had advanced far beyond the elementary stage of having the key open the panel; each time a Keedoozle key was inserted inside a slot, the identity of the item selected was inscribed in code on a segment of recording tape embedded in the key itself, and simultaneously the item was automatically transferred to a conveyor belt that carried it to an exit gate at the front of the store. When a customer had finished his shopping, he would present his key to an attendant at the gate, who would decipher the tape and add up the bill. As soon as this was paid, the purchases would be catapulted into the customer’s arms, all bagged and wrapped by a device at the end of a conveyor belt.

A couple of pilot Keedoozle stores were tried out–one in Memphis and the other in Chicago–but it was found that the machinery was too complex and expensive to compete with the supermarket pushcarts. Undeterred, Saunders set to work on an even more intricate mechanism–the Foodlectric, which would do everything the Keedoozle would do and add up the bill as well.•
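Brooks’ description is mechanical enough to model in a few lines: each key insertion inscribes an item code on the key’s tape, and the attendant at the exit gate deciphers the tape and adds up the bill. Here is a minimal sketch in Python; the item names and prices are invented for illustration, not taken from any actual Keedoozle record.

```python
# Hypothetical model of the Keedoozle flow Brooks describes: each key
# insertion inscribes an item's identity on the key's tape; the attendant
# at the exit gate deciphers the tape and adds up the bill.

PRICES = {"flour": 0.23, "coffee": 0.39, "soap": 0.11}  # invented prices

class KeedoozleKey:
    def __init__(self):
        self.tape = []  # item codes inscribed by the display-panel slots

    def insert_into_slot(self, item):
        # In the store, this would also send the item to the conveyor belt.
        self.tape.append(item)

def decipher_and_total(key):
    """The attendant's job: read the tape and add up the bill."""
    return sum(PRICES[item] for item in key.tape)

key = KeedoozleKey()
key.insert_into_slot("flour")
key.insert_into_slot("coffee")
print(f"Bill: ${decipher_and_total(key):.2f}")  # Bill: $0.62
```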

______________________

From the February 19, 1937 Brooklyn Daily Eagle:

______________________

The Keedoozle inspired a Memphis competitor in 1947:


Sometime in the 21st century, you and me and Peter Thiel are going to die, and that’s horrible because even when the world is trying, it’s spectacular.

The PayPal co-founder is spending a portion of his great wealth on anti-aging research, hoping to radically extend life if not defeat death, which is a wonderful thing for people of the distant future, though it likely won’t save any of us. I will say that I wholly agree with Thiel that those who oppose radical life extension because it’s “unnatural” are just wrong.

From a Washington Post Q&A Ariana Eunjung Cha conducted with Thiel:

Question:

Leon Kass — the physician who was head of the President’s Council on Bioethics from 2001 to 2005 — as well as a number of other prominent historians, philosophers and ethicists have spoken out against radical life extension. Kass, for instance, has argued that it’s just not natural, that we’ll end up losing some of our humanity in the process. What do you think of their concerns?

Peter Thiel:

I believe that evolution is a true account of nature, but I think we should try to escape it or transcend it in our society. What’s true of evolution, I would argue, is true of all of nature. Even basic dental hygiene. If it’s natural for your teeth to start falling out, then you shouldn’t get cavities replaced? In the 19th century, people made the argument that it was natural for childbirth to be painful for women and therefore you shouldn’t have pain medication. I think the nature argument tends to go very wrong. . . . I think it is against human nature not to fight death.

Question:

What about the possibility of innovation stagnation? Some argue that if you live forever, you won’t be as motivated to invent new ways of doing things.

Peter Thiel:

That’s the Steve-Jobs-commencement-speech-in-2005 argument — that he was working so hard because he knew he was going to die. I don’t believe that’s true. There are many people who stop trying because they think they don’t have enough time. Because they are 85. But that 85-year-old could have gotten four PhDs from 65 to 85, but he didn’t do it because he didn’t think he had enough time. I think these arguments can go both ways. I think some people could be less motivated. I think a lot of people would be more motivated because they would have more time to accomplish something meaningful and significant.•

 


In a 2012 Playboy Interview, Richard Dawkins addressed whether a fuller understanding of genetics would allow us to create something akin to extinct life forms, even prehistoric ones. The passage:

Playboy:

Do we know which came first—bigger brains or bipedalism?

Richard Dawkins:

Bipedalism came first.

Playboy:

How do we know that?

Richard Dawkins:

Fossils. That’s one place the fossils are extremely clear. Three million years ago Australopithecus afarensis were bipedal, but their brains were no bigger than a chimpanzee’s. The best example we have is Lucy [a partial skeleton found in 1974 in Ethiopia]. In a way, she was an upright-walking chimpanzee.

Playboy:

You like Lucy.

Richard Dawkins:

Yes. [smiles]

Playboy:

You’ve said you expect mankind will have a genetic book of the dead by 2050. How would that be helpful?

Richard Dawkins:

Because we contain within us the genes that have survived through generations, you could theoretically read off a creature’s evolutionary history. “Ah, yes, this animal lived in the sea. This is the time when it lived in deserts. This bit shows it must have lived up mountains. And this shows it used to burrow.”

Playboy:

Could that help us bring back a dinosaur? You have suggested crossing a bird and a crocodile and maybe putting it in an ostrich egg.

Richard Dawkins:

It would have to be more sophisticated than a cross. It’d have to be a merging.

Playboy:

Could we recreate Lucy?

Richard Dawkins:

We already know the human genome and the chimpanzee genome, so you could make a sophisticated guess as to what the genome of the common ancestor might have been like. From that you might be able to grow an animal that was close to the common ancestor. And from that you might split the difference between that ancestral animal you re-created and a modern human and get Lucy.•


Excellent job by Daniel Oberhaus of Vice Motherboard with his smart interview of Noam Chomsky and theoretical physicist Lawrence Krauss about contemporary scientific research and space exploration. Chomsky is disturbed by the insinuation of private enterprise into Space Race 2.0, a quest for trillions, while Krauss thinks the expense of such an endeavor makes it permanently moot. I’m not so sure about the “permanently” part. Both subjects encourage unmanned space missions as a way to speed up science while scaling back costs. The opening:

Vice:

The cost of entry is so high for space, and arguably for science as well, that the general public seems to be excluded from partaking right from the start. In that light, what can really be done to reclaim the commons of space?

Noam Chomsky:

If you look at the whole history of the space program, a lot of things of interest were discovered, but it was done in a way that sort of ranges from misleading to deceitful. So what was the point of putting a man on the moon? A person is the worst possible instrument to put in space: you have to keep them alive, which is very complex, there are safety procedures, and so on. The right way to explore space is with robots, which is now done. So why did it start with a man in space? Just for political reasons.

Lawrence Krauss:

Of course we should [pressure the government to divert more funds to space programs]. But again, if you ask me if we should appropriate funds for the human exploration of space, then my answer is probably not. Unmanned space exploration, from a scientific perspective, is far more important and useful. If we’re doing space exploration for adventure, then it’s a totally different thing. But from a scientific perspective, we should spend the money on unmanned space exploration.

Noam Chomsky:

John F. Kennedy made it a way of overcoming the failure of the Bay of Pigs and the fact that the Russians in some minor ways had gotten ahead of us, even though the American scientists understood that that wasn’t true. So you had to have a dramatic event, like a man walking on the moon. There’s not very much point to have a man walking on the moon except to impress people.

As soon as the public got bored with watching some guy stumble around on the moon, those projects were ended. Then space exploration began as a scientific endeavor. Things continue to develop like this to a large extent. Take, again, the development of computers. That was presented under the rubric of defense. The Pentagon doesn’t say, ‘We’re taking your tax money so that maybe your grandson can have an iPad.’ What they say is, ‘We’re defending ourselves from the Russians.’ What we’re actually doing is seeing if we can create the cutting edge of the economy.•


French aviation pioneer Robert Esnault-Pelterie, inventor of the joystick flight control, knew 41 years before “the giant leap” that a manned trip to the moon and back was theoretically possible. He believed we were “actually becoming birdmen” and thought atomic energy might aid us in reaching not only the moon but also Mars and Venus, a plan Project Orion scientists worked on in earnest in the 1950s. Below is an article from the February 12, 1928 Brooklyn Daily Eagle.


In a Washington Post piece, Vivek Wadhwa reveals how bullish he is on the near-term future of robotics in the aftermath of the DARPA challenge. He believes Jetsons-level assistants are close, and although he acknowledges such progress would promote technological unemployment, he doesn’t really dwell on that thorny problem. An excerpt:

For voice recognition, we are already pretty close to C-3PO-like capabilities. Both Apple and Google use artificial intelligence to do a reasonably good job of translating speech to text, even in noisy environments. No bot has passed the Turing Test yet, but they are getting closer and closer. When it happens, your droid will be able to converse with you in complex, human-like interactions.

The computational power necessary to enable these robots to perform these difficult tasks is still lacking. Consider, however, that in about seven or eight years, your iPhone will have the computational ability of a human brain, and you can understand where we are headed.

Robots will be able to walk and talk like human beings.

What are presently halting steps moving up stairs will, in the next DARPA challenge, become sure-footed ascents. The ability to merely open a door will become that of opening a door and holding a bag of groceries and making sure the dog doesn’t get out.

And, yes, Rosie will replace lots of human jobs, and that is reason to worry — and cheer.•


I was on the subway the other day and a disparate group of six people of different ages, races and genders began a spontaneous conversation about how they couldn’t afford to live anywhere nice anymore and how the middle class was gone in America, that the country wasn’t for them anymore. Small sample size to be sure, but one that’s backed up by more than four decades of research. Part of the problem could be remedied politically if finding solutions were in vogue in America, but the bigger picture would seem to be a grand sweep of history that announced itself in the aftermath of the Great Recession, as profits returned but not jobs.

I fear Derek Thompson’s excellent Atlantic feature “A World Without Work” may be accurate in its position that this time it’s different, that technological unemployment may take root in America (and elsewhere), and I think one of the writer’s biggest contributions is explaining how relatively quickly the new normal can take hold. (He visits Youngstown, a former industrial boomtown that went bust, to understand the ramifications of work going away.)

I don’t believe a tearing of the social fabric need attend an enduring absence of universal employment provided wealth isn’t aggregated at one end of the spectrum, but I don’t have much faith right now in government to step into the breach should such opportunities significantly deteriorate. Much of Thompson’s piece is dedicated to finding potential solutions to a radical decline of Labor–a post-workist world. He believes America can sustain itself if citizens are working fewer hours but perhaps not if most don’t need to punch the clock at all. I’m a little more sanguine than that if basic needs are covered. Then I think we’ll see people get creative.

An excerpt:

After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.

For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.

Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.

Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.

To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.


Excerpts from a pair of recent Harvard Business Review articles which analyze the increasing insinuation of robots into the workplace. The opening of Walter Frick’s “When Your Boss Wears Metal Pants” examines the emotional connection we quickly make with robots who can feign social cues. In “The Great Decoupling,” Amy Bernstein and Anand Raman discuss technological unemployment, among other topics, with Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age.

___________________________

From Frick:

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society.•

___________________________

From Bernstein and Raman:

Harvard Business Review:

As the Second Machine Age progresses, will there be any jobs for human beings?

Andrew McAfee:

Yes, because humans are still far superior in three skill areas. One is high-end creativity that generates things like great new business ideas, scientific breakthroughs, novels that grip you, and so on. Technology will only amplify the abilities of people who are good at these things.

The second category is emotion, interpersonal relations, caring, nurturing, coaching, motivating, leading, and so on. Through millions of years of evolution, we’ve gotten good at deciphering other people’s body language…

Erik Brynjolfsson:

…and signals, and finishing people’s sentences. Machines are way behind there.

The third is dexterity, mobility. It’s unbelievably hard to get a robot to walk across a crowded restaurant, bus a table, take the dishes back into the kitchen, put them in the sink without breaking them, and do it all without terrifying the restaurant’s patrons. Sensing and manipulation are hard for robots.

None of those is sacrosanct, though; machines are beginning to make inroads into each of them.

Andrew McAfee:

We’ll continue to see the middle class hollowed out and will see growth at the low and high ends. Really good executives, entrepreneurs, investors, and novelists—they will all reap rewards. Yo-Yo Ma won’t be replaced by a robot anytime soon, but financially, I wouldn’t want to be the world’s 100th-best cellist.•



SoftBank’s Pepper looks like a child killed by a lightning strike who returned as a ghost to make you pay for handing him a watering can during an electrical storm.

He’s described as an “emotional robot,” which makes me take an immediate disliking to him. Manufactured to express feelings based on stimuli in his surroundings, Pepper is supposed to be shaped by his environment, but I wonder if his behavior will shape those who own him. We may get an answer since the robot sold out in Japan in under a minute and will soon be available for sale internationally.

From Marilyn Malara at UPI:

The humanoid robot is described as one that can feel emotion in a way humans do naturally through a system similar to a human’s hormonal response to stimuli. The robot can generate its own emotions by gathering information from its cameras and various sensors. Softbank says that Pepper is a “he” and can read human facial expressions, words and surroundings to make decisions. He can sigh or even raise his voice; he can get scared from dimming lights and happy when praised.

Along with the product’s launch, 200 applications are available to download into the robot including one that can record everyday life in the form of a robotic scrapbook.

Last year, Nestle Japan used Pepper to sell Nescafe coffee machines in appliance stores all over the country. “Pepper will be able to explain Nescafe products and services and engage in conversation with consumers,” Nestle Japan CEO Kohzoh Takaoka said in October before its roll-out.•
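The “hormonal” system UPI describes amounts to stimuli nudging an internal state, with behavior read off that state. Below is a toy sketch of the idea in Python; the stimuli, weights and behaviors are entirely hypothetical and bear no relation to SoftBank’s actual software.

```python
# Toy "hormonal" emotion model: stimuli nudge a mood value, and behavior
# is chosen from the current mood. All names and numbers are invented.

def update_mood(mood, stimulus):
    """Nudge mood (-1.0 scared/sad .. +1.0 happy) according to a stimulus."""
    effects = {
        "praised": +0.3,        # Pepper is happy when praised
        "lights_dimmed": -0.4,  # and scared when the lights dim
        "smile_detected": +0.2,
    }
    mood += effects.get(stimulus, 0.0)
    return max(-1.0, min(1.0, mood))  # clamp to the valid range

def behavior(mood):
    if mood > 0.5:
        return "raise voice happily"
    if mood < -0.5:
        return "sigh"
    return "idle chatter"

mood = 0.0
for stimulus in ("praised", "praised", "lights_dimmed"):
    mood = update_mood(mood, stimulus)
print(round(mood, 2), behavior(mood))  # 0.2 idle chatter
```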

____________________________

“Can you lend me $100?”


In the New York Times, A.O. Scott, who is quietly one of the funniest writers working anywhere, offers a largely positive review of philosopher Susan Neiman’s new book about perpetual adolescence, something that’s become the norm in this era of fanboy (and -girl) ascendancy, its commodification seemingly having reached a saturation point until, yes, the next comic-book or YA franchise. The opening:

A great deal of modern popular culture — including just about everything pertaining to what French savants like to call le nouvel âge d’or de la comédie américaine — runs on the disavowal of maturity. The ideal consumer is a mirror image of a familiar comic archetype: a man-child sitting in his parents’ basement with his video games and his Star Wars figurines; a postgraduate girl and her pals treating the world as their playground. Baby boomers pursue perpetual youth into retirement. Gen-Xers hold fast to their skateboards, their Pixies T-shirts and their Beastie Boys CDs. Nobody wants to be an adult anymore, and every so often someone writes an article blaming Hollywood, attachment parenting, global capitalism or the welfare state for this catastrophe. I’ve written one or two of those myself. It’s not a bad racket, and since I’m intimately acquainted, on a professional basis, with the cinematic oeuvre of Adam Sandler, I qualify as something of an expert. 

In the annals of anti-infantile cultural complaint, Susan Neiman’s new book, Why Grow Up?, is both exemplary and unusual. An American-born philosopher who lives in Berlin, Neiman has a pundit’s fondness for the sweeping generalization and the carefully hedged argumentative claim. “I’m not suggesting that we do without the web entirely,” she writes in one of her periodic reflections on life in the digital age, “just that we refuse to let it rule.” Elsewhere she observes that “if you spend your time in cyberspace watching something besides porn and Korean rap videos, you can gain a great deal,” a hypothesis I for one am eager to test.•


Wow, this is wonderful: Nicholas Carr posted a great piece from a recent lecture in which he addressed Marshall McLuhan’s idea of automation as media. In this excerpt, he traces how cartography, likely the first medium, went from passive tool to active player as we transitioned from paper to software:

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving.•


I believe Weak AI can remake a wide swath of our society in the coming decades, but the more sci-fi Strong AI moonshots don’t seem within reach to me. When Yuval Harari worries that technologists may play god, he’s saying nothing theoretically impossible. In fact, the innovations he’s discussing (genetic engineering, cyborgism, etc.) will almost definitely occur if we get lucky (and creative) and survive any near-term extinction.

But the thing about Silicon Valley remaking our world is that it’s really tough to do that, especially when dealing with such hard problems–even the hard problem (i.e., consciousness). Google is an AI company disguised as a search company, but it’s certainly possible that it never becomes great at anything beyond search and (perhaps) a few Weak AI triumphs. Time will tell. But it probably will take significant time.

In a Washington Post piece, Bhaskar Chakravorti wonders if Google X is more moonshot or crater, though I think it’s too early to be assessing such things. Creating 100% driverless autos wasn’t going to happen overnight, let alone radical life extension. An excerpt:

In its relentless hunt for innovation, Google is a voracious acquirer of innovative companies. In the two years prior to 2014, it outspent its five closest rivals combined on acquisitions. Here, too, it has failed in dramatic ways. A single acquisition, Motorola Mobility, cost $12 billion — almost half the amount that Google spent on all its acquisitions over a decade — which it sold for $3 billion two years later.

None of these factors deters Google’s leaders or its many admirers. Much of the public focus has shifted recently to its Google X unit, which not only has a chief most appropriately named, Astro Teller, it has a manager with an official job title of Head of Getting Moonshots Ready for Contact With the Real World. Now, the drumbeat has picked up as some of Google’s moonshots come closer to landing. The Google self-driven car is coming around the corner quite literally.  Google’s high-altitude balloons are being tested to offer Internet access to those without access. And the latest: Google intends to take on the myriad urban innovation challenges with its brand new Sidewalk Labs. Beyond the roads, sidewalks and the skies, Google wants to tinker with life itself, from glucose-monitoring contact lenses to longevity research.

Google’s revenue source is essentially unchanged and yet it spends disproportionately to move the needle. But these unprecedented moonshots could simply be money pits.•


The unanswered questions that we have about the Snowden Affair are probably a little different than the ones circulating in the head of cybersecurity expert and former fugitive John McAfee, who has written an unsurprisingly strange, paranoid and colorful piece on the topic for International Business Times. An excerpt:

The Russian interviewer also asked me about Snowden: “In your opinion, is Edward Snowden a real character or one invented by the intelligence services?”

And this was my answer:

“I doubt everything, even my own senses at times. Is the apparent US government the real US government? Could the real government be a committee of the largest corporate entities who mount this play of democracy to veil the real machinations?

Are the divisions of the world into apparent “countries” even real? Are the apparent divisions within my own country real? Do we really have a tripartite system of government, where the executive, legislative and judicial divisions are, in fact, real divisions? I could go on forever.

As to Edward Snowden, I find the following inconsistencies to be very troubling:

1. He is a man of soft character and limited experience in the difficult and dangerous world into which he so willingly and knowingly thrust himself. I have personally been a fugitive. I have experienced many dangers and difficult situations, and even I with my excellent survival skills would not willingly bring down such wrath upon myself. Why would a man of Snowden’s apparent character do so?

2. He was safe in Hong Kong prior to entering Russia. With no offense to your country, I believe that Snowden was smart enough to know that he could have faded into the back alleys and byways of Hong Kong and, with his talents, have led a thriving existence there. Chinese women are equally as attractive as Russian women and not quite so dangerous. It is cheaper to live in Hong Kong and the weather is better. It is, quite frankly, a colourful place full of opportunity for a clever person. Why did he leave for Russia?

3. I doubt the truth of it all because my only source of information on the subject I have obtained through the world’s press. What truth can there be in it?”•

 


In a New Statesman essay, Yuval Noah Harari, author of the great book Sapiens, argues that if we’re on the precipice of a grand human revolution–in which we commandeer evolutionary forces and create a post-scarcity world–it’s being driven by private-sector technocracy, not politics, that attenuated, polarized thing. The next Lenins, the new visionaries focused on large-scale societal reorganization, Harari argues, live in Silicon Valley, and even if they don’t succeed, their efforts may significantly impact our lives. An excerpt:

Whatever their disagreements about long-term visions, communists, fascists and liberals all combined forces to create a new state-run leviathan. Within a surprisingly short time, they engineered all-encompassing systems of mass education, mass health and mass welfare, which were supposed to realise the utopian aspirations of the ruling party. These mass systems became the main employers in the job market and the main regulators of human life. In this sense, at least, the grand political visions of the past century have succeeded in creating an entirely new world. The society of 1800 was completely destroyed and we are living in a new reality altogether.

In 1900 or 1950 politicians of all hues thought big, talked big and acted even bigger. Today it seems that politicians have a chance to pursue even grander visions than those of Lenin, Hitler or Mao. While the latter tried to create a new society and a new human being with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and supercomputers. In the coming decades, technological breakthroughs are likely to change human society, human bodies and human minds in far more drastic ways than ever before.

Whereas the Nazis sought to create superhumans through selective breeding, we now have an increasing arsenal of bioengineering tools at our disposal. These could be used to redesign the shapes, abilities and even desires of human beings, so as to fulfil this or that political ideal. Bioengineering starts with the understanding that we are far from realising the full potential of organic bodies. For four billion years natural selection has been tinkering and tweaking with these bodies, so that we have gone from amoebae to reptiles to mammals to Homo sapiens. Yet there is no reason to think that sapiens is the last station. Relatively small changes in the genome, the neural system and the skeleton were enough to upgrade Homo erectus – who could produce nothing more impressive than flint knives – to Homo sapiens, who produces spaceships and computers. Who knows what the outcome of a few more changes to our genome, neural system and skeleton might be? Bioengineering is not going to wait patiently for natural selection to work its magic. Instead, bioengineers will take the old sapiens body and intentionally rewrite its genetic code, rewire its brain circuits, alter its biochemical balance and grow entirely new body parts.

On top of that, we are also developing the ability to create cyborgs.•


The robotic store has been a long-held dream, and in and of itself it’s a good thing, but it’s certainly not a positive for Labor unless new work opportunities pop up to replace those that have disappeared or we come to some sort of political solution to a shrinking need for human hands. In Iowa, a completely automated nonprofit grocery will offer shoppers healthy food, which is wonderful, but not completely wonderful. From Christopher Snyder:

No more long lines at the grocery store – the future of food shopping is getting a high-tech upgrade.

Des Moines, Iowa, is planning to build a first-of-a-kind robotic grocery store as an experiment to offer food and necessities to locals anytime at their convenience.

A partnership between the nonprofit Eat Greater Des Moines and the business equipment firm Oasis24seven will see an automated, vending machine-style unit come to the area.

“Throughout Des Moines, there are areas of town where access to quality food is limited,” said Aubrey Alvarez, the nonprofit’s executive director. “We would love for a full service grocery store to move into these areas, but until that time the robotic unit will address the gap in the community.”

She added this “project takes a simple and familiar idea, a vending machine, and turns it on its head. Robotic Retail will be accessible to everyone.”•


If Marshall McLuhan and Jerome Agel were still alive, they would likely not collaborate with Quentin Fiore (95 this year) on a physical book, not even on one as great as The Medium Is the Massage, a paperback that fit between its covers something akin to the breakneck genius of Godard’s early-’60s explosion. Would they create a Facebook page that comments on Facebook or a Twitter account of aphorisms or maybe an app? I don’t know, but it likely wouldn’t be a leafy thing you could put on a wooden shelf.

About 10 days ago, I bought a copy of The Age of Earthquakes, a book created by Douglas Coupland, Hans Ulrich Obrist and Shumon Basar, which seems a sort of updating of McLuhan’s most famous work, a Massage for the modern head and neck. It looks at our present and future but also, by virtue of being a tree-made thing, the past. As soon as I’m done with the title I’m reading now, I’ll spend a day with Earthquakes and post something about it.

In his latest Financial Times column, Coupland writes about the twin refiners of the modern mood: pharmacology and the Internet, the former of which I think has made us somewhat happier, and the latter of which we’ve used, I think, largely to self-medicate, stretching egos to cover unhappiness rather than dealing with it, and as the misery, untreated, expands, so does its cover. We’re smarter because of the connectivity, but I don’t know that it’s put us in a better mood.

Coupland is much more sanguine than I am about it all. He’s in a better mood. An excerpt:

If someone time travelled from 1990 (let alone from 1900) to 2015 and was asked to describe the difference between then and now, they might report back: “Well, people don’t use light bulbs any more; they use these things called LED lights, which I guess save energy, but the light they cast is cold. What else? Teenagers seem to no longer have acne or cavities, cars are much quieter, but the weirdest thing is that everyone everywhere is looking at little pieces of glass they’re holding in their hands, and people everywhere have tiny earphones in their ears. And if you do find someone without a piece of glass or earphones, their faces have this pained expression as if to say, ‘Where is my little piece of glass? What could possibly be in or on that little piece of glass that could so completely dominate a species in one generation?’”

 . . . 

To pull back a step or two: as a species we ought to congratulate ourselves. In just a quarter of a century we have completely rewritten the menu of possible human moods, and quite possibly for the better. Psychopharmacology, combined with the neural reconfiguration generated by extended internet usage, has turned human behaviour into something inexplicable to someone from the not too distant past. We forget this so easily. Until Prozac came out in 1987, the only mood-altering options were mid-century: booze, pot and whatever MGM fed Judy Garland to keep her vibrating for three decades. The Prozac ripple was enormous . . .•


Marshall McLuhan was right, for the most part. 

The Canadian theorist saw Frankenstein awakening from the operating table before others did, so the messenger was often mistaken for the monster. But he was neither Dr. Victor nor his charged charge, just an observer with a keen eye, one who could recognize patterns and who realized humans might not be alone forever in that talent. Excerpts follow from two 1960s pieces that explore his ideas. The first is from artist-writer Richard Kostelanetz‘s 1967 New York Times article “Understanding McLuhan (In Part)” and the other from John Brooks’ 1968 New Yorker piece “Xerox Xerox Xerox Xerox.”

____________________________

Kostelanetz’s opening:

Marshall McLuhan, one of the most acclaimed, most controversial and certainly most talked-about of contemporary intellectuals, displays little of the stuff of which prophets are made. Tall, thin, middle-aged and graying, he has a face of such meager individual character that it is difficult to remember exactly what he looks like; different photographs of him rarely seem to capture the same man.

By trade, he is a professor of English at St. Michael’s College, the Roman Catholic unit of the University of Toronto. Except for a seminar called “Communication,” the courses he teaches are the standard fare of Mod. Lit. and Crit., and around the university he has hardly been a celebrity. One young woman now in Toronto publishing remembers that a decade ago, “McLuhan was a bit of a campus joke.” Even now, only a few of his graduate students seem familiar with his studies of the impact of communications media on civilization–those famous books that have excited so many outside Toronto.

McLuhan’s two major works, The Gutenberg Galaxy (1962) and Understanding Media (1964), have won an astonishing variety of admirers. General Electric, I.B.M. and Bell Telephone have all had him address their top executives; so have the publishers of America’s largest magazines. The composer John Cage made a pilgrimage to Toronto especially to pay homage to McLuhan, and the critic Susan Sontag has praised his “grasp on the texture of contemporary reality.”

He has a number of eminent and vehement detractors, too. The critic Dwight Macdonald calls McLuhan’s books “impure nonsense, nonsense adulterated by sense.” Leslie Fiedler wrote in Partisan Review: “Marshall McLuhan. . .continually risks sounding like the body-fluids man in Doctor Strangelove.”

Still, the McLuhan movement rolls on.•

____________________________

From Brooks:

In the opinion of some commentators, what has happened so far is only the first phase of a kind of revolution in graphics. “Xerography is bringing a reign of terror into the world of publishing, because it means that every reader can become both author and publisher,” the Canadian sage Marshall McLuhan wrote in the spring, 1966, issue of the American Scholar. “Authorship and readership alike can become production-oriented under xerography.… Xerography is electricity invading the world of typography, and it means a total revolution in this old sphere.” Even allowing for McLuhan’s erratic ebullience (“I change my opinions daily,” he once confessed), he seems to have got his teeth into something here. Various magazine articles have predicted nothing less than the disappearance of the book as it now exists, and pictured the library of the future as a sort of monster computer capable of storing and retrieving the contents of books electronically and xerographically. The “books” in such a library would be tiny chips of computer film — “editions of one.” Everyone agrees that such a library is still some time away. (But not so far away as to preclude a wary reaction from forehanded publishers. Beginning late in 1966, the long-familiar “all rights reserved” rigmarole on the copyright page of all books published by Harcourt, Brace & World was altered to read, a bit spookily, “All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system …” Other publishers quickly followed the example.) One of the nearest approaches to it in the late sixties was the Xerox subsidiary University Microfilms, which could, and did, enlarge its microfilms of out-of-print books and print them as attractive and highly legible paperback volumes, at a cost to the customer of four cents a page; in cases where the book was covered by copyright, the firm paid a royalty to the author on each copy produced. But the time when almost anyone can make his own copy of a published book at lower than the market price is not some years away; it is now. All that the amateur publisher needs is access to a Xerox machine and a small offset printing press. One of the lesser but still important attributes of xerography is its ability to make master copies for use on offset presses, and make them much more cheaply and quickly than was previously possible. According to Irwin Karp, counsel to the Authors League of America, an edition of fifty copies of any printed book could in 1967 be handsomely “published” (minus the binding) by this combination of technologies in a matter of minutes at a cost of about eight-tenths of a cent per page, and less than that if the edition was larger. A teacher wishing to distribute to a class of fifty students the contents of a sixty-four-page book of poetry selling for three dollars and seventy-five cents could do so, if he were disposed to ignore the copyright laws, at a cost of slightly over fifty cents per copy.

The danger in the new technology, authors and publishers have contended, is that in doing away with the book it may do away with them, and thus with writing itself. Herbert S. Bailey, Jr., director of Princeton University Press, wrote in the Saturday Review of a scholar friend of his who has cancelled all his subscriptions to scholarly journals; instead, he now scans their tables of contents at his public library and makes copies of the articles that interest him. Bailey commented, “If all scholars followed [this] practice, there would be no scholarly journals.” Beginning in the middle sixties, Congress has been considering a revision of the copyright laws — the first since 1909. At the hearings, a committee representing the National Education Association and a clutch of other education groups argued firmly and persuasively that if education is to keep up with our national growth, the present copyright law and the fair-use doctrine should be liberalized for scholastic purposes. The authors and publishers, not surprisingly, opposed such liberalization, insisting that any extension of existing rights would tend to deprive them of their livelihoods to some degree now, and to a far greater degree in the uncharted xerographic future. A bill that was approved in 1967 by the House Judiciary Committee seemed to represent a victory for them, since it explicitly set forth the fair-use doctrine and contained no educational-copying exemption. But the final outcome of the struggle was still uncertain late in 1968. McLuhan, for one, was convinced that all efforts to preserve the old forms of author protection represent backward thinking and are doomed to failure (or, anyway, he was convinced the day he wrote his American Scholar article). “There is no possible protection from technology except by technology,” he wrote. “When you create a new environment with one phase of technology, you have to create an anti-environment with the next.” But authors are seldom good at technology, and probably do not flourish in anti-environments.•
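Brooks’ per-copy arithmetic checks out: at the quoted eight-tenths of a cent per page, the sixty-four-page, $3.75 poetry book in his example really does come to “slightly over fifty cents per copy.” A quick sketch in Python, using only figures from the excerpt:

```python
# Verifying the per-copy arithmetic in Brooks's 1968 piece.
cost_per_page = 0.008  # dollars: "eight-tenths of a cent per page"
pages = 64             # the poetry book in his example
retail = 3.75          # its cover price

per_copy = cost_per_page * pages
print(f"Copy cost: ${per_copy:.3f}")                   # $0.512, "slightly over fifty cents"
print(f"Saving vs. retail: ${retail - per_copy:.2f}")  # $3.24
```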


Grantland has many fine writers and reporters, but the twin revelations for me have been Molly Lambert and Alex Pappademas, whom I enjoy reading as much as anyone working at any American publication. The funny thing is, I’m not much into pop culture, which is ostensibly their beat. But as with the best of journalists, the subject they cover most directly is merely an entry into many other ones, long walks that end up in big worlds. 

Excerpts follow from a recent piece by each. In “Start-up Costs,” a look at Silicon Valley and Halt and Catch Fire, Pappademas circles back to Douglas Coupland’s 1995 novel, Microserfs, a meditation on the reimagined office space written just before Silicon Valley became fully a brand as well as a land. In Lambert’s “Life Finds a Way,” the release of Jurassic World occasions an exploration of the enduring beauty of decommissioned theme parks–dinosaurs in and of themselves, at the tail end of an entropic state. Both pieces are concerned with an imposition on the natural order of things by capitalism.

_______________________________

From Pappademas:

Microserfs hit stores in 1995, which turned out to be a pretty big year for Net-this and Net-that. Yahoo, Amazon, and Craigslist were founded; Javascript, the MP3 compression standard, cost-per-click and cost-per-impression advertising, the first “wiki” site, and the Internet Explorer browser were introduced. Netscape went public; Bill Gates wrote the infamous “Internet Tidal Wave” memo to Microsoft executives, proclaiming in the course of 5,000-plus words that the Internet was “the most important single development to come along since the IBM PC was introduced in 1981.” Meanwhile, at any time between May and September, you could walk into a multiplex not yet driven out of business by Netflix and watch a futuristic thriller like Hackers or Johnny Mnemonic or Virtuosity or The Net, movies that capitalized on the culture’s tech obsession as if it were a dance craze, spinning (mostly absurd) visions of the (invariably sinister) ways technology would soon pervade our lives. Microserfs isn’t as hysterical as those movies, and its vision of the coming world is much brighter, but in its own way it’s just as wrongheaded and nailed-to-its-context.

“What is the search for the next great compelling application,” Daniel asks at one point, “but a search for the human identity?” Microserfs argues that the entrepreneurial fantasy of ditching a big corporation to work at a cool start-up with your friends can actually be part of that search — that there’s a way to reinvent work in your own image and according to your own values, that you can find the same transcendence within the sphere of commerce that the slackers in Coupland’s own Generation X eschewed McJobs in order to chase. The notion that cutting the corporate cord to work for a start-up often just means busting out of a cubicle in order to shackle oneself to a laptop in a slightly funkier room goes unexamined; the possibility that work within a capitalist system, no matter how creative and freeform and unlike what your parents did, might be fundamentally incompatible with self-actualization and spiritual fulfillment is not on the table.•

_______________________________

Lambert’s opening:

I drove out to the abandoned amusement park originally called Jazzland during a trip to New Orleans earlier this year. Jazzland opened in 2000, was rebranded as Six Flags New Orleans in 2003, and was damaged beyond repair a decade ago by the flooding caused by Hurricane Katrina. But in the years since it’s been closed, it has undergone a rebirth as a filming location. It serves as the setting for the new Jurassic World. As I approached the former Jazzland by car, a large roller coaster arced into view. The park, just off Interstate 10, was built on muddy swampland. I have read accounts on urban exploring websites by people who’ve sneaked into the park that say it’s overrun with alligators and snakes.

After the natural disaster the area wasted no time in returning to its primeval state: a genuine Jurassic World. It was in the Jurassic era when crocodylia became aquatic animals, beginning to resemble the alligators currently populating Jazzland. I saw birds of prey circling over the theme park as I reached the front gates, only to be told in no uncertain terms that the site is closed to outsiders. I pleaded with the security guard that I am a journalist just looking for a location manager to talk to, but was forbidden from driving past the very first entrance into the parking lot. I could see the ticket stands and Ferris wheel, but accepted my fate and drove away, knowing I’d have to wait for Jurassic World to see Jazzland. As I drove off the premises, I could still glimpse the tops of the coasters and Ferris wheel, obscured by trees.

I am fascinated by theme parks that return to nature, since the idea of a theme park is such an imposition on nature to begin with — an obsessively ordered attempt to overrule reality by providing an alternate, superior dimension.•

Tags: ,

Olaf Stampf, who always conducts smart interviews for Spiegel, has a Q&A with Johann-Dietrich Wörner, the new general director of the European Space Agency. Two quick excerpts follow, one about a moon colony and the other about the potential of a manned Mars voyage.

______________________________

Spiegel:

Which celestial body would you like to travel to most of all?

Johann-Dietrich Wörner:

My dream would be to fly to the moon and build permanent structures, using the raw materials available there. For instance, regolith, or moon dust, could be used to make a form of concrete. Using 3-D printers, we could build all kinds of things with that moon concrete — houses, streets and observatories, for example.

______________________________

Spiegel:

Wouldn’t it be a much more exciting challenge to hazard a joint, manned flight to Mars?

Johann-Dietrich Wörner:

Man will not give up the dream of walking on Mars, but it won’t happen until at least 2050. The challenges are too great, and we don’t have the technologies yet to complete this vast project. Most of all, a trip to Mars would take much too long today. It would be irresponsible, not just from a scientific standpoint, to send astronauts to the desert planet if they could only return after more than two years.•

Tags: ,

Rachel Armstrong, a medical doctor who became an architect, wants to combine her twin passions, believing buildings can be created not only from plastics recovered from our waterways but also from biological materials. From Christopher Hume of the Toronto Star:

She also imagines using living organisms such as bacteria, algae and jellyfish as building materials. If that sounds far-fetched, consider the BIQ (Bio Intelligent Quotient) Building in Hamburg. Its windows are filled with water in which live algae that’s fed nutrients. When the sun comes out, the micro-organisms reproduce, raising the temperature of the water. BIQ residents say they love their new digs. It helps that they have no heating bills.

Armstrong then described how objects can be made of plastic dredged from the oceans. It could, she suggested, be a new source of material as well as a way to clean degraded waterways. Her basic desire is to make machinery more biological and unravel the machinery behind the biological. That means figuring out how bacteria talks to bacteria, how algae “communicate.” This isn’t new, of course, but this fusion draws closer all the time.

As that happens, she argues, “consumers can become producers.” In the meantime, the search for “evidence-based truth-seeking systems” continues.

Armstrong, who began her professional life as a doctor, credits her interest in architecture to the time she spent at a leper colony in India in the early ’90s. “What I saw was a different way of life,” she recalls. “I realized we need a more integrated way of being and living so we are at one with our surroundings.”•

Tags: ,

In a Foreign Affairs essay, Martin Wolf has a retort for techno-optimists, contending that wearables are merely the emperor’s new clothes. One of his arguments I’m curious about concerns the statistical evidence that growth in output per worker has slowed in recent decades. How, exactly, does automation fit into that equation? Technology would seem to improve worker productivity only if it’s complementing workers, not replacing them. I do think Wolf makes a great case that “unmeasured value” was a big part of life long before the Internet. The phonograph’s value, after all, could no more be fully measured than the iPod’s. An excerpt:

…the pace of economic and social transformation has slowed in recent decades, not accelerated. This is most clearly shown in the rate of growth of output per worker. The economist Robert Gordon, doyen of the skeptics, has noted that the average growth of U.S. output per worker was 2.3 percent a year between 1891 and 1972. Thereafter, it only matched that rate briefly, between 1996 and 2004. It was just 1.4 percent a year between 1972 and 1996 and 1.3 percent between 2004 and 2012.

On the basis of these data, the age of rapid productivity growth in the world’s frontier economy is firmly in the past, with only a brief upward blip when the Internet, e-mail, and e-commerce made their initial impact.

Those whom Gordon calls “techno-optimists”—Erik Brynjolfsson and Andrew McAfee of the Massachusetts Institute of Technology, for example—respond that the GDP statistics omit the enormous unmeasured value provided by the free entertainment and information available on the Internet. They emphasize the plethora of cheap or free services (Skype, Wikipedia), the scale of do-it-yourself entertainment (Facebook), and the failure to account fully for all the new products and services. Techno-optimists point out that before June 2007, an iPhone was out of reach for even the richest man on earth. Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indexes. Moreover, say the techno-optimists, the “consumer surplus” in digital products and services—the difference between the price and the value to consumers—is huge. Finally, they argue, measures of GDP underestimate investment in intangible assets.

These points are correct. But they are nothing new: all of this has repeatedly been true since the nineteenth century. Indeed, past innovations generated vastly greater unmeasured value than the relatively trivial innovations of today. Just consider the shift from a world without telephones to one with them, or from a world of oil lamps to one with electric light. Next to that, who cares about Facebook or the iPad? Indeed, who really cares about the Internet when one considers clean water and flushing toilets?•
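
To get a feel for what those growth-rate differences amount to cumulatively, here is a minimal sketch that compounds Gordon’s figures over their periods (the rates and dates are the ones quoted above; the compounding and the rule-of-70 doubling times are my own arithmetic):

```python
# Compound each period's growth rate in U.S. output per worker.
periods = [
    ("1891-1972", 0.023),
    ("1972-1996", 0.014),
    ("1996-2004", 0.023),  # the brief Internet-era blip
    ("2004-2012", 0.013),
]

for label, rate in periods:
    start, end = (int(y) for y in label.split("-"))
    years = end - start
    growth = (1 + rate) ** years - 1   # total cumulative growth
    doubling = 70 / (rate * 100)       # rule of 70: years to double
    print(f"{label}: {rate:.1%}/yr over {years} yrs -> +{growth:.0%} "
          f"(doubles every ~{doubling:.0f} yrs)")
```

The gap is starker than the raw rates suggest: at 2.3 percent a year, output per worker doubles roughly every thirty years, while at 1.3 percent it takes more than fifty.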

Tags:
