
In what’s an otherwise very good Fast Company article about autonomous cars, Charlie Sorrel conveniently elides one really important fact: not all the kinks have yet been worked out of the driverless experience. While Google has done extensive testing on the vehicles, inclement weather still poses a challenge for them, and visual-recognition systems need further enhancement. So, yes, legislation and entrenched human behaviors are significant barriers to be overcome, but the machines themselves continue to need fine-tuning.

Still, it’s an interesting article, especially the section about the nature of future cities that await us should we perfect and accept this new normal. An excerpt:

Famously, Google’s self-driving cars have clocked up 1.7 million miles over six years, all without major incident.

“In more than a million miles of real-world testing, autonomous vehicles have been involved in around a dozen crashes (with no major injuries),” says John Nielsen, AAA’s Managing Director of Automotive Engineering and Repair, “all of which occurred when a human driver was in control, or the vehicle was struck by another car.”

Self-driving cars are already way better than people-piloted cars, so what’s the trouble?

“Current laws never envisioned a vehicle that can drive itself, and there are numerous liability issues that need to be ironed out,” Nielsen says. “If an autonomous vehicle gets in a collision, who is responsible? The ‘driver,’ their insurance company, the automaker that built the vehicle, or the third-party supplier that provided the autonomous control systems?”

How will the laws adapt? And how will we adapt? People are hesitant to embrace change, but the change that driverless cars will bring to our cities and lifestyles is enormous. What will it take to get there?•


Jerry Kaplan, author of Humans Need Not Apply, thinks technology may make warfare safer (well, relatively). Perhaps, but that’s not the goal of all combatants. He uses the landmine as an example, arguing that a “smarter” explosive could be made to only detonate if enemy military happened across it. But any nation or rogue state using landmines does so precisely because of the terror that transcends the usual rules of engagement. They would want to use new tools to escalate that threat. The internationally sanctioned standards Kaplan hopes we attain will likely never be truly universal. As the implements of war grow cheaper, smaller and more out of control, that issue becomes more ominous.

In theory, robotized weapons could make war less lethal or far more so, but that will depend on the intentions of the users, and both scenarios will probably play out. 

From Kaplan in the New York Times:

Consider the lowly land mine. Those horrific and indiscriminate weapons detonate when stepped on, causing injury, death or damage to anyone or anything that happens upon them. They make a simple-minded “decision” whether to detonate by sensing their environment — and often continue to do so, long after the fighting has stopped.

Now imagine such a weapon enhanced by an A.I. technology less sophisticated than what is found in most smartphones. An inexpensive camera, in conjunction with other sensors, could discriminate among adults, children and animals; observe whether a person in its vicinity is wearing a uniform or carrying a weapon; or target only military vehicles, instead of civilian cars.

This would be a substantial improvement over the current state of the art, yet such a device would qualify as an offensive autonomous weapon of the sort the open letter proposes to ban.

Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.•



In between GE’s clumsy 1968 Pedipulator, an elephant-esque walking truck, and Boston Dynamics’ stunningly agile Big Dog and Cheetah, biomimetics went through plenty of growing pains. It’s a smart concept: Examine how land and marine creatures overcome obstacles and ape it with AI. Easier said than done, though. In “They’re Robots? Those Beasts!” a 2004 New York Times article, Scott Kirsner profiled Northeastern University’s Joseph Ayers and other roboticists exploring nature for inspiration. The opening:

JOSEPH AYERS was crouched over a laptop in a cool cinder block shed barely big enough to house a ride-on lawn mower, watching a boxy-shelled black lobster through a rectangular acrylic window.

Dr. Ayers’s shed is adjacent to a fiberglass saltwater tank that looks like a big above-ground swimming pool, and through the window, he observed as the seven-pound lobster clambered across the sandy bottom and struggled to surmount small rocks.

“He’s pitched backwards onto his tail, and his front legs aren’t really touching the ground,” said Dr. Ayers, a professor of biology at Northeastern University in Boston, sounding vexed.

A few minutes later, Dr. Ayers noticed a screw missing from one of the trio of legs extending from the right side of the lobster’s abdomen. Were this lobster not made of industrial-strength plastic, metal alloys and a nickel metal hydride battery, Dr. Ayers — the author of several lobster cookbooks, including “Dr. Ayers Cooks With Cognac” — seemed frustrated enough to drop the robotic lobster into a boiling pot of water and serve it up for dinner.

Dr. Ayers was at his university’s Marine Science Center on the peninsula of Nahant, which pokes out into Massachusetts Bay. He was trying to get his robotic lobster ready for a demonstration in late September for the military branch that funds his work, the Office of Naval Research. By then, he hopes to have the lobster using its two claws as bump sensors.

“When it walks into a rock,” he explained, “it’ll be able to decide whether to go over it or around it, depending on the size of the rock.”

Dr. Ayers is one of a handful of robotics researchers who regard animals as their muses.•



The Nobel laureate physicist Frank Wilczek, author of A Beautiful Question, thinks CERN may soon go a long way beyond the discovery of the Higgs boson and prove supersymmetry. In a Spiegel Q&A conducted by Johann Grolle, the scientist also explains what the consistency of natural laws says to him:


Spiegel:

Are you astonished that nature obeys laws that we humans are able to understand?

Frank Wilczek:

This fact has deep meaning, and is not at all guaranteed. As a thought experiment, let us assume that the whole world is just a simulation on a gigantic supercomputer, where we are also just part of this simulation. So, roughly speaking, we are talking about a world in which Super Mario thinks that his Super Mario world is real. The laws in such a world wouldn’t necessarily be beautiful or symmetric. They would be whatever the programmer put in there, which means these laws could be arbitrary, they could suddenly change or be different from place to place. And there would be no simpler description of these laws than a very long computer program. Such a world is logically possible, but our world is different. It is a glorious fact that in our world, when we go really deep, we can understand it.•


In a Wall Street Journal article, Christopher Mims writes that killer robots aren’t inevitable, spoiling it for everyone. I mean, we need to be obliterated by really smart robots, the sooner, the better. Please.

Mims is right, of course, that banning research on Strong AI is the wrong tack to take to ensure our future. This work is going to go ahead one way or another, so why not proceed, but with caution? He also points out that many of the scientists and technologists signing the Open Letter on Artificial Intelligence are engaged in creating AI of all sorts.

An excerpt about the bad news:

Imagine the following scenario: It’s 2025, and self-driving cars are widely available. Turning such a vehicle into a bomb isn’t much harder than it is to accomplish the same thing with a conventional vehicle today. And the same goes for drones of every scale and description.

It’s inevitable, say the experts I talked to, that nonstate actors and rogue states will create killer robots once the underpinnings of this technology become cheap and accessible, thanks to its commercial use.

“I look back 10 years, and who would have thought people would be using cellphone technology to detonate IEDs?” says retired Rear Admiral Matthew Klunder, who as chief of research spent four years heading up the Navy’s work on autonomous systems.

And what about killing machines driven by artificial intelligence, which could learn to make decisions themselves? That fear recently bubbled to the surface in an open letter signed by the likes of Elon Musk and Stephen Hawking. The letter warned that an arms race was “virtually inevitable” between major powers if they continue to develop these kinds of weapons.•



It’s a very big if, but if Tesla has an autonomous electric-car service on the roads by 2025, as Morgan Stanley analyst Adam Jonas predicts, well, that would change everything. No one, though, can predict precisely what it would mean, except that it likely would be bad for labor. Still, you have to bet it will take much longer to build such a global, robotic fleet.

From CNN Money:

Jonas believes that within the next 18 months, Tesla will share plans for an app-based, on-demand “mobility service.” Commercial introduction to this Uber-like service could occur in 2018, with the Model 3 serving as the backbone.

The first version of this service would be human-driven, just like today’s other ride hailing services. But then Tesla could move to a model where robots do virtually all the work even though real people sit in the driver’s seat just in case it’s required.

Jonas predicted Tesla could transition to a fully autonomous service by 2025, and that it would have nearly 600,000 cars in its global fleet — or roughly the same size as Hertz today.

“The holy grail of shared mobility is replacing the mistake-prone, fatigued and expensive human driver with a robot that drives with greater accuracy and precision,” Jonas wrote.•


If gene editing were used to keep animals from wanting to harm one another–no more predators, no more prey–you think there might be a few unintended consequences? Some, right? David Pearce, a philosopher and Transhumanist, wants to engineer all suffering out of existence, from the ecosystem to the human brain. Given enough time, I suppose anything is possible. Excerpts follow from two interviews with Pearce.


The opening of a 2014 io9 Q&A by George Dvorsky:


io9:

The idea of re-engineering the ecosystem such that it’s free from suffering is a radically ambitious project — one that’s been referred to as the “well intentioned lunacy” of a futurist. That said, it’s an idea rooted in history. From where do you draw your ideas and moral philosophy?

David Pearce:

Sentient beings shouldn’t harm each other. This utopian-sounding vision is ancient. Gautama Buddha said “May all that have life be delivered from suffering”. The Bible prophesies that the wolf and the lion shall lie down with the lamb. Today, Jains sweep the ground in front of their feet rather than unwittingly tread on an insect.

My own conceptual framework and ethics are secular — more Bentham than Buddha. I think we should use biotechnology to rewrite our genetic source code; recalibrate the hedonic treadmill; shut down factory farms and slaughterhouses; and systematically help sentient beings rather than harm them.

However, there is an obvious problem. On the face of it, the idea of a pain-free biosphere is ecologically illiterate. Secular and religious utopians tend to ignore the biology of obligate carnivores and the thermodynamics of a food chain. Feed a population of starving herbivores in winter and we’d trigger a population explosion and ecological collapse. Help carnivorous predators and we’d just cause more death and suffering to the herbivores they prey on. Richard Dawkins puts the bioconservative case quite bluntly: “It must be so.” Fortunately, this isn’t the case.•


From a 2007 interview by Ingo Niermann of the German edition of Vanity Fair:

Vanity Fair:

You claim that it is possible to eradicate all suffering on earth, whether physical or mental. When?

David Pearce: 

It will technically be possible to get rid of all suffering within a century or two. Its abolition would be practical only if it were agreed on in the sense of something like the moon program or the human genome project – if there was a degree of social consensus. There are certainly technological obstacles, but they are dwarfed by the ethical-ideological ones. Many people’s negative reaction to the idea of a world without suffering comes from a fear that someone is going to be manipulating and controlling them. Partly, too, the abolition of suffering seems to make a mockery of one’s life projects. Most of us spend the greater part of our lives seeking happiness for ourselves and others we care about. But we do so in extremely inefficient and in many cases self-defeating ways. This is a problem with existing human society. Even though we have made extraordinary progress technologically and medically, we aren’t any happier than our ancestors. Even if we could arrange society in the most utopian way imaginable, there would be some people who would still be depressed and anxious. There would be some people who would be consumed by jealousy or unhappy love affairs. No amount of environmental reform or manipulation is going to get rid of suffering. Only biotechnology can eradicate its neural substrates.

Vanity Fair: 

Statistics say that on average people in Bangladesh are happier than in the Western world.

David Pearce: 

In Bangladesh, if you lose a child through malnourishment or disease it’s absolutely dreadful, just as it is if you lose a child here. But yes, statistically the hedonic set-point around which our lives fluctuate is pretty similar whether you live in London, Berlin or Bangladesh. If someone offers you a million dollars, for instance, you get a quick boost in the same way that (to use a more extreme example) crack-addicts do. Even though crack-addicts know that the drug is going to make them awfully miserable in the long-term, they still strive for their next hit. Here in the rich West, we know money won’t make us happy, but we strive for it compulsively.

If you take suffering seriously, the only way to eradicate it is by biological reprogramming. In the short run, this may involve superior designer drugs. In the long run, the only realistic way to abolish suffering is through genetic engineering.

Vanity Fair: 

There would be a very simple method to make all people happy straight away: by putting electrodes in their pleasure centres.

David Pearce: 

Wireheading is offensive to human dignity, to our conception of who we are. The real value of wireheading is that it serves as an existence-proof for people who are sceptical that it is possible to be extremely happy indefinitely. Wireheading shows there is no tolerance to pure pleasure. The normal process of inhibitory feedback doesn’t seem to kick in. We don’t understand why this is the case. When we do, it will be a very important discovery.

Vanity Fair: 

The anaesthetist Stuart Meloy discovered accidentally that by putting an electrode in a certain area of the spinal cord a woman could experience endless orgasms. But he had a hard time finding enough people volunteering for a trial.

David Pearce: 

I can’t see wireheading as an evolutionary stable solution. Wireheads will not want to have children, or want to look after their children.

Vanity Fair: 

But what is your idea of paradise engineering? What should an ever-happy life be like?

David Pearce: 

It is not a uniform happiness but a world with a motivational system based entirely on gradients of well-being. Think of your ideal fantasy. With the right biological substrates, the reality could be millions of times better.•


Like many in postwar America, Ray Kroc found it rather easy to make money. It’s different today for the franchise, struggling in a much more competitive global economy. The typical McDonald’s restaurant has half the staff it did 50 years ago, and there’s a chance that number could go much lower, owing to automation.

How much of the human element can be sacrificed from the hospitality industry (restaurants, hotels, etc.)? Probably a good deal, enough to hollow out staffs peopled by low-skill workers as well as novices and retirees. The push for a national $15 minimum wage (which workers dearly need) has some wondering if the process will be hastened.

From Lydia DePillis at the Washington Post:

Of course, it’s possible to imagine all kinds of dramatic productivity enhancements. Persona Pizzeria’s [Harold] Miller predicts that drone delivery systems will eventually get rid of the need to come into a restaurant at all, for example. [Middleby Corp COO Dave] Brewer has a bold prediction: He thinks that all the automation working its way into restaurants could eventually cut staffing levels in half. The remaining employees would just need to learn how to operate the machines and fix things when they break.

“You don’t want a $15-an-hour person doing something that the person who makes $7 an hour can do,” Brewer said. “It’s not downgrading the employees. It’s that the employees become managers of a bunch of different systems. They’ll become smarter and smarter.”

The value of a human touch

Not everybody, however, agrees that machines could make that much of a dent in labor costs. Implementing new systems is expensive, and mistakes can be devastating. And for some concepts, it’s possible that the presence of employees is actually a restaurant’s competitive advantage. Compared with grocery stores and gas stations, many people come to restaurants exactly because they want some human interaction.•


An industrial video from 50 years ago about AMF, the company that brought automation and computers to bowling, as it tried to make fast food even more inhuman.


In 1999, Michael Crichton played what he knew to be a fool’s game and predicted the future. He was not so successful when it came to culture. Things he got wrong: printed matter would be unchanged, movies would soon be dead, communications would be consolidated into fewer hands. Well, he did foresee YouTube.

Crichton, who was fascinated by science and often accused of being anti-science, commenting in a 1997 Playboy interview on technology creating moral quandaries we’re not prepared for:

I think we’re a long way from cloning people. But I am worried about scientific advances without consideration of their consequences. The history of medicine in my lifetime is one of technological advances that outstrip our ethical systems. We’ve never caught up. When I was in medical school—30-odd years ago—people were struggling to deal with mechanical-respiration systems. They were keeping alive people who a few years earlier would have died of natural causes. Suddenly people weren’t going to die of natural causes. They were either going to get on these machines and never get off or—or what? Were we going to turn the machines off? We had the machines well before we started the debate. Doctors were speaking quietly among themselves with a kind of resentment toward these machines. On the one hand, if somebody had a temporary disability, the machines could help get them over the hump. For accident victims—some of whom were very young—who could be saved if they pulled through the initial crisis, the technology saved lives. You could get them over the hump and then they would recover, and that was terrific.

But on the other hand, there was a category of people who were on their way out but could be kept alive. Before the machine, ‘pulling the plug’ actually meant opening the window too wide one night, and the patient would get pneumonia and die. That wasn’t going to happen now. We were being forced by technology to make decisions about the right to die—whether it’s a legal or religious issue—and many related matters. Some of them contradict longstanding ideas in an ethically protected world; we weren’t being forced to make hard decisions, because those decisions were being made for us—in this case, by the pneumococcus.

This is just one example of an ethical issue raised by technology. Cloning is another. If you’re knowledgeable about biotechnology, it’s possible to think of some terrifying scenarios. I don’t even like to discuss them. I know people doing biotechnology research who have decided not to pursue avenues of research because they think they’re too dangerous. But we go forward without sorting out the issues. I don’t believe that everything new is necessarily better. We go forward with the technology while the ethical issues are still up in the air, whether it’s the genetic variability of crop streams, which is a resource in times of plant plagues, to the assumption that we all have to be connected all the time. The technology is here so you must use it. Do you? Do you have to have your cell phone and your e-mail address and your Internet hookup? I was just on holiday in Scotland without e-mail. I had to notify people that I wouldn’t be checking my e-mail, because there’s an assumption that if I send you an e-mail, you’ll get it. Well, I won’t get it. I’m not plugged in, guys. Some people are horrified: “You’ve gone offline?” People feel so enslaved by technology that they will stop having sex to answer the telephone. What could be so important? Who’s calling, and who cares?•

I can’t find a transcript of the recent address by NASA’s Parimal Kopardekar at an unmanned aerial systems conference at the Ames Research Center, but there’s some coverage of it by Elizabeth Weise. The aviation expert thinks we’ll all soon–very soon–have a drone to do our bidding, conducting research and running errands. Of course, once they’re ubiquitous, it will be easy to introduce mayhem into the system, easier than it is with the traditional postal system. That’s something we’ll have to work on.

Weise’s opening:

Forget getting the latest, greatest cell phone. The next indispensable tech tool may be a drone of your own. And daily life may never be the same.

“I see a time when every home will have a drone. You’re going to use a drone to do rooftop inspections. You’re going to be able to send a drone to Home Depot to get a screw driver,” said Parimal Kopardekar, manager of Nasa’s Safe Autonomous System Operations Project at Ames Research Center in Mountain View, California.

And this won’t happen in some long-distant future. “This is in five or 10 years,” Kopardekar said.

Kopardekar gave a keynote talk at a conference on Unmanned Aerial Systems Traffic Management hosted by Nasa and the Silicon Valley Chapter of the Association of Unmanned Vehicle Systems International last week.

“We can completely transform aviation. Quickly,” said Dave Vos, lead of Google’s secretive Project Wing, which is working with Nasa – as are some 100 other companies – on an air traffic control system for small, low-altitude drones.

An effective air traffic system – needed to keep the skies under 500 feet from turning into a demolition derby – will play a major role in turning drones from a plaything into an engine of the economy, one affecting package delivery, agriculture, hazardous waste oversight and more.•


Things deemed inconvenient if you are employed at Amazon: getting cancer, having a relative get cancer, miscarriages. If you are “selfish” enough to engage in these activities, you’ll be put on notice and likely reduced to tears. Jeff Bezos’ gigantic company has long been reported to be a ridiculously bruising and demanding workplace only a sociopath could love, a place that attracts the highest achievers and routinely lays them low.

Tremendous job by Jodi Kantor and David Streitfeld of the New York Times for the deepest profile yet of a company that’s the envy of the business world and a pretty horrible place to work. How can Amazon get away with such practices, a seeming social experiment that preys on workers psychologically? “Unfairness is not illegal,” is the way one lawyer in the piece puts it. The question is whether some of the tools used to quantify employees at the online retail behemoth will become common. Probably.

An excerpt about Elizabeth Willet, a former Army captain who discovered a new kind of combat during her brief employment at Amazon:

Ms. Willet’s co-workers strafed her through the Anytime Feedback Tool, the widget in the company directory that allows employees to send praise or criticism about colleagues to management. (While bosses know who sends the comments, their identities are not typically shared with the subjects of the remarks.) Because team members are ranked, and those at the bottom eliminated every year, it is in everyone’s interest to outperform everyone else.

Craig Berman, an Amazon spokesman, said the tool was just another way to provide feedback, like sending an email or walking into a manager’s office. Most comments, he said, are positive.

However, many workers called it a river of intrigue and scheming. They described making quiet pacts with colleagues to bury the same person at once, or to praise one another lavishly. Many others, along with Ms. Willet, described feeling sabotaged by negative comments from unidentified colleagues with whom they could not argue. In some cases, the criticism was copied directly into their performance reviews — a move that Amy Michaels, the former Kindle manager, said that colleagues called “the full paste.”

Soon the tool, or something close, may be found in many more offices. Workday, a human resources software company, makes a similar product called Collaborative Anytime Feedback that promises to turn the annual performance review into a daily event. One of the early backers of Workday was Jeff Bezos, in one of his many investments. (He also owns The Washington Post.)

The rivalries at Amazon extend beyond behind-the-back comments. Employees say that the Bezos ideal, a meritocracy in which people and ideas compete and the best win, where co-workers challenge one another “even when doing so is uncomfortable or exhausting,” as the leadership principles note, has turned into a world of frequent combat.•



Anyone who’s studied Silicon Valley for about five minutes knows that community’s shocking success is a hybrid of public-private investment, not just some free-market dream realized. Before the Y Combinator, there’s often an X factor, namely a government incubator like DARPA, which births and nurtures ideas until they can crawl into the arms of loving venture capitalists. The Internet, of course, is the most obvious example. Even the transistor itself sprang from Bell Labs, which was essentially a government-sanctioned monopoly.

The economist Mariana Mazzucato hasn’t been shy about shooting down the excesses of the sector’s mythologizing, which boasts that brilliant upstarts with startups simply think (ideate!) their way into billions. Not quite. These lone creators don’t only lack the funds to develop an Internet or transistor; Mazzucato doesn’t believe they have the time or stomachs for such risks, either. The market demands corporations opt for safer short-term gain or the shareholders will revolt. (Look at the blowback Google’s received for its moonshot investments, perhaps one reason it reorganized itself into Alphabet this week.) The companies aren’t, then, caged lions held back by regulation, but, as Mazzucato sees it, usually kittens unable to roar on their own.

From John Thornhill at the Financial Times:

Even Silicon Valley’s much-fabled tech entrepreneurs are not as smart as they like to think. Although Mazzucato lavishes praise on the entrepreneurial genius of the likes of Steve Jobs and Elon Musk, she says their brilliance tells only part of the story. Many of the key technologies used by Apple were first developed by public-sector agencies. Most of the key technologies that do the clever stuff inside your iPhone — including its geo-positioning system, the Siri voice-recognition service and multi-touch screen — were the offspring of state-funded research. “Government has invested in basic research, it has invested in applied research, it has invested in concrete companies [such as Tesla] all the way downstream, doing what venture capital should be doing if it was really playing the role it says it plays,” she says. “It is an incredibly active, mission-oriented role.”

One of the original engines of Silicon Valley’s creativity, she argues, was the Defense Advanced Research Projects Agency (Darpa), founded by President Dwight Eisenhower in 1958 following the alarm caused by the Soviet Union’s launch of the Sputnik rocket. Darpa, run by the US Department of Defense, has since pumped billions of dollars into cutting-edge research and was instrumental in developing the internet. According to Mazzucato, the publicly funded National Institutes of Health has played a similar role in nurturing the US pharmaceuticals industry. The Advanced Research Projects Agency-Energy (Arpa-E), set up by President Barack Obama and run by the US Department of Energy, is designed to stimulate green technology.

Mazzucato points to the critical role played by government agencies in other economies, such as China, Brazil, Germany, Denmark, and Israel, where the state is not just acting as a market regulator, it is actively creating and shaping markets. For instance, the Yozma programme in Israel that provided the funding and expertise to create the so-called “start-up nation”. “My whole point to business is, ‘Hello, if you want to make profits in the future, you had better understand where the profits are coming from’. This is a pro-business story. This is not about socialism,” she says.

Her arguments stray into more radical territory as we discuss how the fruits of this technological innovation should be distributed. If you accept that the state is part responsible for the success of many private sector enterprises, she says, should it not share in more of their economic gains?•



As we witnessed with horror in Ferguson, the tools we create to fight wars overseas find their way back to the home front, free markets taking over where DARPA and other Defense departments trail off. Beyond guns and drones, surveillance equipment is the latest boomerang returning, and there are few rules in place to moderate its use, the technology, as usual, outstripping legislation.

From Timothy Williams at the New York Times:

SAN DIEGO — Facial recognition software, which American military and intelligence agencies used for years in Iraq and Afghanistan to identify potential terrorists, is being eagerly adopted by dozens of police departments around the country to pursue drug dealers, prostitutes and other conventional criminal suspects. But because it is being used with few guidelines and with little oversight or public disclosure, it is raising questions of privacy and concerns about potential misuse.

Law enforcement officers say the technology is much faster than fingerprinting at identifying suspects, although it is unclear how much it is helping the police make arrests.

When Aaron Harvey was stopped by the police here in 2013 while driving near his grandmother’s house, an officer not only searched his car, he said, but also took his photograph and ran it through the software to try to confirm his identity and determine whether he had a criminal record.•


Hugo Gernsback may have been America’s first professional futurist, and while he wasn’t always right, he was always interesting. Gernsback invented the first home radio kits right after the turn of the twentieth century and sold his gadgets by mail order from his Brooklyn offices. He loved science fiction as much as science–saw them as complements, really–and published some of the earliest examples of the form in his publications, including Amazing Stories. He coined the term “television,” and when he wasn’t explaining the concept to 1920s newbies, he was conducting early broadcasts, an expensive endeavor that helped bankrupt him.

Just four years before his death, Gernsback was profiled in the July 26, 1963 issue of Life as “Barnum of the Space Age,” which reported his prophecies for the future. The opening:

Science is now so big, so flamboyant and so barnacled with politicians, press agents, generals and industrialists that Hugo Gernsback, who invented it back in 1908 (and has re-invented it, annually, since) can scarcely make himself heard above the babble of the late-comers. Although he is now 78, Gernsback is still a man of remarkable energy who raps out forecasts of future scientific wonders with the rapidity of a disintegrator gun. He believes that millions will eventually wear television eyeglasses–and has begun work on a model to speed the day. “Instant newspapers” will be printed in U.S. homes by electromagnetic waves, in his opinion, as soon as U.S. publishers wrench themselves out of the pit of stagnant thinking in which Gernsback feels they are wallowing at present. He also believes in the inevitability of teleportation–i.e., reproducing a ham sandwich at a distance by electronic means, much as images are now reproduced on a television screen. Gernsback pays absolutely no attention, while issuing such pronunciamentos, to the fact that the public is rapidly becoming inured to scientific advance and that scientists themselves may not actually stand in need of his advice and counsel. He paid as little attention to the head-tapping some of his announcements set off in the 1920s–a period in which he was often considered nuttier than Albert Einstein himself.

Gernsback, in fact, has felt himself impelled to preach the gospel of science ever since his youth in Luxembourg–not so much, apparently, for the good of science as for his own satisfaction and the delights of seeing his name in the papers. In 55 years as a self-appointed missionary, he has stiffly ignored both the cackling of the heathen and the cries of competing apostles. Moreover, as founder, owner, and guiding spirit of Gernsback Publications, Inc., a New York-based publishing enterprise which has produced a succession of scientific and technical books and magazines (among them Amazing Stories, the first science-fiction monthly), he has not only provided himself with a method of firing endless barrages of opinion, criticism and augury but the means of making a good deal of money as well.

Neither Gernsback’s instinct for the unorthodox, however, nor his unabashed sense of theater has prevented his full acceptance as a member of the science community. Dozens of today’s top scientists were attracted to their calling by reading his magazines as boys, and a good many–including Dr. Donald H. Menzel, director of the Harvard Observatory–earned money for college tuition by writing for them. He is heralded as the “Father” of modern science fiction (the statuettes which are annually awarded to its top writers are, in his honor, known as Hugos), but he is simultaneously a member of the American Physical Society and a lecturer before similar learned groups. The greatest inventors and scientists of the early 20th Century–among them Marconi, Edison, Tesla, Goddard, DeForest and Oberth–corresponded freely with him and came, in many cases, to admire and confide in him as well. The Space Age caused no diminution of this cozy relationship with the great; RCA’s General David Sarnoff is among his friends and pen pals, and so are former Atomic Energy Commissioner Lewis L. Strauss and President Kennedy’s science adviser, Dr. Jerome Wiesner.

This admiration is solidly based. Gernsback, in his unique career, has not only done his best to prepare the public mind for the “wonders” of science but has sometimes managed to tell science itself just what wonders it was about to produce. For instance, he conceived the essential principles of radar aircraft detection in 1911–a year when the airplane itself was barely able to stagger off the ground. This early concept was so complete that Sir Robert Watson-Watt, whose radar tracking devices helped save London in the Battle of Britain, considers him the original inventor.

Gernsback not only coined the word “television” (he refuses to accept credit for that since he has discovered a Frenchman used an equivalent of the word a little earlier) but in 1928, as owner of New York’s radio station WRNY, actually instituted daily telecasts with crude equipment. His list of successful scientific prophecies is almost endless and the perspicacity with which he has reported scientific thinking on the part of others is remarkable. In the 1920s, to make the point, he was force-feeding his readers all sorts of crazy stuff about atomic energy and about the problems of weightlessness and orbital rendezvous to be encountered in “space flying.”

It is, therefore, difficult not to believe that U.S. science has been influenced in many ways as a result of Gernsback’s extraordinary career in evangelism…•

The future usually arrives gradually, even frustratingly slowly, often wearing the clothes of the past, but what if it got here today or soon thereafter?

The benefits of profound technologies rushing headlong at us would be amazing and amazingly challenging. Gill Pratt, who oversaw the DARPA Robotics Challenge, wonders in a new Journal of Economic Perspectives essay whether the field is about to have a wild growth spurt, a synthetic analog to the biological eruption of the Cambrian Period. He thinks that once the “generalizable knowledge representation problem” is addressed, no easy feat, the field will speed forward. The opening:

About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, with one of the most provocative being the evolution of vision, which allowed animals to dramatically increase their ability to hunt and find mates (for discussion, see Parker 2003). Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.

How soon might a Cambrian Explosion of robotics occur? It is hard to tell. Some say we should consider the history of computer chess, where brute force search and heuristic algorithms can now beat the best human player yet no chess-playing program inherently knows how to handle even a simple adjacent problem, like how to win at a straightforward game like tic-tac-toe (Brooks 2015). In this view, specialized robots will improve at performing well-defined tasks, but in the real world, there are far more problems yet to be solved than ways presently known to solve them.

But unlike computer chess programs, where the rules of chess are built in, today’s Deep Learning algorithms use general learning techniques with little domain-specific structure. They have been applied to a range of perception problems, like speech recognition and now vision. It is reasonable to assume that robots will in the not-too-distant future be able to perform any associative memory problem at human levels, even those with high-dimensional inputs, with the use of Deep Learning algorithms. Furthermore, unlike computer chess, where improvements have occurred at a gradual and expected rate, the very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.•


It amazes me that California’s water shortage seems to be viewed in this country as a regional problem for them, when it’s clearly a grave concern for us. As farmers in that state search deeper and deeper for the scarce liquid, hoping to stave off personal disaster, we all near a collective one. If California dying of thirst isn’t a national emergency, I don’t know what is. Globally, the water crisis may be the most serious threat to world peace. From the Spiegel report “World Without Water“:

“Water is the primary principle of all things,” the philosopher Thales of Miletus wrote in the 6th century BC. More than two-and-a-half thousand years later, on July 28, 2010, the United Nations felt it was necessary to define access to water as a human right. It was an act of desperation. On none of its other millennium goals has the UN fallen so clearly short as on the goal of cutting the number of people without this access in half by 2015.

The question is whether water is public property and a human right. Or is it ultimately a commodity, a consumer good and a financial investment?

The world’s business leaders and decision makers gathered at the annual meeting in snow-covered Davos, Switzerland in January to discuss the most pressing issues of the day. One of the questions was: What is the greatest social and economic risk of the coming decade? The selection of answers consisted of 28 risks, including wars, weapons of mass destruction and epidemics. The answer chosen by the world’s economic elite was: water crises.

Consumers have recognized for years that we need to reduce our consumption of petroleum. But very few people think about water as being scarce, even though it’s the resource of the future, more valuable than oil because it is irreplaceable. It also happens to be the source of all life.•


A few days ago, I posted an excerpt from a New York Times op-ed written by Peter Georgescu, the Young & Rubicam chairman emeritus, who believes wealth inequality must be remedied by corporations (not particularly likely) or we’ll have social uprisings and ginormous tax increases. Well, something’s got to give.

The essay touched a nerve, leading to a raft of Facebook questions directed at the writer. He answered some of them for the Times. Unfortunately, none addresses the possibility that automation will add to the short- and medium-term woes through technological unemployment.

One exchange about what the questioner and Georgescu see as the precarious position of contemporary capitalism:


Question:

A quick prelude is that I fear that our capitalist model is in danger. In the early days of capitalism (here in the US and elsewhere) companies were mostly family owned and run even for generations. Now we have the board, stockholders and CEO model, which appears very flawed. The stockholders often are just looking for short term gain, the board has no real ties to or ‘skin’ in the company, and the CEO is often colluding with the stockholders for short term gain.

After that long-winded lead in, do you share those fears? Any thoughts on improving the current public corporate model? How about the German system of requiring public corporations to have a union representative on the board?

Peter Georgescu

I fear for the future of capitalism in our country and around the world. Capitalism really means free enterprise. The name came from the resource that once drove the free-market engine. Capital no longer plays that prominent role. Creativity and innovation drive global business today. Capital is just one resource, important, but no longer the major differentiator. Historically, this so-called capitalist free-enterprise engine achieved extraordinary results. It propelled America into the superpower that it is today. It lifted hundreds of millions of people from deep poverty to a more humane standard of living. (Think China, India, Brazil, countries in Africa and more.)

But that extraordinary engine has been hijacked by a rogue philosophy that says that shareholders’ interests come first and which threatens to destroy both this magnificent engine and our very way of life. The misguided philosophy says that one of a corporation’s stakeholders, the shareholders, deserves to have their value maximized in the short term. The three other vital stakeholders are not adequately represented at the decision-making table and inadequately compensated. First, the employees — who are the real value creators. They have been turned into a cost to be squeezed. Then, the corporation itself, where investment in R&D and innovation is grossly inadequate. Finally, a business’s customers, who should be a corporation’s prime stakeholder — not the shareholders.

Even the moral justification that the shareholder is the owner and an owner gets what they want when they want it is a myth. In fact the shareholder is a renter at best. They come into stock when they want and leave at their will. And they are of course immune from any corporate liabilities. That’s not ownership. The preponderance of legal opinion is clear. The corporation owns its own assets, not the shareholder.

So yes, we must rebalance a business’s incremental value returns among the key stakeholders — the employees, the shareholders and the corporation itself. And we must always put the customer’s interests first.

If we do that, we can liberate free enterprise from its present-day shackles.



To generate hoopla for the 1950 sci-fi film Destination Moon, the principals of the film, including writer Robert Heinlein, did on-set interviews with KTLA the year before. The author, who makes his entrance near the 12-minute mark, explains that a real space mission needed only money and will, not any new science, to be completed. About 20 years later, he was interviewed as part of Walter Cronkite’s CBS coverage of the actual moon landing.


This is very cool: A 1971 Life magazine report about a Manhattan computer expo in which IBMs wowed visitors by merely playing games of 20 Questions, no chess expertise even necessary. Better yet, the exhibition was curated by Charles Eames, who, along with his wife and business partner, Ray, was as comfortable with computers as he was with furniture. From “A Lively Show with a Robot as the Star,” written by Fortune editor Walter McQuade:

The stroller steps off the sidewalk and into the IBM display room on 57th Street in Manhattan and approaches one of the four shiny input typewriters of an IBM System 360 computer. The game is ’20 Questions.’ The computer ‘thinks up’ one of the 12 stock mystery words, like “duck,” “orange,” “cloud,” “helium,” “knowledge.” The stroller has 20 chances to guess and if, perhaps, the mystery word is “knowledge,” the typical conversation could start like this:

Stroller: “Does it grow?”
Computer: “To answer that question might be misleading.”
Stroller: “Can I eat it? Is it edible?”
Computer: “Only as food for thought.”
Stroller: “Do computers have it?”
Computer: “Strictly speaking, no.”

Twenty Questions is only the pièce de résistance in what is probably the canniest and most successful exhibition on computers ever devised. It should be: its deviser, the protean Charles Eames–poet, architect, painter, mathematician, toymaker, furniture designer and film maker–has had ample exposure at expos. Here, he and his collaborators reach back into the history and prehistory of computers to show how and why calculating machines came about.

Most of the story evolves on a gigantic, 48-foot, three-dimensional wall tapestry. Woven into it are hundreds of souvenirs from 1890 to 1950, the computer’s gestation period. Here are artifacts, documents and photographs, dramatizing six decades of striving, when information began to explode on the world and nobody knew quite what to do with the fallout.

The devices range from “The Millionaire,” one of the first calculators, made of brass, to Elmer Sperry’s gyroscope, to Vannevar Bush’s differential analyzer. Included are the work of such elegant minds as Alan Turing, Wallace Eckert, Norbert Wiener, John von Neumann. Even L. Frank Baum and his “clockwork copper man,” Tik-Tok of Oz, is represented.

The military imperative to handle information quickly is underlined with a Norden bombsight and with ENIAC, an Army ballistics calculator and predecessor of UNIVAC. There are beautifully selected pieces of cultured debris to date it all; election literature in the years each of the Roosevelts ran for President, and one of the big old dollar bills, when they were worth 100 cents. Best of all are the evocations of mental battles fought and sometimes lost. Early in the century an English scientist, Lewis Fry Richardson, devoted many years to developing numerical models in which equations simulated physical systems to predict the weather. He was a dedicated visionary, but his widow wrote, “There came a time of heartbreak when those most interested in his ‘upper air’ research proved to be ‘poison gas’ experts. Lewis stopped his meteorological researches, destroying such as had not been published.”

The wall closes with the birth of the UNIVAC in 1950. Since then the computer has progressed so fast, with computers working their own evolution, that the souvenirs would be just print-out sheets. But Eames demonstrates with models and film displays that if this be witchcraft, there are no witches involved–just the 350,000 full-time programmers (in the U.S. alone) and about two million other nonwitches who operate the machines. In a multiple, rapid-fire slidefilm, they chew gum, scratch themselves, dye their hair and do their work.

And when the stroller, no warlock himself, wanders in off the street with his family (it’s a great show for kids) and confronts the System 360, he is well advised to watch his language and frame his questions well. Eames’ finale to the exhibition can be fairly cheeky. System 360, Model 40, is not above printing out, in response to a muddled thought: “Your grammar has me stumped.”•

Robots may relieve us of much of the work currently monopolizing our time, which sounds great. I mean, life is too short. Unfortunately, the U.S. and many other patches on the globe don’t have economic systems capable of supporting a populace in which near-total employment isn’t the goal. Martin Ford, Andrew McAfee, and Erik Brynjolfsson have written that the future is arriving too quickly and, unlike in the ’50s and ’60s, automation leading to massive technological unemployment is a real possibility.

Add computer scientist and entrepreneur Jerry Kaplan, author of Humans Need Not Apply, to that list. In a lively Ask Me Anything at Reddit, Kaplan lays out his argument that a scary storm is gathering. A few exchanges follow.



Question:

Do you feel like people are too fearful of artificial intelligence?

Jerry Kaplan:

The problem is that they are fearing the wrong thing. The robot apocalypse will be economic, not ‘military’!



Question:

What is the minimum wage of an average robot? How cost-effective are they (R&D+Maintenance+Hydro etc…/X hrs. wk.)?

Jerry Kaplan:

Ha, interesting way to put the question. You don’t “pay” robots, of course. They are simply machines, like any others, so the question is whether the machine can perform some task in an economically advantageous way. This is a simple buy-vs-hire decision in most cases.

In my experience, it’s almost always better to use the machines, if you can afford it. Go forth and automate, my children!



Question:

With machines increasingly taking over manual jobs, do you feel that the workplace will be made up almost entirely of machines, and that people will then become less focused on work and more on leisure?

Jerry Kaplan:

What counts as work has shifted over the past centuries. What we do now would be considered optional “leisure” during the agrarian economy 200 years ago. They would think that our farms are made up almost entirely of machines today, and would wonder why on earth we aren’t living more simply and just enjoying ourselves!

But the desire to work is human nature. I think it’s a myth that most people just want to goof off and have fun … they’d rather work and own a fancier car!



Question:

How far do you think we are away from living in a world with a ton of AI in day-to-day life?

Jerry Kaplan:

You already are, you just don’t realize it. (Read my book and it will really scare you about what’s going on!)

Amazon, for one, is little more than a giant machine learning algorithm that arbitrages purchase and sale transactions. It watches your every move and decides exactly what is necessary to get you to buy. That’s why you see weird changes in the prices of things in your Cart, just for starters.

The ads you see online are another amazing example of how AI crafts things to get you to act in other people’s interests! I detail this in my book, it’s really unbelievable what happens when you load a web page, as AIs research everything about you in milliseconds, then an auction is performed, and the highest bidder gets to show their ad.



Question:

Do you support Basic Income?

What are the machine-replaced workers supposed to do to feed their families?

Jerry Kaplan:

Basic income is a good thing, it will spur innovation. In principle machines make society wealthier — the question is who gets the wealth. We need to ensure that new wealth is distributed more fairly.

Food used to consume more than 50% of the average worker’s income. Now it’s under 10%. That’s real progress!



Question:

Does the amount of money that the military invests in AI scare you or excite you?

Jerry Kaplan:

Well the military invests in AI for two reasons:

(1) To ensure that we have a ‘reserve’ of new technology that can both benefit society and is available in times of military threat.

(2) So we have the biggest bat in the league.

The challenge is now to achieve these two goals without bankrupting society or spurring continual arms races. Unfortunately this doesn’t lend itself to simple sound-bite answers. The military types I talk to (and I do have friends in DARPA, among other places) are not war-mongers at all, quite the contrary they want to try to keep us safe with minimum damage to life and property. We don’t always get this balance right, but it’s a hard (and mostly thankless) job.



Question:

What makes a futurist? Are there specific credentials and methods?

Jerry Kaplan:

Nope – you just have to believe your own nonsense and talk about it persuasively, as if you were on Fox News.

Just get yourself a crystal ball and one of those weird turbans. LSD works well too (or so I hear?).

Seriously, it’s a ball. Give it a try.•


The only thing trickier than predicting future population is interpreting what those people will mean for the world and its resources. From Malthus to Ehrlich, population bombs have defused themselves, even proved beneficial. Down deep, most of us likely think there’s a tipping point, a tragic number, but, of course, the development of new technologies can rework that math, stretching resources to new lengths. And a larger pool of talent makes it easier to create those new tools.

It would seem to make sense that immigrant nations can ride the wave of fluctuations best, not being dependent on internal fertility numbers. Robotics may reduce that advantage, however. Japan is certainly banking on that transformation.

In a Financial Times piece, Robin Harding writes that fertility is in steep decline globally, which suggests world population will eventually level off. If so, the ramifications will be many, including for labor. The opening:

The extent of the plunge in childbearing is startling. Eighty-three countries containing 46 per cent of the world’s population — including every single country in Europe — now have fertility below the replacement rate of about 2.1 births per woman. Another 46 per cent live in countries where the birth rate has fallen sharply. In 48 countries the population will decline between now and 2050.

That leaves just 9 per cent of the world’s population, almost all in Africa, living in nations with pre-industrial fertility rates of five or six children per woman. But even in Africa fertility is starting to dip. In a decade, the UN reckons, there will be just three countries with a fertility rate higher than five: Mali, Niger and Somalia. In three decades, it projects only Niger will be higher than four.

These projections include a fertility bounce in countries such as Germany and Japan. If more fecund nations follow this path of declining birth rates, therefore, a stable future population could quickly be locked in.

That would have enormous consequences for the world economy, geopolitics and the sum of human happiness, illustrated by some of the middle-income countries that have gone through a dramatic, and often ignored, fall in fertility.•



The contemporary Western attitude toward architecture is to protect the jewels, preserve them. Not so in Japan, a nation of people Pico Iyer refers to in a striking T Magazine essay as “pragmatic romantics.” Iyer writes of ancient buildings being regularly replaced by replicas in the same manner that some citizens hire elderly actors to portray deceased grandparents at family functions. It’s just a different mindset. The opening:

EVERY 20 YEARS, the most sacred Shinto site in Japan — the Grand Shrine at Ise — is completely torn down and replaced with a replica, constructed to look as weathered and authentic as the original structure built by an emperor in the seventh century. To many of us in the West, this sounds as sacrilegious as rebuilding the Western Wall tomorrow or hiring a Roman laborer to repaint the Sistine Chapel once a generation. But Japan has a different sense of what’s genuine and what’s not — of the relation of old to new — than we do; if the historic could benefit from a little help from art, or humanity, the reasoning goes, then wouldn’t it be unnatural not to provide it?

The motto guiding Japan’s way of being might be: New is the new old. For proof, you need only look at three recent high-profile and much-debated demolition jobs in Tokyo. The Hotel Okura, an icon of Japanese Modernism built in 1962 to commemorate the country’s arrival in the major leagues of nations as the host of the 1964 Olympics and cherished for its unique and atmospheric lobby, is currently being reduced to rubble in favor of two no doubt anonymous glass towers, meant to announce Japan’s continuing position in the big leagues, as the host of the 2020 Olympics. The once state-of-the-art National Olympic Stadium, designed by Mitsuo Katayama for the 1964 event, is being replaced by tomorrow’s idea of futurism: a new structure that was, until recently, set to be designed by Zaha Hadid. Even Tsukiji, the world’s largest fish market and the mainstay of jet-lagged sightseers for decades, is being mostly moved to a shopping mall, with the assurance that a copy of a place can sometimes look more authentic than the place itself. These erasures — most notably of the Okura, which became the personal cause of Tomas Maier, the creative director of Bottega Veneta — have elicited protests from devoted aesthetes the world over: What could the Japanese be thinking?

The answer is simple: The Japanese are different from you and me. They don’t confuse books with their covers.•


Televox was the 1920s robot that reportedly fetched your car from the garage or a bottle of wine from the cellar. While these feats, along with many others, were said to have been ably performed, the cost of such a machine made it unmarketable.

Televox was also the star attraction of a very early insinuation of robotics into the American military when, in 1928, he barked out orders to the grunts. It was a bit of a publicity stunt but also the beginnings of robotizing war, which some then thought implausible, though nobody does now. An article follows from the June 11, 1928 Brooklyn Daily Eagle.


According to Paul Mason, author of PostCapitalism, technology has rendered the economic system obsolete, or soon will. While I don’t agree that capitalism is going away, I do believe the modern version of it is headed for a serious revision.

The extent to which technology disrupts capitalism–the biggest disruption of them all–depends to some degree on how quickly the new normal arrives. If driverless cars are perfected in the next few years, tens of millions of positions will vanish in America alone. Even if the future makes itself known more slowly, employment will probably grow more scarce as automation and robotics insinuate themselves. 

The very idea of work is currently undergoing a reinvention. In exchange for the utility of communicating with others, Facebook users don’t pay a small monthly fee but instead do “volunteer” labor for the company, producing mountains of content each day. That would make Mark Zuckerberg’s company something like the biggest sweatshop in history, except even those dodgy outfits pay some minimal fee. It’s a quiet transition.

Gillian Tett of the Financial Times reviews Mason’s new book, which argues that work will become largely voluntary in the manner of Wikipedia and Facebook, and that governments will provide basic income and services. That’s his Utopian vision at least. Tett finds it an imperfect but important volume. An excerpt:

His starting point is an assertion that the current technological revolution has at least three big implications for modern economies. First, “information technology has reduced the need for work” — or, more accurately, for all humans to be workers. For automation is now replacing jobs at a startling speed; indeed, a 2013 report by the Oxford Martin School estimated that half the jobs in the US are at high risk of vanishing within a decade or two.

The second key point about the IT revolution, Mason argues, is that “information goods are corroding the market’s ability to form prices correctly.” For the key point about cyber-information is that it can be replicated endlessly, for free; there is no constraint on how many times we can copy and paste a Wikipedia page. “Until we had shareable information goods, the basic law of economics was that everything is scarce. Supply and demand assumes scarcity. Now certain goods are not scarce, they are abundant.”

But third, “goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy.” More specifically, people are collaborating in a manner that does not always make sense to traditional economists, who are used to assuming that humans act in self-interest and price things according to supply and demand.•

