Science/Tech


In an Atlantic piece, Derek Thompson notes that while CDs are under siege, digital music is itself being disrupted, with abundance making profits scarce. Music is desired, but the record store–in any form–is not. The opening:

CDs are dead.

That doesn’t seem like such a controversial statement. Maybe it should be. The music business sold 141 million CDs in the U.S. last year. That’s more than the combined number of tickets sold to the most popular movies in 2014 (Guardians of the Galaxy) and 2013 (Iron Man 3). So “dead,” in this familiar construction, isn’t the same as zero. It’s more like a commonly accepted short-cut for “a formerly popular thing is now withering at a commercially meaningful rate.”

And if CDs are truly dead, then digital music sales are lying in the adjacent grave. Both categories are down double-digits in the last year, with iTunes sales diving at least 13 percent.

The recorded music industry is being eaten, not by one simple digital revolution, but rather by revolutions inside of revolutions, mouths inside of mouths, Alien-style. Digitization and illegal downloads kicked it all off. MP3 players and iTunes liquified the album. That was enough to send recorded music’s profits cascading. But today the disruption is being disrupted: Digital track sales are falling at nearly the same rate as CD sales, as music fans are turning to streaming—on iTunes, SoundCloud, Spotify, Pandora, iHeartRadio, and music blogs. Now that music is superabundant, the business (beyond selling subscriptions to music sites) thrives only where scarcity can be manufactured—in concert halls, where there are only so many seats, or in advertising, where one song or band can anchor a branding campaign.•

_____________________________

“In your home or in your car, protect your valuable tapes.”


The Peer Economy may be a good idea whose time has come, but that doesn’t mean it’s good for workers. In America, it’s a crumb tossed to those squeezed from the middle class by globalization, automation, etc. Keeping employees happy isn’t a goal of Uber and others because it treats labor like a dance marathon, the music never stopping, new “employee-partners” continually being supplied by a whirl of desperation. From Douglas MacMillan at the WSJ:

The sheer numbers of Uber’s labor pool and rate of growth are hard to fathom. The company added 40,000 new drivers in the U.S. in the month of December alone. The authors of the paper say the number of new drivers is doubling every six months. At the same time, Uber says nearly half its drivers become inactive after a year – either because they quit or are terminated.

If those trends continue, Uber could end this year with roughly half-a-million drivers in the U.S. alone.

That growth is being driven mainly by UberX, the company’s service for non-professional drivers that first rolled out in 2012. UberX has created a new part-time job opportunity for people who have never driven professionally, who account for 64% of Uber’s total number of drivers.

Most Uber drivers, 62% of them, have at least one additional source of income, which could mean that, at least for some, Uber is not economically feasible as a full-time job.

Uber claims an average driver makes $19.04 an hour after paying Uber a commission, which is higher than the $12.90 average hourly wage (including tips, Uber says) that the U.S. Bureau of Labor Statistics estimates for taxi drivers and chauffeurs. Uber drivers earn the highest average pay in New York, followed by San Francisco and Boston.

The average pay for former taxi drivers on Uber is $23 per hour; for former black car drivers it’s $27 per hour.

But the paper’s authors admit these figures don’t include expenses that come out of drivers’ own pockets, including gas, maintenance and insurance. And a number of people with experience driving for the company say Uber has made it more difficult to make a good wage because it frequently cuts prices as a way to entice new passengers.

A drop in prices can have a profound effect on driver pay.•


It took decades longer than predicted in the 1954 Popular Mechanics article “Is the Automatic Factory Here?” for “electronic brains” to mature enough to replace human workers en masse in a concentrated time frame, but the day of reckoning is apparently finally here–and we’re still only at the beginning. The automated workplace’s eventual arrival was clear during the peace-dividend period in post-WWII America, with its bowling-ball return machines and device-driven assembly lines. The question is whether, as in the ’50s and ever since, new opportunities will arise to replace those disappeared by Weak AI. An excerpt:

Automatic brains and tools are even marching out of the shops and into the offices. A new electronic system for a Chicago mail-order firm gulps down catalogue orders as fast as 10 operators can press keys. In much less than a second, an operator can find out the total number of orders on file for a particular item. The machine does the work of 39,000 adding machines and much of the brain work of 40-odd girls who formerly classified and recorded the orders.

Dr. Simon Ramo, head of the Ramo-Wooldridge Corporation and one of the nation’s leading authorities on “synthetic intelligence,” declares flatly: “It is possible for engineers today, on the basis of known science, to produce devices which could displace a very large fraction of the white-collar workers who are doing routine paperwork which can be reduced to simple thought processes.”

Today’s automatic machines with their electronic brains are the advance guard of a new army of workers. An appropriate word has popped into the language to describe these “synthetic” workers and thinkers, a word that likely will be as commonplace as the word electronics within five years. That word is automation.

Down-to-earth engineers, supervisors and even businessmen are becoming experts on automation. This is not a Rube Goldberg dream nor the zany Technocracy of the early ’30s, but a new Industrial Revolution, a revolution that is coming slowly but inevitably.

The first Industrial Revolution replaced man’s muscles with machinery. The new revolution, most automation experts firmly believe, will replace man’s routine brainwork with machines. Dr. Ramo says:

“Surely no one can deny that the replacing of man’s brains will effect some sort of revolution. The biggest factor in changing all business and industry must be the coming of the age of synthetic electronic intelligence.”•


The 2015 version of the Gates Annual Letter makes bold and hopeful predictions for the world by 2030 (infant mortality halved, an HIV vaccine, Africa a prosperous continent, etc.). In the spirit of the missive, Politico invited other thinkers to consider life 15 years hence. Below are two examples representing polar opposites, neither of which seems particularly likely.

_____________________________

Technology for the good

By Vivek Wadhwa, fellow at the Arthur & Toni Rembe Rock Center for Corporate Governance at Stanford University

Technology is advancing faster than people think and making amazing things possible. Within two decades, we will have almost unlimited energy, food and clean water; advances in medicine will allow us to live longer and healthier lives; robots will drive our cars, manufacture our goods and do our chores. It will also become possible to solve critical problems that have long plagued humanity such as hunger, disease, poverty and lack of education. Think of systems to clean water; sensors to transform agriculture; digital tutors that run on cheap smartphones to educate children; medical tests on inexpensive sensor-based devices. The challenge is to focus our technology innovators on the needs of the many rather than the elite few so that we can better all of humanity.•

_____________________________

No breakthroughs for the better

By Leslie Gelb, president emeritus and board senior fellow at the Council on Foreign Relations

The world of 2030 will be an ugly place, littered with rebellion and repression. Societies will be deeply fragmented and overwhelmed by irreconcilable religious and political groups, by disparities in wealth, by ignorant citizenry and by states’ impotence to fix problems. This world will resemble today’s, only almost everything will be more difficult to manage and solve.

Advances in technology and science won’t save us. Technology will both decentralize power and increase the power of central authorities. Social media will be able to prompt mass demonstrations in public squares, even occasionally overturning governments as in Hosni Mubarak’s Egypt, but oligarchs and dictators will have the force and power to prevail as they did in Cairo. Almost certainly, science and politics won’t be up to checking global warming, which will soon overwhelm us.

Muslims will be the principal disruptive factor, whether in the Islamic world, where repression, bad governance and economic underperformance have sparked revolt, or abroad, where they are increasingly unhappy and disdained by rulers and peoples. In America, blacks will become less tolerant of their marginalization, as will other persecuted minorities around the world. These groups will challenge authority, and authority will slam back with enough force to deeply wound, but not destroy, these rebellions.

A long period of worldwide economic stagnation and even decline will reinforce these trends. There will be sustained economic gulfs between rich and poor. And the rich will be increasingly willing to use government power to maintain their advantages.

Unfortunately, the next years will see a reversal of the hopes for better government and for effective democracies that loomed so large at the end of the Cold War.•


Lee Billings, author of the wonderful and touching 2013 book, Five Billion Years of Solitude, is interviewed on various aspects of exoplanetary exploration by Steve Silberman of h+ Magazine. An exchange about what contact might be like were it to occur:

Steve Silberman:

If we ever make contact with life on other planets, will they be the type of creatures that we could sit down and have a Mos Eisley IPA or Alderaan ale with — even if, by then, we’ve worked out the massive processing and corpus dataset problems inherent in building a Universal Translator that works much better than Google? And if we ever did make contact, what social problems would that meeting force us to face as a species?

Lee Billings:

Outside of the simple notion that complex intelligent life may be so rare as to never allow us a good chance of finding another example of it beyond our own planet, there are three major pessimistic contact scenarios that come to mind, though there are undoubtedly many more that could be postulated and explored. The first pessimistic take is that the differences between independently emerging and evolving biospheres would be so great as to prevent much meaningful communication occurring between them if any intelligent beings they generated somehow came into contact. Indeed, the differences could be so great that neither side would recognize or distinguish the other as being intelligent at all, or even alive in the first place. An optimist might posit that even in situations of extreme cognitive divergence, communication could take place through the universal language of mathematics.

The second pessimistic take is that intelligent aliens, far from being incomprehensible and ineffable, would be in fact very much like us, due to trends of convergent evolution, the tendency of biology to shape species to fit into established environmental niches. Think of the similar streamlined shapes of tuna, sharks, and dolphins, despite their different evolutionary histories. Now consider that in terms of biology and ecology humans are apex predators, red in tooth and claw. We have become very good at exploiting those parts of Earth’s biosphere that can be bent to serve our needs, and equally adept at utterly annihilating those parts that, for whatever reason, we believe run counter to our interests. It stands to reason that any alien species that managed to embark on interstellar voyages to explore and colonize other planetary systems could, like us, be a product of competitive evolution that had effectively conquered its native biosphere. Their intentions would not necessarily be benevolent if they ever chose to visit our solar system.

The third pessimistic scenario is an extension of the second, and postulates that if we did encounter a vastly superior alien civilization, even if they were benevolent they could still do us harm through the simple stifling of human tendencies toward curiosity, ingenuity, and exploration. If suddenly an Encyclopedia Galactica was beamed down from the heavens, containing the accumulated knowledge and history of one or more billion-year-old cosmic civilizations, would people still strive to make new scientific discoveries and develop new technologies? Imagine if solutions were suddenly presented to us for all the greatest problems of philosophy, mathematics, physics, astronomy, chemistry, and biology. Imagine if ready-made technologies were suddenly made available that could cure most illnesses, provide practically limitless clean energy, manufacture nearly any consumer good at the press of a button, or rapidly, precisely alter the human body and mind in any way the user saw fit. Imagine not only our world or our solar system but our entire galaxy made suddenly devoid of unknown frontiers. Whatever would become of us in that strange new existence is something I cannot fathom.

The late Czech astronomer Zdeněk Kopal summarized the pessimist outlook succinctly decades ago, in conversation with his British colleague David Whitehouse. As they were talking about contact with alien civilizations, Kopal grabbed Whitehouse by the arm and coldly said, “Should we ever hear the space-phone ringing, for God’s sake let us not answer. We must avoid attracting attention to ourselves.”•


In Peter Aspden’s Financial Times profile of clock-watcher and turntablist Christian Marclay, the talk turns to how digital technology has refocused our attention from product to production, the process itself now a large part of the show. An excerpt:

When did the medium become more important than the message? Philosopher Marshall McLuhan theorised about the relationship between the two half a century ago — but it is only today that we seem to be truly fascinated by the processes involved in the creation of contemporary art and music, rather than their end result. Nor is this just some philosophical conceit; it extends to the lowest level of popular culture: what are the TV talent shows The X Factor and The Voice if not obsessed by the starmaker machinery of pop, rather than the music itself?

There are two reasons for this shift in emphasis. The first is technology. When something moves as fast and as all-consumingly as the digital revolution, it leaves us in its thrall. Our mobile devices sparkle more seductively than what they are transmitting. The speed of information has more of a rush than the most breakneck Ramones single.

Digital tools also enable the past to be appropriated in thrilling new forms — “The Clock” would not have been possible in an analogue age. Marclay has said he developed calluses on his fingers from his work in the editing suite, echoing the injuries once suffered by the hoariest blues guitarists. “The Clock” is a reassemblage of found objects: that is not a new phenomenon in artistic practice, but never has it been taken to such popular and imaginative heights.

It is the art of the beginnings of the digital age: not something entirely new, but a reordering of great, integral works of the past. “A record is so tangible,” says Marclay when I ask him about his vinyl fixation. “A sound file is nothing.” Sometimes, I say, it feels as if he has taken all the passions of my youth — records, movies, comic books — and thrown them all in the air, fitting them back together with technical bravura and, in so doing, investing them with new, hidden meanings. I ask if his art is essentially nostalgic. “I don’t think it is,” he replies firmly. “But there is a sense of comfort there. These are things we grew up with. They are familiar. And that is literally the right word: they are family.”

The second, less palatable, reason for the medium to overshadow the message is because of a loss of cultural confidence. We are not sure that the end result of whatever it is we are producing with such spectacular technological support can ever get much better than Pet Sounds, or Casablanca, or early Spider-Man. This does lead to nostalgia; not just for the old messages but for the old media, too. Marclay tells me there is a cultish following for audio cassettes, as if the alchemy of that far-from-perfect technology will help reproduce the magic of its age.

______________________________

Marclay re-making music in 1989. 


Say what you will about Jill Abramson, but she gave the New York Times enduring gifts with the hires of Jake Silverstein and Deborah Needleman, editors respectively of the Magazine and the T Magazine. They’ve both done a lot of excellent work early in their tenures.

Her successor, Dean Baquet, amateur proctologist, is a talented person with a huge job ahead of him at the venerable and wobbly news organization, and he may yet call Mike Bloomberg boss because such a transaction makes a lot of sense financially. In a new Spiegel interview conducted by Isabell Hülsen and Holger Stark, Baquet addresses the technological “Space Race” he’s trying to win–or at least not lose. An excerpt:

Spiegel:

Digital competitors like BuzzFeed and the Huffington Post offer an extremely colorful mix of stories and have outperformed the New York Times website with a lot of buzz.

Dean Baquet:

Because they’re free. You’re always going to have more traffic if you’re a free website. But we’ve always admitted that we were behind other news organizations in making our stories available to people on the web. BuzzFeed and the Huffington Post are much better than we are at that, and I envy them for this. But I think the trick for the New York Times is to stick to what we are. That doesn’t mean: Don’t change. But I don’t want to be BuzzFeed. If we tried to be what they are, we would lose.

Spiegel:

In May, your internal innovation report was leaked along with its harsh conclusion that the New York Times’ “journalistic advantage” is shrinking. Did you underestimate your new digital competitors?

Dean Baquet:

Yes, I think we did. We assumed wrongly that these new competitors, whether it was BuzzFeed or others, were doing so well just because they were doing something journalistically that we chose not to do. We were arrogant, to be honest. We looked down on those new competitors, and I think we’ve come to realize that was wrong. They understood before we did how to make their stories available to people who are interested in them. We were too slow to do it.

Spiegel:

The report was disillusioning for many newspaper executives because the Times is widely seen as a role model when it comes to the question of making money on the web. The report, instead, pointed out that the Times lacks a digital strategy and the newsroom is far away from a “digital first” culture.

Dean Baquet:

First, the Times is and has always been a digital leader. The report only cited some areas where we fell down. Second: Half of the report is critical, and half of it has ideas for things you can do to fix the problem. A lot of things have been done already.

Spiegel:

What has changed?

Dean Baquet:

We have, for example, built a full-bodied audience development team that engages with our readers through social networks. The team has been in operation for three months now and we already have a pretty consistent 20 percent increase in traffic.

Spiegel:

How does this influence the work of your journalists?

Dean Baquet:

It used to be, if you were a reporter, you wrote a story and then you moved on to the next one. We were used to people coming to us. We waited for them to turn on our website or to pick up our print paper and see what we have. We now understand that we have to make our stories available to our readers. A lot of people get their news from Facebook or Twitter and we want to make sure that they see some of our best stories there, too. We do this more aggressively now than we did before.•


The arduousness of parallel parking is one way young egos of the technological world learn humility and patience. Try and fail and try and fail and try. Soon enough, those lessons will be learned by other means, if they are to be learned, as cars will be deposited solely by sensors and such in the near future. From John R. Quain at the New York Times:

TECHNOLOGY may soon render another skill superfluous: parking a car.

Sensors and software promise to free owners from parking angst, turning vehicles into robotic chauffeurs, dropping off drivers and then parking themselves, no human intervention required.

BMW demonstrated such technical prowess this month with a specially equipped BMW i3 at the International CES event. At a multilevel garage of the SLS Las Vegas hotel, a BMW engineer spoke into a Samsung Gear S smartwatch.

“BMW, go park yourself,” and off the electric vehicle scurried to an empty parking spot, turning and backing itself perfectly into the open space. To retrieve the car, a tap on the watch and another command, “BMW, pick me up,” returned the car to the engineer.•

Technology can render things faster and cheaper but also, sometimes, out of control. Embedded in our trajectory of a safer and more bountiful world are dangers enabled by the very mechanisms of progress. In “A Fault in Our Design,” a typically smart and thoughtful Aeon essay, Colin Dickey meditates on nautical advances which allowed the Charles Mallory to deliver devastating disease in 1853 to a formerly far-flung Hawaii and considers it a cautionary tale for how modern wonders may be hazardous to our health. An excerpt:

It’s hard not to feel as though history is progressing forward, along a linear trajectory of increased safety and relative happiness.

Even a quick round-up of the technological advances of the past few decades suggests that we’re steadily moving forward along an axis of progress in which old concerns are eliminated one by one. Even once-feared natural disasters are now gradually being tamed by humanity: promising developments in the field of early warning tsunami detection systems might soon be able to prevent the massive loss of life caused by the 2004 Indian Ocean Tsunami and similar such catastrophes.

Technology has rendered much of the natural world, to borrow a term from Edmund Burke and Immanuel Kant, sublime. For Kant, nature becomes sublime once it becomes ‘a power that has no dominion over us’; a scene of natural terror that, viewed safely, becomes an enjoyable, almost transcendental experience. The sublime arises from our awareness that we ourselves are independent from nature and have ‘a superiority over nature’. The sublime is the dangerous thing made safe, a reaffirmation of the power of humanity and its ability to engineer its own security. And so with each new generation of technological innovation, we edge closer and closer towards an age of sublimity.

What’s less obvious in all this are the hidden, often surprising risks. As the story of the Charles Mallory attests, sometimes hidden in the latest technological achievement are unexpected dangers. Hawaii had been inoculated from smallpox for centuries, simply by virtue of the islands’ distance from any other inhabitable land. Nearly 2,400 miles from San Francisco, Hawaii is far enough away from the rest of civilisation that any ships that headed towards its islands with smallpox on board wouldn’t get there before the disease had burned itself out. But the Charles Mallory was fast enough that it had made the trip before it could rid itself of its deadly cargo, and it delivered unto the remote island chain a killer never before known.

Which is to say, the same technologies that are making our lives easier are also bringing new, often unexpected problems.•


Two videos about Francis Ford Coppola’s 1974 masterwork The Conversation, a movie about the consequences, intended and unintended, of the clever devices we create and how the tools of security can make us insecure.

The first clip is an interview with the director conducted at the time of the film, in which he recognizes his influences. In the second, Coppola wordlessly receives the Palme d’Or at the Cannes Film Festival, to some applause and a few catcalls. Tony Curtis walks him off stage.


No one is more moral for eating pigs and cows rather than dogs and cats, just more acceptable. 

Dining on horses, meanwhile, has traditionally fallen into a gray area in the U.S. Americans have historically had a complicated relationship with equine flesh, often publicly saying nay to munching on the mammal, though the meat has had its moments–lots of them, actually. From a Priceonomics post by Zachary Crockett, a passage about the reasons the animal became a menu staple in the fin de siècle U.S.: 

Suddenly, at the turn of the century, horse meat gained an underground cult following in the United States. Once only eaten in times of economic struggle, its taboo nature now gave it an aura of mystery; wealthy, educated “sirs” indulged in it with reckless abandon.

At the Kansas City Veterinary College’s swanky graduation ceremony in 1898, “not a morsel of meat other than the flesh of horse” was served. “From soup to roast, it was all horse,” reported the Times. “The students and faculty of the college…made merry, and insisted that the repast was appetizing.”

Not to be left out, Chicagoans began to indulge in horse meat to the tune of 200,000 pounds per month — or about 500 horses. “A great many shops in the city are selling large quantities of horse meat every week,” then-Food Commissioner R.W. Patterson noted, “and the people who are buying it keep coming back for more, showing that they like it.”

In 1905, Harvard University’s Faculty Club integrated “horse steaks” into their menu. “Its very oddity — even repulsiveness to the outside world — reinforced their sense of being members of a unique and special tribe,” wrote the Times. (Indeed, the dish was so revered by the staff that it continued to be served well into the 1970s, despite social stigmas.)

The mindset toward horse consumption began to shift — partly thanks to a changing culinary landscape. Between 1900 and 1910, the number of food and dairy cattle in the US decreased by nearly 10%; in the same time period, the US population increased by 27%, creating a shortage of meat. Whereas animal rights groups once opposed horse slaughter, they now began to endorse it as more humane than forcing aging, crippled animals to work.

With the introduction of the 1908 Model T and the widespread use of the automobile, horses also began to lose their luster a bit as man’s faithful companions; this eased apprehension about putting them on the table with a side of potatoes (“It is becoming much too expensive a luxury to feed a horse,” argued one critic).

At the same time, the war in Europe was draining the U.S. of food supplies at an alarming rate. By 1915, New York City’s Board of Health, which had once rejected horse meat as “unsanitary,” now touted it as a sustainable wartime alternative for meatless U.S. citizens. “No longer will the worn out horse find his way to the bone-yard,” proclaimed the board’s Commissioner. “Instead, he will be fattened up in order to give the thrifty another source of food supply.”

Prominent voices began to sprout up championing the merits of the meat.•


I’m not a geneticist, but I doubt successful, educated parents are necessarily more likely to have preternaturally clever children than their poorer counterparts, as is argued in a new Economist article about the role of education in America’s spiraling wealth inequality. Of course, monetary resources can help provide a child every chance to realize his or her abilities, ensuring opportunities often denied to those from families of lesser material means. That, rather than genes, is the main threat to meritocracy. An excerpt:

Intellectual capital drives the knowledge economy, so those who have lots of it get a fat slice of the pie. And it is increasingly heritable. Far more than in previous generations, clever, successful men marry clever, successful women. Such “assortative mating” increases inequality by 25%, by one estimate, since two-degree households typically enjoy two large incomes. Power couples conceive bright children and bring them up in stable homes—only 9% of college-educated mothers who give birth each year are unmarried, compared with 61% of high-school dropouts. They stimulate them relentlessly: children of professionals hear 32m more words by the age of four than those of parents on welfare. They move to pricey neighbourhoods with good schools, spend a packet on flute lessons and pull strings to get junior into a top-notch college.

The universities that mould the American elite seek out talented recruits from all backgrounds, and clever poor children who make it to the Ivy League may have their fees waived entirely. But middle-class students have to rack up huge debts to attend college, especially if they want a post-graduate degree, which many desirable jobs now require. The link between parental income and a child’s academic success has grown stronger, as clever people become richer and splash out on their daughter’s Mandarin tutor, and education matters more than it used to, because the demand for brainpower has soared. A young college graduate earns 63% more than a high-school graduate if both work full-time—and the high-school graduate is much less likely to work at all. For those at the top of the pile, moving straight from the best universities into the best jobs, the potential rewards are greater than they have ever been.

None of this is peculiar to America, but the trend is most visible there. This is partly because the gap between rich and poor is bigger than anywhere else in the rich world—a problem Barack Obama alluded to repeatedly in his State of the Union address on January 20th. It is also because its education system favours the well-off more than anywhere else in the rich world.•

In a Backchannel interview largely about strategies for combating global poverty, Steven Levy asks Bill Gates about the existential threat of superintelligent AI. The Microsoft founder sides more with Musk than Page. The exchange:

Steven Levy:

Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates:

I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.•


In a Potemkin Review interview conducted by Antoine Dolcerocca and Gokhan Terzioglu, Thomas Piketty discusses what he believes is the source of long-term, even permanent, productivity growth, which, of course, can occur without reasonably equitable distribution, the main focus of his book Capital in the 21st Century. An excerpt:

Question:

What do you see as a source of perpetual productivity growth?

Thomas Piketty:

Simply accumulation of new knowledge. People go to school more, knowledge in science increases and that is the primary reason for productivity growth. We know more right now in semiconductors and biology than we did twenty years ago, and that will continue.

Question:

You argue in the book that while Marx made his predictions in the 19th century, we now know that sustained productivity growth is possible with knowledge (Solow residual, etc.). But do you think this can be a sustained process in the long run?

Thomas Piketty:

Yes, I do think that we can make inventions forever. The only thing that can make it non-sustainable is if we destroy the planet in the meantime, but I do not think that is what Marx had in mind. It can be a serious issue because we need to find new ways of producing energy in order to make it sustainable, or else this will come to a halt. However if we are able to use all forms of renewable energy, immaterial growth of knowledge can continue forever, or at least for a couple of centuries. There is no reason why technological progress should stop and population growth could also continue for a little more.•


Along with the progress being made with driverless cars and 3D bio-printers, the thing that has amazed me the most–alarmed me also–since I’ve been doing this blog has been the efforts of Boston Dynamics, the robotics company now owned by Google. The creations are so stunning that I hope the creators will remember that the applications of their machines are at least as important as the accomplishment of realizing the designs. At any rate, the Atlas robot is now untethered, liberated from its safety cord, operating freely via batteries.

In a Business Insider piece, tech entrepreneur Hank Williams neatly dissects the problem of the intervening period between the present and that future moment when material plenty arrives, which is hopefully where technology is taking us. How hungry will people get before the banquet is served? I don’t know that I agree with his prediction that more jobs will move to China; outsourcing will likely come to mean out of species more than out of country. An excerpt:

When you read in the press the oft-quoted concept that “those jobs aren’t coming back” this “reduction of need” is what underlies all of it. Technology has reduced the need for labor. And the labor that *is* needed can’t be done in more developed nations because there are people elsewhere who will happily provide that labor less expensively.

In the long term, technology is almost certainly the solution to the problem. When we create devices that individuals will be able to own that will be able to produce everything that we need, the solution will be at hand. This is *not* science fiction. We are starting to see that happen with energy with things like rooftop solar panels and less expensive wind turbines. We are nowhere near where we need to be, but it is obvious that eventually everyone will be able to produce his or her own energy.

The same will be true for clothing, where personal devices will be able to make our clothing in our homes on demand. Food will be commoditized in a similar way, making it possible to have the basic necessities of life with a few low cost source materials.

The problem is that we are in this awful in-between phase of our planet’s productivity curve. Technology has vastly reduced the number of workers and resources that are required to make what the planet needs. This means that a small number of people, the people in control of the creation of goods, get the benefit of the increased productivity. When we get to the end of this curve and everyone can, in essence, be their own manufacturer, things will be good again. But until we can ride this curve to its natural stopping point, there will be much suffering, as the jobs that technology kills are not replaced.

The political implications of this are staggering.•


I’m sure the advent of commercial aviation was met with prejudices about the new-fangled machines, but it took quite a while to perfect automated co-pilots and the navigation of wind shears, so horrifying death was probably also a deterrent. In the article below from the September 22, 1929 Brooklyn Daily Eagle (which is sadly chopped off a bit in the beginning), the unnamed author looks at a selected history of technophobia. 

 

I’m not worried about conscious, superintelligent machines doing away with humans anytime soon. As far as I can see into the future, I’m more concerned about the economic and ethical ramifications of Weak AI and the proliferation of automation. That will be enough of a challenge. If there is to be a people-killing “plague,” it will likely come from environmental devastation of our own making. That’s the “machine” we’ve unloosed.

On the topic of the Singularity, the excellent Edge.org asked a raft of thinkers in various disciplines to ponder this question: “What do you think about machines that think?” Excerpts follow from responses by philosopher Daniel C. Dennett, journalist William Poundstone and founding Wired editor Kevin Kelly.

___________________________

From Dennett:

The Singularity—an Urban Legend?

The Singularity—the fateful moment when AI surpasses its creators in intelligence and takes over the world—is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility (“Well, in principle I guess it’s possible!”) coupled with a deliciously shudder-inducing punch line (“We’d be ruled by robots!”). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts—Elon Musk, Stephen Hawking, and David Chalmers, among others—and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn’t it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?

I think, on the contrary, that these alarm calls distract us from a more pressing problem, an impending disaster that won’t need any help from Moore’s Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can’t think, prematurely putting civilization on auto-pilot. The process is insidious because each step of it makes good local sense, is an offer you can’t refuse. You’d be a fool today to do large arithmetical calculations with pencil and paper when a hand calculator is much faster and almost perfectly reliable (don’t forget about round-off error), and why memorize train timetables when they are instantly available on your smart phone? Leave the map-reading and navigation to your GPS system; it isn’t conscious; it can’t think in any meaningful sense, but it’s much better than you are at keeping track of where you are and where you want to go.•

___________________________

From Poundstone:

Can Submarines Swim?

My favorite Edsger Dijkstra aphorism is this one: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Yet we keep playing the imitation game: asking how closely machine intelligence can duplicate our own intelligence, as if that is the real point. Of course, once you imagine machines with human-like feelings and free will, it’s possible to conceive of misbehaving machine intelligence—the AI as Frankenstein idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I have concluded it’s not.

Here’s the case for overblown. Machine intelligence can go in so many directions. It is a failure of imagination to focus on human-like directions. Most of the early futurist conceptions of machine intelligence were wildly off base because computers have been most successful at doing what humans can’t do well. Machines are incredibly good at sorting lists. Maybe that sounds boring, but think of how much efficient sorting has changed the world.

In answer to some of the questions brought up here, it is far from clear that there will ever be a practical reason for future machines to have emotions and inner dialog; to pass for human under extended interrogation; to desire, and be able to make use of, legal and civil rights. They’re machines, and they can be anything we design them to be.

But that’s the point. Some people will want anthropomorphic machine intelligence.•

___________________________

From Kelly:

Call Them Artificial Aliens

The most important thing about making machines that can think is that they will think different.

Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose” because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.

The kind of thinking done by the emerging AIs in 2014 is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don’t do it in a human-like fashion. Facebook has the ability to ramp up an AI that can start with a photo of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this ability very un-human. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.

In a pervasively connected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial-strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences, and entirely new ways of thinking. We don’t know what the full taxonomy of intelligence is right now.•


First the bad news: We’re dying people on a dying planet in a dying universe. The good news: We’re hastening the destruction of the delicate balance of factors which enable our transient-but-amazing existence. Oh wait, that’s also bad.

In a New York Times piece, astrophysicist Adam Frank looks out at all the dead space in our solar system to analyze our own precariousness. An excerpt:

The defining feature of a technological civilization is the capacity to intensively “harvest” energy. But the basic physics of energy, heat and work known as thermodynamics tell us that waste, or what we physicists call entropy, must be generated and dumped back into the environment in the process. Human civilization currently harvests around 100 billion megawatt hours of energy each year and dumps 36 billion tons of carbon dioxide into the planetary system, which is why the atmosphere is holding more heat and the oceans are acidifying. As hard as it is for some to believe, we humans are now steering the planet, however poorly.

Can we generalize this kind of planetary hijacking to other worlds? The long history of Earth provides a clue. The oxygen you are breathing right now was not part of our original atmosphere. It was the so-called Great Oxidation Event, two billion years after the formation of the planet, that drove Earth’s atmospheric content of oxygen up by a factor of 10,000. What cosmic force could so drastically change an entire planet’s atmosphere? Nothing more than the respiratory excretions of anaerobic bacteria then dominating our world. The one gas we most need to survive originated as deadly pollution to our planet’s then-leading species: a simple bacterium.

The Great Oxidation Event alone shows that when life (intelligent or otherwise) becomes highly successful, it can dramatically change its host planet. And what is true here is likely to be true on other planets as well.

But can we predict how an alien industrial civilization might alter its world? From a half-century of exploring our own solar system we’ve learned a lot about planets and how they work. We know that Mars was once a habitable world with water rushing across its surface. And Venus, a planet that might have been much like Earth, was instead transformed by a runaway greenhouse effect into a hellish world of 800-degree days.

By studying these nearby planets, we’ve discovered general rules for both climate and climate change.•
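
As a rough sanity check on the two figures quoted above (a back-of-the-envelope sketch using only the numbers in the excerpt, nothing beyond them), dividing the carbon dumped by the energy harvested gives the implied average carbon intensity of civilization’s energy use:

```python
# Back-of-the-envelope check on the excerpt's figures:
# ~100 billion megawatt-hours harvested and ~36 billion tons of CO2
# dumped per year imply an average carbon intensity for all energy use.
energy_kwh = 100e9 * 1_000   # 100 billion MWh, converted to kWh
co2_kg = 36e9 * 1_000        # 36 billion metric tons, converted to kg

print(f"{co2_kg / energy_kwh:.2f} kg of CO2 per kWh")  # ~0.36 kg CO2/kWh
```

Roughly 0.36 kilograms of carbon dioxide per kilowatt-hour, which is about what you would expect from an energy supply still dominated by fossil fuels.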


There’s a line near the end of 1973’s Westworld, after things have gone haywire, that speaks to concerns about Deep Learning. A technician, who’s asked why the AI has run amok and how order can be restored, answers: “They’ve been designed by other computers…we don’t know exactly how they work.”

At Google, search has never been the point. It’s been an AI company from the start, Roomba-ing information to implement in a myriad of automated ways. Deep Learning is clearly a large part of that ultimate search. On that topic, Steven Levy conducted a Backchannel interview with Demis Hassabis, the company’s Vice President of Engineering for AI projects, who is a brilliant computer-game designer. For now, it’s all just games. An excerpt:

Steven Levy:

I imagine that the more we learn about the brain, the better we can create a machine approach to intelligence.

Demis Hassabis:

Yes. The exciting thing about these learning algorithms is they are kind of meta level. We’re imbuing it with the ability to learn for itself from experience, just like a human would do, and therefore it can do other stuff that maybe we don’t know how to program. It’s exciting to see it come up with a new strategy in an Atari game that the programmers didn’t know about. Of course you need amazing programmers and researchers, like the ones we have here, to actually build the brain-like architecture that can do the learning.

Steven Levy:

In other words, we need massive human intelligence to build these systems but then we’ll —

Demis Hassabis:

… build the systems to master the more pedestrian or narrow tasks like playing chess. We won’t program a Go program. We’ll have a program that can play chess and Go and noughts and crosses and draughts and any of these board games, rather than reprogramming every time. That’s going to save an incredible amount of time. Also, we’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain. As humans, if I show you some new board game or some new task or new card game, you don’t start from zero. If you know how to play bridge and whist and whatever, I could invent a new card game for you, and you wouldn’t be starting from scratch—you would be bringing to bear this idea of suits and the knowledge that a higher card beats a lower card. This is all transferable information no matter what the card game is.•
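
What Hassabis is describing is reinforcement learning: an agent improving its behavior from reward signals rather than from hand-written rules. For a feel of the mechanics, here is a minimal tabular Q-learning sketch, a toy stand-in rather than DeepMind’s system (their agents couple this kind of update to deep neural networks); the grid world, rewards and hyperparameters below are invented for illustration:

```python
# Minimal tabular Q-learning sketch: an agent learns, purely from
# trial-and-error reward, to reach a goal cell on a small grid.
import random
from collections import defaultdict

SIZE = 4                                       # 4x4 grid
START, GOAL = (0, 0), (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

def step(state, action):
    """Apply an action; walking into a wall leaves the state unchanged."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    reward = 1.0 if nxt == GOAL else -0.01     # small penalty per step
    return nxt, reward, nxt == GOAL

Q = defaultdict(float)                         # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(500):
    state, done = START, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Core update: nudge Q toward observed reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy typically walks straight to the goal.
state, path = START, [START]
for _ in range(2 * SIZE * SIZE):               # safety bound on path length
    if state == GOAL:
        break
    state, _, _ = step(state, max(ACTIONS, key=lambda a: Q[(state, a)]))
    path.append(state)
print(path)
```

Nothing in the update rule mentions the grid itself, which is Hassabis’s point about generality: swap in another game’s states, actions and rewards, and the same loop learns that game instead.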


Companies really want robots to take your job, and pretty much any task that can be performed by either humans or machines will be ceded to our silicon sisters. Your career security may depend on how far engineers can develop these simulacra. Case in point: Toshiba’s Chihira Aico, a female android who can already read the news in only slightly more wooden fashion than your local anchor. What more will she learn? From Susan Kalla at Forbes:

At the CES, the crowd was mesmerized by Toshiba’s talking android in a pink party dress. She stood quietly, looking like a mannequin until she sprang to life exclaiming, “I can be a news reader, consultant or cheerleader!” Throwing her arms up in the air, she squealed, “I can get excited!”

Chihira is a concept model and her creators are exploring applications while working on ways to make her seem more human. They are refining her movements and language skills. She has a limited range of motion, and the abrupt thrusts of her arms can remind you of Frankenstein. She can do a great presentation, but developers are not satisfied; they want her to interact with people.

She’s a complicated machine. Over 40 motors in her joints coordinate her moves, driven by software developed by Toshiba under the direction of Hitoshi Tokuda. The 15 tiny air pumps in her face control the blinking of her eyes and move her jaws and mouth as she speaks. Osaka University managed the muscle research for Chihira, building on previous work on prosthetic limbs.

Chihira may seem creepy, but businesses are serious about developing androids to cut costs. Hospitals are running trials with the robot, and she’s being retrofitted for assisted living. Of course, life-like robots may eventually take your job. The field of robotics is advancing quickly and many universities are racing to stake a claim.•


If global wealth inequality were merely about envy and not concern over an astounding disproportion unrelated to meritocracy, the issue would have gone away after an election cycle. Even Mitt Romney is now pretending to worry about this systemic failure. From Mona Chalabi at FiveThirtyEight:

Eighty people hold the same amount of wealth as the world’s 3.6 billion poorest people, according to an analysis just released from Oxfam. The report from the global anti-poverty organization finds that since 2009, the wealth of those 80 richest has doubled in nominal terms — while the wealth of the poorest 50 percent of the world’s population has fallen. …

Thirty-five of the 80 richest people in the world are U.S. citizens, with combined wealth of $941 billion in 2014. Together in second place are Germany and Russia, with seven mega-rich individuals apiece. The entire list is dominated by one gender, though — 70 of the 80 richest people are men. And 68 of the people on the list are 50 or older.

If those 80 individuals were to bump into each other on Svenborgia, what might they talk about? Retail could be a good conversation starter — 14 of the 80 got their wealth that way. Or they could discuss “extractives” (industries like oil, gas and mining, to which 11 of them owe their fortunes), finance (also 11 of them) or tech (10 of them).•


At six, Prince Charles had yet to fly on an airplane but had a slew of technological devices at his disposal. From “The Boy Who Lives in a Palace,” a 1955 Collier’s Weekly portrait by Helen Worden Erskine of the lad raised for a throne which never materialized:

In his five-room nursery suite at Buckingham Palace is a TV set; over his bed is a microphone so sensitive that it instantly registers, in the quarters of both the chief nurse and palace detectives, his slightest cough, as well as his breathing. He has a private telephone connected with the main palace switchboard. It holds no mystery for him; he began talking into it when he was so small he had to stand on a chair to reach it. At that age the calls were usually for Mummy or Papa in some far-off place. Now he has grown enough to use it normally, although he’s still impulsively apt to ring the chef at odd hours and say: “Send Charles ice cream, quickly!”

Mechanical devices intrigue the little prince. He clambers over the fire engines which are part of the equipment on all the crown estates. He is keen about his mother’s private plane and his father’s helicopter, and begs to go out, rain or shine, to see Papa take off from the palace lawn. Once, when court photographer Marcus Adams was taking his picture, Charles climbed a chair and peered into the lens. “Everything’s upside down,” he reported, surprised. Then he ran back to the sofa where he’d been posing and stood on his head—“to make it right for Mr. Adams,” he explained.

Despite his parents’ air-mindedness, Charles has not yet flown. In his journeys among the half-dozen fabulous palaces and castles he can call home, his choice of a coach-and-four may be either a Rolls-Royce or a special train bearing the royal coat of arms. On his recent trip to Malta, his first outside Great Britain, he traveled on the queen’s new 412-foot, $2,000,000 royal yacht Britannia.

On his travels, the young prince is always accompanied by his own retinue: Superintendent T. J. Clark of Scotland Yard, chauffeur Jim Cheevers, chief nurse Helen Lightbody, personal nursemaid Mabel Anderson, governess Katherine Peebles, a palace policeman and, as a rule, General Sir Frederick Browning, Comptroller of the Queen’s Household.

As Duke of Cornwall (his most important title), Charles is entitled to an annual income from farm products and rents equivalent to around $300,000 a year. He will draw only $36,000 of the total annually until he is fifteen; from then until his eighteenth birthday he will get another $90,000 a year, after which the entire income will be his.•


In Jenna Garrett’s “What It’s Like Living in the Coldest Town on Earth,” the author takes a look at the survival strategies locals employ in the iceberg-ish burg of Oymyakon in the Sakha Republic of Russia, which suffers an extreme subarctic climate. No crops can grow so everyone is a carnivore. It once got down to −90 °F. The summers are quite lovely, but, you know, small consolation. Why would anyone live there–or on a fault line or near a volcano? Because humans. An excerpt:

It got down to -24 degrees Fahrenheit in Oymyakon, Russia, over the weekend. As frigid as that seems, it’s typical for this town, long known as the coldest inhabited place on Earth. If that kind of number is hard to wrap your brain around, such a temperature is so cold that people here regularly consume frozen meat, keep their cars running 24/7 and must warm the ground with a bonfire for several days before burying their dead. …

Here arctic chill is simply a fact of life, something to be endured. People develop a variety of tricks to survive. Most people use outhouses, because indoor plumbing tends to freeze. Cars are kept in heated garages or, if left outside, left running all the time. Crops don’t grow in the frozen ground, so people have a largely carnivorous diet—reindeer meat, raw flesh shaved from frozen fish, and ice cubes of horse blood with macaroni are a few local delicacies.•


Apart from Las Vegas, few places in America have been enriched by casinos, since almost none become tourist destinations and they’re attended by a raft of costly social problems. Even the casinos on Native-American reservations, which enjoy special tax status, have shown mixed results at best and in many cases they may be increasing and further entrenching poverty. One issue might be per-capita payments, according to an Economist report. Perhaps. But the direct payments are often minuscule, so I’m not completely sold that it’s not more a toxic cocktail of complicated issues. An excerpt:

ON A rainy weekday afternoon, Mike Justice pushes his two-year-old son in a pram up a hill on the Siletz Reservation, a desolate, wooded area along the coast of Oregon. Although there are jobs at the nearby casino, Mr Justice, a member of the nearly 5,000-strong Siletz tribe, is unemployed. He and his girlfriend Jamie, a recovering drug addict, live off her welfare payments of a few hundred dollars a month, plus the roughly $1,200 he receives annually in “per capita payments”, cash the tribe distributes each year from its casino profits. That puts the family of three below the poverty line.

It is not ideal, Mr Justice admits, but he says it is better than pouring hours into a casino job that pays minimum wage and barely covers the cost of commuting. Some 13% of Mr Justice’s tribe work at the Chinook Winds Casino, including his mother, but it does not appeal to him. The casino lies an hour away down a long, windy road. He has no car, and the shuttle bus runs only a few times a day. “Once you get off your shift, you may have to wait three hours for the shuttle, and then spend another hour on the road,” he says. “For me, it’s just not worth it.”

Mr Justice’s situation is not unusual. After the Supreme Court ruled in 1987 that Native American tribes, being sovereign, could not be barred from allowing gambling, casinos began popping up on reservations everywhere. Today, almost half of America’s 566 Native American tribes and villages operate casinos, which in 2013 took in $28 billion, according to the National Indian Gaming Commission.

Small tribes with land close to big cities have done well. Yet a new study in the American Indian Law Journal suggests that growing tribal gaming revenues can make poverty worse.•
