Science/Tech

You are currently browsing the archive for the Science/Tech category.

The future usually arrives gradually, even frustratingly slowly, often wearing the clothes of the past, but what if it got here today or soon thereafter?

The benefits of profound technologies rushing headlong at us would be amazing and amazingly challenging. Gill Pratt, who oversaw the DARPA Robotics Challenge, wonders in a new Journal of Economic Perspectives essay whether the field is poised for a wild growth spurt, a synthetic analog to the biological eruption of the Cambrian Period. He thinks that once the “generalizable knowledge representation problem” is addressed, no easy feat, the field will speed forward. The opening:

About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, with one of the most provocative being the evolution of vision, which allowed animals to dramatically increase their ability to hunt and find mates (for discussion, see Parker 2003). Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.

How soon might a Cambrian Explosion of robotics occur? It is hard to tell. Some say we should consider the history of computer chess, where brute force search and heuristic algorithms can now beat the best human player yet no chess-playing program inherently knows how to handle even a simple adjacent problem, like how to win at a straightforward game like tic-tac-toe (Brooks 2015). In this view, specialized robots will improve at performing well-defined tasks, but in the real world, there are far more problems yet to be solved than ways presently known to solve them.

But unlike computer chess programs, where the rules of chess are built in, today’s Deep Learning algorithms use general learning techniques with little domain-specific structure. They have been applied to a range of perception problems, like speech recognition and now vision. It is reasonable to assume that robots will in the not-too-distant future be able to perform any associative memory problem at human levels, even those with high-dimensional inputs, with the use of Deep Learning algorithms. Furthermore, unlike computer chess, where improvements have occurred at a gradual and expected rate, the very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.•
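
For readers who want a concrete sense of what “general learning techniques with little domain-specific structure” means, here’s a toy sketch of my own (nothing from Pratt’s essay): a tiny two-layer network trained by gradient descent on a made-up association. Nothing in the code knows what the inputs mean — swap the toy points for pixels, audio features or robot sensor readings and the same procedure applies.

```python
# Minimal, illustrative sketch: a tiny two-layer neural network trained by
# gradient descent. Nothing here is specific to any one domain -- the same code
# learns whatever input-to-label association the training set contains.
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 2-D points labeled by the XOR of their coordinates' signs,
# a stand-in for the perceptual associations (pixels -> object, sound -> word)
# discussed above.
X = rng.uniform(-1, 1, size=(200, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float).reshape(-1, 1)

# Random initial weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (gradients of the cross-entropy loss).
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2%}")
```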


It amazes me that California’s water shortage seems to be viewed in this country as a regional problem for them, when it’s clearly a grave concern for us. As farmers in that state search deeper and deeper for the scarce liquid, hoping to stave off personal disaster, we all near a collective one. If California dying of thirst isn’t a national emergency, I don’t know what is. Globally, water crises may be the most serious threat to world peace. From the Spiegel report “World Without Water”:

“Water is the primary principle of all things,” the philosopher Thales of Miletus wrote in the 6th century BC. More than two-and-a-half thousand years later, on July 28, 2010, the United Nations felt it was necessary to define access to water as a human right. It was an act of desperation. The UN has not fallen so clearly short of any of its other millennium goals as it has of the goal of cutting the number of people without this access in half by 2015.

The question is whether water is public property and a human right. Or is it ultimately a commodity, a consumer good and a financial investment?

The world’s business leaders and decision makers gathered at the annual meeting in snow-covered Davos, Switzerland in January to discuss the most pressing issues of the day. One of the questions was: What is the greatest social and economic risk of the coming decade? The selection of answers consisted of 28 risks, including wars, weapons of mass destruction and epidemics. The answer chosen by the world’s economic elite was: water crises.

Consumers have recognized for years that we need to reduce our consumption of petroleum. But very few people think about water as being scarce, even though it’s the resource of the future, more valuable than oil because it is irreplaceable. It also happens to be the source of all life.•

 

A few days ago, I posted an excerpt from a New York Times op-ed written by Peter Georgescu, the Young & Rubicam chairman emeritus, who believes wealth inequality must be remedied by corporations (not particularly likely) or we’ll have social uprisings and ginormous tax increases. Well, something’s got to give.

The essay touched a nerve, leading to a raft of Facebook questions directed at the writer. He answered some of them for the Times. Unfortunately, none address the possibility that automation will add technological unemployment to the short- and medium-term woes.

One exchange about what the questioner and Georgescu see as the precarious position of contemporary capitalism:

Question:

A quick prelude is that I fear that our capitalist model is in danger. In the early days of capitalism (here in the US and elsewhere) companies were mostly family owned and run even for generations. Now we have the board, stockholders and CEO model, which appears very flawed. The stockholders often are just looking for short term gain, the board has no real ties to or ‘skin’ in the company, and the CEO is often colluding with the stockholders for short term gain.

After that long-winded lead in, do you share those fears? Any thoughts on improving the current public corporate model? How about the German system of requiring public corporations to have a union representative on the board?

Peter Georgescu:

I fear for the future of capitalism in our country and around the world. Capitalism really means free enterprise. The name came from the resource that once drove the free-market engine. Capital no longer plays that prominent role. Creativity and innovation drive global business today. Capital is just one resource, important, but no longer the major differentiator. Historically, this so-called capitalist free-enterprise engine achieved extraordinary results. It propelled America into the superpower that it is today. It lifted hundreds of millions of people from deep poverty to a more humane standard of living. (Think China, India, Brazil, countries in Africa and more.)

But that extraordinary engine has been hijacked by a rogue philosophy that says that shareholders’ interests come first and which threatens to destroy both this magnificent engine and our very way of life. The misguided philosophy says that one of a corporation’s stakeholders, the shareholders, deserves to have their value maximized in the short term. The three other vital stakeholders are not adequately represented at the decision-making table and inadequately compensated. First, the employees — who are the real value creators. They have been turned into a cost to be squeezed. Then, the corporation itself, where investment in R&D and innovation is grossly inadequate. Finally, a business’s customers, who should be a corporation’s prime stakeholder — not the shareholders.

Even the moral justification that the shareholder is the owner and an owner gets what they want when they want it is a myth. In fact the shareholder is a renter at best. They come into stock when they want and leave at their will. And they are of course immune from any corporate liabilities. That’s not ownership. The preponderance of legal opinion is clear. The corporation owns its own assets, not the shareholder.

So yes, we must rebalance a business’s incremental value returns among the key stakeholders — the employees, the shareholders and the corporation itself. And we must always put the customer’s interests first.

If we do that, we can liberate free enterprise from its present-day shackles.•



To generate hoopla for the 1950 sci-fi film Destination Moon, the principals of the film, including writer Robert Heinlein, did on-set interviews with KTLA the year before. The author, who makes his entrance near the 12-minute mark, explains that a real space mission only needed money and will, not any new science, to be completed. About 20 years later, he was interviewed as part of Walter Cronkite’s CBS coverage of the actual moon landing.


This is very cool: A 1971 Life magazine report about a Manhattan computer expo in which IBMs wowed visitors by merely playing games of 20 Questions, no chess expertise even necessary. Better yet, the exhibition was curated by Charles Eames, who, along with his wife and business partner, Ray, was as comfortable with computers as he was with furniture. From “A Lively Show with a Robot as the Star,” written by Fortune editor Walter McQuade:

The stroller steps off the sidewalk and into the IBM display room on 57th Street in Manhattan and approaches one of the four shiny input typewriters of an IBM System 360 computer. The game is ’20 Questions.’ The computer ‘thinks up’ one of the 12 stock mystery words, like “duck,” “orange,” “cloud,” “helium,” “knowledge.” The stroller has 20 chances to guess and if, perhaps, the mystery word is “knowledge,” the typical conversation could start like this:

Stroller: “Does it grow?”
Computer: “To answer that question might be misleading.”
Stroller: “Can I eat it? Is it edible?”
Computer: “Only as food for thought.”
Stroller: “Do computers have it?”
Computer: “Strictly speaking, no.”

Twenty Questions is only the pièce de résistance in what is probably the canniest and most successful exhibition on computers ever devised. It should be: its deviser, the protean Charles Eames–poet, architect, painter, mathematician, toymaker, furniture designer and film maker–has had ample exposure at expos. Here, he and his collaborators reach back into the history and prehistory of computers to show how and why calculating machines came about.

Most of the story evolves on a gigantic, 48-foot, three-dimensional wall tapestry. Woven into it are hundreds of souvenirs from 1890 to 1950, the computer’s gestation period. Here are artifacts, documents and photographs, dramatizing six decades of striving, when information began to explode on the world and nobody knew quite what to do with the fallout.

The devices range from “The Millionaire,” one of the first calculators, made of brass, to Elmer Sperry’s gyroscope, to Vannevar Bush’s differential analyzer. Included are the work of such elegant minds as Alan Turing, Wallace Eckert, Norbert Wiener, John von Neumann. Even L. Frank Baum and his “clockwork copper man,” Tik-Tok of Oz, is represented.

The military imperative to handle information quickly is underlined with a Norden bombsight and with ENIAC, an Army ballistics calculator and predecessor of UNIVAC. There are beautifully selected pieces of cultured debris to date it all; election literature in the years each of the Roosevelts ran for President, and one of the big old dollar bills, when they were worth 100 cents. Best of all are the evocations of mental battles fought and sometimes lost. Early in the century an English scientist, Lewis Fry Richardson, devoted many years to developing numerical models in which equations simulated physical systems to predict the weather. He was a dedicated visionary, but his widow wrote, “There came a time of heartbreak when those most interested in his ‘upper air’ research proved to be ‘poison gas’ experts. Lewis stopped his meteorological researches, destroying such as had not been published.”

The wall closes with the birth of the UNIVAC in 1950. Since then the computer has progressed so fast, with computers working their own evolution, that the souvenirs would be just print-out sheets. But Eames demonstrates with models and film displays that if this be witchcraft, there are no witches involved–just the 350,000 full-time programmers (in the U.S. alone) and about two million other nonwitches who operate the machines. In a multiple, rapid-fire slidefilm, they chew gum, scratch themselves, dye their hair and do their work.

And when the stroller, no warlock himself, wanders in off the street with his family (it’s a great show for kids) and confronts the System 360, he is well advised to watch his language and frame his questions well. Eames’ finale to the exhibition can be fairly cheeky. System 360, Model 40, is not above printing out, in response to a muddled thought: “Your grammar has me stumped.”•

The only thing trickier than predicting future population is interpreting what those people will mean for the world and its resources. From Malthus to Ehrlich, population bombs have defused themselves, even proved beneficial. Down deep, most of us likely think there’s a tipping point, a tragic number, but, of course, the development of technologies can rework that math, stretch resources to new lengths. And a larger pool of talent makes it easier to create those new tools.

It would seem to make sense that immigrant nations can ride the wave of fluctuations best, not being dependent on internal fertility numbers. Robotics may reduce that advantage, however. Japan is certainly banking on that transformation.

In a Financial Times piece, Robin Harding writes that fertility is declining steeply around the globe, which suggests the world’s population may level off. If so, the ramifications will be many, including for labor. The opening:

The extent of the plunge in childbearing is startling. Eighty-three countries containing 46 per cent of the world’s population — including every single country in Europe — now have fertility below replacement rate of about 2.1 births per woman. Another 46 per cent live in countries where the birth rate has fallen sharply. In 48 countries the population will decline between now and 2050.

That leaves just 9 per cent of the world’s population, almost all in Africa, living in nations with pre-industrial fertility rates of five or six children per woman. But even in Africa fertility is starting to dip. In a decade, the UN reckons, there will be just three countries with a fertility rate higher than five: Mali, Niger and Somalia. In three decades, it projects only Niger will be higher than four.

These projections include a fertility bounce in countries such as Germany and Japan. If more fecund nations follow this path of declining birth rates, therefore, a stable future population could quickly be locked in.

That would have enormous consequences for the world economy, geopolitics and the sum of human happiness, illustrated by some of the middle-income countries that have gone through a dramatic, and often ignored, fall in fertility.•

 


The contemporary Western attitude toward architecture is to protect the jewels, preserve them. Not so in Japan, a nation of people Pico Iyer refers to in a striking T Magazine essay as “pragmatic romantics.” Iyer writes of ancient buildings being regularly replaced by replicas in the same manner that some citizens hire elderly actors to portray deceased grandparents at family functions. It’s just a different mindset. The opening:

EVERY 20 YEARS, the most sacred Shinto site in Japan — the Grand Shrine at Ise — is completely torn down and replaced with a replica, constructed to look as weathered and authentic as the original structure built by an emperor in the seventh century. To many of us in the West, this sounds as sacrilegious as rebuilding the Western Wall tomorrow or hiring a Roman laborer to repaint the Sistine Chapel once a generation. But Japan has a different sense of what’s genuine and what’s not — of the relation of old to new — than we do; if the historic could benefit from a little help from art, or humanity, the reasoning goes, then wouldn’t it be unnatural not to provide it?

The motto guiding Japan’s way of being might be: New is the new old. For proof, you need only look at three recent high-profile and much-debated demolition jobs in Tokyo. The Hotel Okura, an icon of Japanese Modernism built in 1962 to commemorate the country’s arrival in the major leagues of nations as the host of the 1964 Olympics and cherished for its unique and atmospheric lobby, is currently being reduced to rubble in favor of two no doubt anonymous glass towers, meant to announce Japan’s continuing position in the big leagues, as the host of the 2020 Olympics. The once state-of-the-art National Olympic Stadium, designed by Mitsuo Katayama for the 1964 event, is being replaced by tomorrow’s idea of futurism: a new structure that was, until recently, set to be designed by Zaha Hadid. Even Tsukiji — the world’s largest fish market and the mainstay of jet-lagged sightseers for decades — is being mostly moved to a shopping mall, with the assurance that a copy of a place can sometimes look more authentic than the place itself. These erasures — most notably of the Okura, which became the personal cause of Tomas Maier, the creative director of Bottega Veneta — have elicited protests from devoted aesthetes the world over: What could the Japanese be thinking?

The answer is simple: The Japanese are different from you and me. They don’t confuse books with their covers.•


Televox was the 1920s robot that reportedly fetched your car from the garage or a bottle of wine from the cellar. While these feats, along with many others, were said to have been ably performed, the cost of such a machine made it unmarketable.

Televox was also the star attraction of a very early insinuation of robotics into the American military when, in 1928, he barked out orders to the grunts. It was a bit of a publicity stunt but also the beginnings of robotizing war, which some then thought implausible, though nobody does now. An article follows from the June 11, 1928 Brooklyn Daily Eagle.


According to Paul Mason, author of PostCapitalism, technology has rendered the economic system obsolete, or soon will. While I don’t agree that capitalism is going away, I do believe the modern version of it is headed for a serious revision.

The extent to which technology disrupts capitalism–the biggest disruption of them all–depends to some degree on how quickly the new normal arrives. If driverless cars are perfected in the next few years, tens of millions of positions will vanish in America alone. Even if the future makes itself known more slowly, employment will probably grow more scarce as automation and robotics insinuate themselves. 

The very idea of work is currently undergoing a reinvention. In exchange for the utility of communicating with others, Facebook users don’t pay a small monthly fee but instead do “volunteer” labor for the company, producing mountains of content each day. That would make Mark Zuckerberg’s company something like the biggest sweatshop in history, except even those dodgy outfits pay some minimal fee. It’s a quiet transition.

Gillian Tett of the Financial Times reviews Mason’s new book, which argues that work will become largely voluntary in the manner of Wikipedia and Facebook, and that governments will provide basic income and services. That’s his Utopian vision at least. Tett finds it an imperfect but important volume. An excerpt:

His starting point is an assertion that the current technological revolution has at least three big implications for modern economies. First, “information technology has reduced the need for work” — or, more accurately, for all humans to be workers. For automation is now replacing jobs at a startling speed; indeed, a 2013 report by the Oxford Martin school estimated that half the jobs in the US are at high risk of vanishing within a decade or two.

The second key point about the IT revolution, Mason argues, is that “information goods are corroding the market’s ability to form prices correctly.” For the key point about cyber-information is that it can be replicated endlessly, for free; there is no constraint on how many times we can copy and paste a Wikipedia page. “Until we had shareable information goods, the basic law of economics was that everything is scarce. Supply and demand assumes scarcity. Now certain goods are not scarce, they are abundant.”

But third, “goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy.” More specifically, people are collaborating in a manner that does not always make sense to traditional economists, who are used to assuming that humans act in self-interest and price things according to supply and demand.•


The perfecting of autonomous cars would do many good things (fight pollution, reduce highway deaths) and some bad (threaten job security for millions, be a scary target for hackers). Like most technologies, the size of the victories will be determined by how we manage the losses.

One thing that will almost assuredly happen during a robocar age is a decrease in traffic, due in large part to the end of the maddening search for parking spots.

From Peter Wayner at the Atlantic:

There’s plenty of research showing that a surprisingly large number of people are driving, trying to find a place to leave their car. A group called Transportation Alternatives studied the flow of cars around one Brooklyn neighborhood, Park Slope, and found that 64 percent of the local cars were searching for a place to park. It’s not just the inner core of cities either. Many cars in suburban downtowns and shopping-mall parking lots do the same thing.

Robot cars could change all that. The unsticking of the urban roads is one of the side effects of autonomous cars that will, in turn, change the landscape of cities—essentially eliminating one of the enduring symbols of urban life, the traffic jam full of honking cars and fuming passengers. It will also redefine how we use land in the city, unleashing trillions of dollars of real estate to be used for more than storing cars. Autonomous cars are poised to save us uncountable hours of time, not just by letting us sleep as the car drives, but by unblocking the roads so they flow faster.•


In Andrew Schrank’s Pacific Standard essay about Labor in the Digital Age, which imagines possible enlightened and benighted outcomes, he says the truest thing anyone can say on the topic: “The future of work and workers will not be dictated by technology alone.” No, it won’t.

An excerpt in which he looks at the Google Glass as half-full:

Is a jobless future inevitable? Do automation, computerization, and globalization necessarily conspire to undercut employment and living standards? Or might they be harnessed to benign ends by farsighted leaders? The answer is anything but obvious, for the relationship between automation and job loss is at best indeterminate, both within and across countries, and the relationship between automation and compensation is similarly opaque. For instance, Germany and Japan boast more robots per capita and less unemployment than the United States, and the stock of industrial robots and the average manufacturing wage have been growing in tandem—at double digit rates, no less—in China.

What excites me about the future of work and workers, therefore, is the possibility that the technological determinists are wrong, and that we will subordinate machinery to our needs and desires rather than vice versa. In this rosy scenario, machines take over the monotonous jobs and allow humans to pursue more leisurely or creative pursuits. Working hours fall and wages rise across the board. And productivity gains are distributed (and re-distributed) in accord with the principles of distributive justice and fairness.

While such a scenario may seem not just rosy but unrealistic, it is not entirely implausible.•


Gerard O’Neill’s space dreams were bold–and very unrealistic. The astrophysicist believed 40 years ago, right around the time of his popular paper, “The Colonization of Space,” that Earthlings would be able to make round-trip voyages to other planets for about $3000 before the end of the century. Not quite.

O’Neill did, however, inspire the famous 1970s space-colony designs, which I’ve used on this site many times. From Brian Merchant at Vice:

The first serious blueprint for building cities in space was drawn almost on a whim. Forty years ago this summer, dozens of scientists gathered in the heart of Silicon Valley for one of NASA’s design studies, which were typically polite, educational affairs. But in 1975, the topic of inquiry was “The Colonization of Space,” a recent paper by the astrophysicist Gerard O’Neill.

“The idea was to review his ideas and to see if they were technically feasible,” said Mark Hopkins, an economist who was there. “Well, they were.” So the scientists had a choice—set about laying the groundwork for real, no-bullshit space colonization, or hold the regularly scheduled series of seminars. “We said, ‘To hell with that,'” Hopkins recalled. The ten-week program became a quest to outline a scientifically possible and economically viable way to build a human habitat in space.

What they came up with—designs for huge, orbital settlements—is still pretty much the basis for all our space digs today, science-fictional or otherwise.•


You certainly don’t want to be a nation left behind by robotics any more than you’d want to miss out on the Industrial Revolution, but at the same time you need jobs for citizens of all skill levels. What to do?

Indian Prime Minister Narendra Modi’s goal of reducing unemployment among the nation’s many unskilled workers is threatened by automation, which other countries in the region (particularly Thailand, Vietnam and Malaysia) are investing in heavily. The need for cheap labor is disappearing just when the nation needs it most. From Natalie Obiko Pearson at Bloomberg:

Robots and automation are invigorating once-sleepy Indian factories, boosting productivity by carrying out low-skill tasks more efficiently. While in theory, improved output is good for economic growth, the trend is creating a headache for Prime Minister Narendra Modi: Robots are diminishing roles for unskilled laborers that he wants to put to work as part of his Make in India campaign aimed at creating jobs for the poor.

India’s largely uneducated labor force and broken educational system aren’t ready for the more complex jobs that workers need when their low-skilled roles are taken over by machines. Meanwhile, nations employing robots more quickly, such as China, are becoming even more competitive.

“The need for unskilled labor is beginning to diminish,” said Akhilesh Tilotia, head of thematic research at Kotak Institutional Equities in Mumbai and author of a book on India’s demographic impact. “Whatever education we’re putting in and whatever skill development we’re potentially trying to put out — does it match where the industry will potentially be five to 10 years hence? That linkage is reasonably broken in India.” …

In the race to create factory jobs, Modi isn’t just competing against Asian rivals. Robots are increasingly helping developed economies. In Switzerland, robots make toothbrushes for export; in Spain, they cut and pack lettuce heads — a job previously done by migrants; in Germany, they fill tubs of ice cream, and in the U.K. they assemble yogurt into multipacks at a rate of 80 a minute.

 


Easily the best article I’ve read about E.L. Doctorow in the wake of his death is Ron Rosenbaum’s expansive Los Angeles Review of Books piece about the late novelist. It glides easily from Charles Darwin to Thomas Nagel to the hard problem of consciousness to the “electrified meat” in our skulls to the “Jor-El warning” in Doctorow’s final fiction, Andrew’s Brain. That clarion call was directed at the Singularity, which the writer feared would end human exceptionalism, and, of course, it would. More a matter of when than if.

An excerpt:

Not to spoil the mood but I feel a kind of responsibility to pass on Doctorow’s Jor-El warning, even if I don’t completely understand it. I would nonetheless contend that — coming from a person as steeped as he is in the contemplation of the Mind and its possibilities, the close reading of consciousness, of that twain of brain and mind and the mysteries of their relationship — attention should be paid. It seemed like a message he wanted me to convey.

I asked him to expand upon the idea voiced in Andrew’s Brain that once a computer was created that could replicate everything in the brain, once machines can think as men, when we’ve achieved true “artificial intelligence” or “the singularity” as it’s sometimes called, it would be “catastrophic.”

“There is an outfit in Switzerland,” he says. “And this is a fact — they’re building a computer to emulate a brain. The theory is, of course, complex. There are billions of things going on in the brain but they take the position that the number of things is finite and that finally you can reach that point. Of course there’s a lot more work to do in terms of the brain chemistry and so on. So Andrew says to Doc ‘the twain will remain.’

“But later he has this revelation because he’s read, as I had, a very responsible scientist saying that it was possible someday for computers to have consciousness. That was said in a piece by a very respected neuroscientist by the name of Gerald Edelman. So the theory is this: If we do ever figure out how the brain becomes what we understand as consciousness, our feelings, our wishes, our desires, dreams — at that point we will know enough to simulate with a computer the human brain — and the computer will achieve consciousness. That is a great scientific achievement if it ever occurs. But if it does, all the old stories are gone. The Bible, everything.”

“Why?”

“Because the idea of the exceptionalism of the human mind is no longer exceptional. And you’re not even dealing with the primary consciousness of animals, of different degrees of understanding. You’re talking about a machine that could now think, and the dominion of the human mind no longer exists. And that’s disastrous because it’s earth-shaking. I mean, imagine.”•


Transhumanist Presidential candidate Zoltan Istvan penned a Vice article about the influence next-wave technologies may have on violent crime, which he views largely as a form of mental disease. A lot of it is pretty far out there–cranial implants modifying behavior, death-row inmates choosing to be cryogenically frozen, etc. I’ll grant that he’s right on two points:

1) Criminal behavior is modified already in many cases by prescription drugs and psychiatry.

2) Surveillance and tracking, for all the issues they bring, will make it increasingly difficult to stealthily commit traditional crimes.

But debates about cerebral reconditioning and lobotomy? Yikes. Sounds almost criminal.

From Istvan:

One other method that could be considered for death row criminals is cryonics. The movie Minority Report, which features precogs who can see crime activity in the future, shows other ways violent criminals are dealt with: namely a form of suspended animation where criminals dream out their lives. So the concept isn’t unheard of. With this in mind, maybe violent criminals even today should legally be given the option for cryonics, to be returned to a living state in the future where the reconditioning of the brain and new preventative technology—such as ubiquitous surveillance—means they could no longer commit violent acts.•

Speaking of extreme surveillance—that rapidly growing field of technology also presents near-term alternatives for criminals on death row that might be considered sufficient punishment. We could permanently track and monitor death row criminals. And we could have an ankle brace (or implant) that releases a powerful tranquilizer if violent behavior is reported or attempted.

Surveillance and tracking of criminals would be expensive to monitor, but perhaps in five to 10 years time basic computer recognition programs in charge of drones might be able to do the surveillance affordably. In fact, it might be cheapest just to have a robot follow a violent criminal around all the time, another technology that also should be here in less than a decade’s time. Violent criminals could, for example, only travel in driverless cars approved and monitored by local police, and they’d always be accompanied by some drone or robot caretaker.•

 


We’ll need to learn how to grow food in space if we’re to inhabit other planets, but such otherworldly experiments will be helpful down here on the home base since we’ll need to produce more food with less impact on the environment. 

For the first time, astronauts are supplementing their menus with vegetables they’ve grown in microgravity environments. From a NASA press release:

Fresh food grown in the microgravity environment of space officially is on the menu for the first time for NASA astronauts on the International Space Station. Expedition 44 crew members, including NASA’s one-year astronaut Scott Kelly, are ready to sample the fruits of their labor after harvesting a crop of “Outredgeous” red romaine lettuce Monday, Aug. 10, from the Veggie plant growth system on the nation’s orbiting laboratory.

The astronauts will clean the leafy greens with citric acid-based, food safe sanitizing wipes before consuming them. They will eat half of the space bounty, setting aside the other half to be packaged and frozen on the station until it can be returned to Earth for scientific analysis.

NASA’s plant experiment, called Veg-01, is being used to study the in-orbit function and performance of the plant growth facility and its rooting “pillows,” which contain the seeds.

NASA is maturing Veggie technology aboard the space station to provide future pioneers with a sustainable food supplement – a critical part of NASA’s Journey to Mars. As NASA moves toward long-duration exploration missions farther into the solar system, Veggie will be a resource for crew food growth and consumption. It also could be used by astronauts for recreational gardening activities during deep space missions.•

 

From the June 6, 1857 Brooklyn Daily Eagle:


 


It would be great to ban autonomous-weapons systems, but you don’t really get to govern too far into the future from the present. Our realities won’t be tomorrow’s, and I fear that sooner or later the possible becomes the plausible. Hopefully, we can at least kick that can far enough down the road so that everyone will be awakened to the significant risks before they’ve been realized. As Peter Asaro makes clear in a Scientific American essay, there will be grave consequences should warfare be robotized. An excerpt:

Autonomous weapons pose serious threats that, taken together, make a ban necessary. There are concerns whether AI algorithms could effectively distinguish civilians from combatants, especially in complex conflict environments. Even advanced AI algorithms would lack the situational understanding or the ability to determine whether the use of violent force was appropriate in a given circumstance or whether the use of that force was proportionate. Discrimination and proportionality are requirements of international law for humans who target and fire weapons but autonomous weapons would open up an accountability gap. Because humans would no longer know what targets an autonomous weapon might select, and because the effects of a weapon may be unpredictable, there would be no one to hold responsible for the killing and destruction that results from activating such a weapon.

Then, as the Future of Life Institute letter points out, there are threats to regional and global stability as well as humanity. The development of autonomous weapons could very quickly and easily lead to arms races between rivals. Autonomous weapons would reduce the risks to combatants, and could thus reduce the political risks of going to war, resulting in more armed conflicts. Autonomous weapons could be hacked, spoofed and hijacked, and directed against their owners, civilians or a third party. Autonomous weapons could also initiate or escalate armed conflicts automatically, without human decision-making. In a future where autonomous weapons fight autonomous weapons the results would be intrinsically unpredictable, and much more likely lead to the mass destruction of civilians and the environment than to the bloodless wars that some envision. Creating highly efficient automated violence is likely to lead to more violence, not less.

There is also a profound moral question at stake.•

 


When Garry Kasparov was defeated by Deep Blue, a key breaking point was his mistaking a glitch in his computer opponent for a human level of understanding. These strange behaviors can throw us off our game, but perhaps they can also shed light. In “Artificial Intelligence Is Already Weirdly Human,” David Berreby’s Nautilus article, the author believes that neural-network oddities, something akin to AI meeting ET, might be useful. An excerpt:

Neural nets sometimes make mistakes, which people can understand. (Yes, those desks look quite real; it’s hard for me, too, to see they are a reflection.) But some hard problems make neural nets respond in ways that aren’t understandable. Neural nets execute algorithms—a set of instructions for completing a task. Algorithms, of course, are written by human beings. Yet neural nets sometimes come out with answers that are downright weird: not right, but also not wrong in a way that people can grasp. Instead, the answers sound like something an extraterrestrial might come up with.

These oddball results are rare. But they aren’t just random glitches. Researchers have recently devised reliable ways to make neural nets produce such eerily inhuman judgments. That suggests humanity shouldn’t assume our machines think as we do. Neural nets sometimes think differently. And we don’t really know how or why.

That can be a troubling thought, even if you aren’t yet depending on neural nets to run your home and drive you around. After all, the more we rely on artificial intelligence, the more we need it to be predictable, especially in failure. Not knowing how or why a machine did something strange leaves us unable to make sure it doesn’t happen again.

But the occasional unexpected weirdness of machine “thought” might also be a teaching moment for humanity. Until we make contact with extraterrestrial intelligence, neural nets are probably the ablest non-human thinkers we know.•
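
The “reliable ways” Berreby alludes to are, at least in part, what researchers call adversarial examples: inputs nudged in the direction a model is most sensitive to until it changes its answer. Here’s a toy illustration of my own (not from the article) using a small logistic-regression classifier; with high-dimensional inputs like images, the equivalent nudge can be too small for a person to notice.

```python
# Toy sketch of the "adversarial example" recipe: nudge an input along the
# direction the model is most sensitive to so a trained classifier flips its
# answer. In this 2-D toy the nudge is visible; in high dimensions it need not be.
import numpy as np

rng = np.random.default_rng(2)

# Train a tiny logistic-regression classifier to separate two Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200, dtype=float)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(X)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))

# Take a confidently classified point and apply a signed-gradient step toward
# the other class (the fast-gradient-sign idea).
x = np.array([2.0, 2.0])
epsilon = 2.5                      # size of the nudge, chosen for illustration
x_adv = x - epsilon * np.sign(w)   # move against the class-1 direction

print("original input:", x, "score", round(predict(x), 3))
print("nudged input  :", x_adv, "score", round(predict(x_adv), 3))
```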


Humans are so terrible at hiring other humans for jobs that it seems plausible software couldn’t do much worse. I think that will certainly be true eventually, if it isn’t already, though algorithms won’t likely be much better at identifying non-traditional candidates with deeply embedded talents. Perhaps a human-machine hybrid à la freestyle chess would work best for the foreseeable future?

In arguing that journalists aren’t being rigorous enough when reporting on HR software systems, Andrew Gelman and Kaiser Fung, writing in the Daily Beast, point out that data doesn’t necessarily mitigate bias. An excerpt:

Software is said to be “free of human biases.” This is a false statement. Every statistical model is a composite of data and assumptions; and both data and assumptions carry biases.

The fact that data itself is biased may be shocking to some. Occasionally, the bias is so potent that it could invalidate entire projects. Consider those startups that are building models to predict who should be hired. The data to build such machines typically come from recruiting databases, including the characteristics of past applicants, and indicators of which applicants were successful. But this historical database is tainted by past hiring practices, which reflected a lack of diversity. If these employers never had diverse applicants, or never made many minority hires, there is scant data available to create a predictive model that can increase diversity! Ironically, to accomplish this goal, the scientists should code human bias into the software.•
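
The point about tainted training data is easy to demonstrate with a few lines of code. Below is a purely hypothetical sketch of my own (not from Gelman and Fung’s piece): hiring records are simulated with a biased rule baked in, a plain logistic regression is fit to them, and the resulting model scores two identically skilled candidates differently because of the group they belong to.

```python
# Hypothetical sketch of how biased historical hiring data trains a biased model.
# All names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two equally skilled applicant groups (0 and 1); skill is drawn identically.
group = rng.integers(0, 2, n)
skill = rng.normal(size=n)

# "Historical" hiring decisions: skill matters, but group 1 also had to clear a
# higher bar -- the human bias baked into the recruiting database.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(float)

# Fit a plain logistic regression to the tainted records: P(hired | skill, group).
X = np.column_stack([np.ones(n), skill, group])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

# Score two candidates with identical skill, differing only in group.
candidates = np.array([[1.0, 0.5, 0.0],
                       [1.0, 0.5, 1.0]])
scores = 1 / (1 + np.exp(-candidates @ w))
print("predicted hire probability, group 0:", round(scores[0], 2))
print("predicted hire probability, group 1:", round(scores[1], 2))
```

The model hasn’t discovered anything about merit; it has simply memorized the old double standard, which is the authors’ point about why “free of human biases” is a false promise.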


The passage below from Rachel Nuwer’s BBC report about technological unemployment speaks to why I largely disagree with Jerry Kaplan that robotics will be far worse for male workers than female. There probably will be a difference, but if the machines come en masse in a compressed period of time, they come for most of us.

Oxford’s Carl Frey tells Nuwer that “overall, people should be happy that a lot of these jobs have actually disappeared,” when speaking of drudgery that’s heretofore been banished by electrical gadgets, but the new reality may mean a tremendous aggregate improvement enjoyed by relatively few. In the long-term, that may all work itself out, but we better be ready with solutions in the short- and medium-term.

The excerpt:

Self-driving trucks wouldn’t be good news for everyone, however. Critics point out that, should this breakthrough be realised, there will be a significant knock-on effect for employment. In the US, up to 3.5 million drivers and 5.2 million additional personnel who work directly within the industry would be out of a job. Additionally, countless pit stops along well-worn trucking routes could become ghost towns. Self-driving trucks, in other words, might wreck millions of lives and bring disaster to a significant sector of the economy.

Dire warnings such as these are frequently issued, not only for the trucking industry, but for the world’s workforce at large. As machines, software and robots become more sophisticated, some fear that we stand to lose millions of jobs. According to one unpublished study, the coming wave of technological breakthroughs endangers up to 47% of total employment in the US.

But is there any truth to such projections, and if so, how concerned should we be? Will the robots take over, rendering us all professional couch potatoes, as imagined in the film Wall-E, or will technological innovation give us the freedom to pursue more creative, rewarding endeavours?•


One positive outcome of our newly decentralized media is that all of society is now a long tail, with room for far more categories of beliefs and lifestyles, whether someone is Transgender or Libertarian or Atheist.

Case in point: Houston Texans star Arian Foster. Religion goes with football the way it does with war, perhaps because they’re two activities where you might want to pray you don’t get killed, but Foster, who’s played in the heart of the Bible Belt his entire college and pro career, doesn’t believe anyone is watching over him except for the replay assistant in the NFL booth. The former Muslim is now a devout atheist who offers a respectful Namaste bow after a TD but does not pray in a huddle. In an ESPN Magazine article, Tim Keown profiles the running back as he publicly discusses his lack of religion for the first time. An excerpt:

THE HOUSE IS a churn of activity. Arian’s mother, Bernadette, and sister, Christina, are cooking what they proudly call “authentic New Mexican food.” His older brother, Abdul, is splayed out on a room-sized sectional, watching basketball and fielding requests from the five little kids — three of them Arian’s — who are bouncing from the living room to the large playhouse, complete with slide, in the front room. I tell Abdul why I’m here and he says, “My brother — the anti-Tebow,” with a comic eye roll.

Arian Foster, 28, has spent his entire public football career — in college at Tennessee, in the NFL with the Texans — in the Bible Belt. Playing in the sport that most closely aligns itself with religion, in which God and country are both industry and packaging, in which the pregame flyover blends with the postgame prayer, Foster does not believe in God.

“Everybody always says the same thing: You have to have faith,” he says. “That’s my whole thing: Faith isn’t enough for me. For people who are struggling with that, they’re nervous about telling their families or afraid of the backlash … man, don’t be afraid to be you. I was, for years.”

He has tossed out sly hints in the past, just enough to give himself wink-and-a-nod deniability, but he recently decided to become a public face of the nonreligious.•


For all the great things the shift from newsprint to the Internet has brought us, one thing lost in that dynamic has been the ability for fledgling reporters–even veteran ones–to pay the bills, especially those wishing to write about important social issues. Like most of America, the middle is largely gone in journalism, the fear of falling having proved to be no mere paranoia.

In a Guardian piece, Barbara Ehrenreich writes about this new arrangement, the few haves and the many have-nots, particularly among those who wish to cover poverty in America. After all, what is the good of everybody having their own channel in a decentralized media if they can’t afford the electricity to power their laptop or recharge their smartphone? An excerpt:

In the last few years, I’ve gotten to know a number of people who are at least as qualified writers as I am, especially when it comes to the subject of poverty, but who’ve been held back by their own poverty. There’s Darryl Wellington, for example, a local columnist (and poet) in Santa Fe who has, at times, had to supplement his tiny income by selling his plasma – a fallback that can have serious health consequences. Or Joe Williams, who, after losing an editorial job, was reduced to writing for $50 a piece for online political sites while mowing lawns and working in a sporting goods store for $10 an hour to pay for a room in a friend’s house. Linda Tirado was blogging about her job as a cook at Ihop when she managed to snag a contract for a powerful book entitled Hand to Mouth (for which I wrote the preface). Now she is working on a “multi-media mentoring project” to help other working-class journalists get published.

There are many thousands of people like these – gifted journalists who want to address serious social issues but cannot afford to do so in a media environment that thrives by refusing to pay, or anywhere near adequately pay, its “content providers.” Some were born into poverty and have stories to tell about coping with low-wage jobs, evictions or life as a foster child. Others inhabit the once-proud urban “creative class,” which now finds itself priced out of its traditional neighborhoods, like Park Slope or LA’s Echo Park, scrambling for health insurance and childcare, sleeping on other people’s couches. They want to write – or do photography or documentaries. They have a lot to say, but it’s beginning to make more sense to apply for work as a cashier or a fry-cook.

This is the real face of journalism today: not million dollar-a-year anchorpersons, but low-wage workers and downwardly spiraling professionals who can’t muster up expenses to even start on the articles, photo-essays and videos they want to do, much less find an outlet to cover the costs of doing them.•


The success of China’s insta-cities is dubious even with the iron fist of authoritarianism set to crush dissenters, but dense “cities in a building” or “cities in the sky,” attempts at large-scale, ecologically friendly developments influenced by the work of the late Arcology designer Paolo Soleri, have a particularly spotty track record. Abu Dhabi’s Masdar City (even a diminished version) may prove the exception, but top-down developments seldom satisfy human desires, even if they’re ostensibly good for us.

In a smart Aeon essay, Jared Keller writes of Soleri’s Arizona desert dream and explores why its offshoots, potential goldmines, don’t pan out. An excerpt:

In 1956, Soleri and his wife Corolyn ‘Colly’ Woods moved just miles from Phoenix’s out-of-control suburban sprawl to set up an architectural workshop, dubbed Cosanti (from the Italian cosa and anti, or ‘before things’), in Paradise Valley to develop his unique philosophy of architecture. One of Soleri’s earliest visions was Mesa City, a proposed city the size of Manhattan with 2 million inhabitants. Over five years, Soleri would draw hundreds of feet of scrolls detailing the intricate structures and landscape of this hypothetical metropolis.

In 1970, Soleri finally broke ground on Arcosanti, an experimental city and ‘urban laboratory’ that has been under construction for nearly half a century. To the average visitor, Arcosanti looks like a college campus sprouting in the middle of the desert, molded from the red silt of the surrounding mesa. The complex is marked by a cluster of soaring stone apses, crafted in Soleri’s distinct, casting-inspired architectural style, designed to absorb sunlight and power the town’s energy grid. The majority of buildings are oriented to the south to capture the sun’s light and heat, while an open roof design yields maximum sunlight in the winter and shade in the summer. Artisans live and work in a densely packed compound, designed for maximum energy efficiency and sustainability. The community’s permanent residents keep greenhouses and agricultural fields, and income from bell-casting goes to maintaining the town’s infrastructure.

Arcosanti is as socially efficient as it is sustainable. The buildings and walkways are built in a more dynamic formation than a conventional city grid, not just to conserve resources but also to encourage increased social interaction between residents, forcing them to bump into each other in various open-air atriums, gardens and greenhouses. Living quarters are clustered in a honeycomb of sparse, minimalist apartments, all virtually identical. The open design and emphasis on sustainable living has created a distinctly hippy, communitarian vibe; the population of the town is mostly Soleri fanatics and bell-casting artisans. The city has never been officially finished, and while the current population wavers around 80, the town was designed to sustain some 5,000. …

Despite Soleri’s best efforts, it’s not clear that humanity is ready for the perfect architectural utopias he imagined.•

 


I think the received wisdom about spacesuits is that even the smallest tear in the fabric during a mission in outer space will lead to certain death. Not necessarily so, says Cathleen Lewis, curator at Smithsonian’s National Air and Space Museum, during a Reddit AMA. Three exchanges follow.

___________________

Question:

What would happen if a spacesuit were to be punctured while in space?

Cathleen Lewis:

First of all, there has never been a loss of an astronaut or cosmonaut due to a spacesuit failure. Second, please forget everything that you have seen in science fiction movies about spacesuit failures. They are usually overly dramatized and frequently wrong. There have been four documented cases of spacesuit failures in history. None resulted in deaths. Without a spacesuit and the oxygen necessary to breathe, an astronaut would immediately feel the nitrogen coming out of his fluids, almost like the tears and saliva were carbonated. After about 15 seconds, he would pass out and, without an emergency rescue, he would die within two minutes. The body would float in space and only very slowly lose body heat because there is no efficient way to radiate heat away from the body. In the case of a small puncture, usually the flesh would swell in the immediate area and stopper the hole. This can be extremely painful, but the victim would recover.

___________________

Question:

What are other countries’ space suits like compared to ours?

Cathleen Lewis:

Remarkably, even though all spacesuits perform similar functions, they do not look alike. When the Soviet Union designed a suit to carry men to the Moon, they opted for a single piece suit that the cosmonaut would climb in through a hinged backpack. The Russians maintain a similar design in the EVA suits that cosmonauts wear when they do spacewalks from the Russian node of the International Space Station. These dissimilarities result from differences in available materials, different senses of aesthetics, and differing attitudes about innovation and refinement of design. The Russians remain very conservative and have retained many of the features that they designed for their first suits over 50 years ago. On the U.S. side, there is a greater effort at matching the spacesuit to the spacecraft and the mission. There is also the contracting and bidding issue that complicates the American side, but I won’t go into that here. You should also look at the Chinese spacesuits. They are remarkably similar to the Russian launch and entry suits. One assumes that they learned this design from the years that they worked with the Soviets and Russians in preparation for their own human spaceflight program.

___________________

Question:

Regardless of accuracy, what is your favorite movie space suit?

Cathleen Lewis:

It hasn’t opened yet, but I am anxiously awaiting Ridley Scott’s The Martian. I loved the book and from the promotions, he seems to have gotten the spacesuit right. Usually in movies the helmets are too big. I understand that this is for filming and showing the actors’ faces, but it is a distracting feature for a spacesuit curator.

