Urban Studies


I think, sadly, that Aubrey de Grey will die soon, as will the rest of us. A-mortality is probably theoretically possible, though I think it will be a while. But the SENS radical gerontologist has probably done more than anyone to get people to rethink aging as a disease to be cured rather than an inevitability to be endured. In the scheme of things, that paradigm shift has enormous (and hopefully salubrious) implications. De Grey just did an Ask Me Anything on Reddit. A few exchanges follow.

__________________________

Question:

What is the likelihood that someone who is 40 (30? 20? 10?) today will have their life significantly extended to the point of practical immortality? 

Is it a slow, but rapidly rising confluence of things that are going to cause this, or is it something that is going to kind of snap into effect one day?

Will the technology be accessible to everyone, or will it be reserved for the rich?

What are your thoughts on cryonics?

What is your personal preferred method of achieving practical immortality? Nanotechnology? Cyborgs? Something else?

Aubrey de Grey:

I’d put it at 60, 70, 80, 90% respectively.

Kind of snap, in that we will reach longevity escape velocity.

For everyone, absolutely for certain.

Cryonics (not cryogenics) is a totally reasonable and valid research area and I am signed up with Alcor.

Anything that works! – but I expect SENS to get there first.

__________________________

Question:

Before defeating aging, what if we were to first defeat cardiovascular disease or cancer or Alzheimer’s disease? Do you think this would be enough to make people snap out of their “pro-aging trance” and be more optimistic about the feasibility & desirability of SENS and other rejuvenation therapies?

EDIT: Do you think people would be more convinced by more cosmetic rejuvenation therapies instead (reversal of hair loss/graying, reduction of wrinkles and spots in the skin)?

Aubrey de Grey:

Not a chance. People’s main problem is that they have a microbe in their brains called “aging” that they think means something distinct from diseases. The only way that will change is big life extension in mice.

__________________________

Question:

Long life for our ability to continue to develop ourselves, explore the world, gain knowledge, and create is great. The goal however has different paths, from genetic manipulation to body cyborgization. Some speak of mind uploading but who knows if that’s possible, as mind transfer implies dualism.

Is one preferable over another?

And what is your opinion on the potential of our unstable interconnected world to negatively impact our potential for progress from things like ecological collapse, global warming, etc.? I feel like it’s a race between disaster and scientific progress, can we outrun chaos? Or is this a false dichotomy, maybe the future is a world of suffering and a few individuals have military grade cyborg tech.

Aubrey de Grey:

I don’t think mind transfer necessarily implies dualism, and I’m all for exploring all options.

I am quite sure we can outrun chaos.

__________________________

Question:

I’ve been learning more and more lately about the work that you do in the fight to end aging, and fully believe that it is both possible and just over the horizon. How can the general public get involved in the fight other than donating?

Aubrey de Grey:

Money is the bottleneck, I’m afraid, so the next best thing to donating is getting others to donate.

__________________________

Question:

We really appreciate all your work. Some people have expressed concerns that these anti-aging techniques and treatments won’t be available to everyone, but only to the extremely wealthy. Are there strategies to prevent this?

Aubrey de Grey:

Yes – they are called elections. Those in power want to stay there.•


Like most Atlantic readers, I go to the site for the nonstop Shell ads but stay for the articles. 

Jerry Kaplan, author of Humans Need Not Apply, has written a piece for the publication which argues that women will fare much better than men if technological unemployment becomes widespread and entrenched, the gender biases among jobs and careers favoring them. I half agree with him. 

Take, for instance, his argument that autonomous cars will decimate America’s three million truck drivers (overwhelmingly men) but not disrupt the nation’s three million secretaries (overwhelmingly women). That’s not exactly right. The trucking industry, when you account for support work, is estimated to provide eight million jobs, including secretarial positions. Truckers spend cash at diners and coffee shops and such, providing jobs that are still more often filled by women. And just because autonomous trucks won’t eliminate secretarial positions, that doesn’t mean other technologies won’t. That effort to displace office-support staff has been a serious goal for at least four decades, and the technology is probably ready to do so now.

This, of course, also doesn’t account for the many women who’ve entered into white-collar professions long dominated by men, many of which are under threat. But I think Kaplan is correct in saying that the middle-class American male is a particularly endangered species if this new reality takes hold, and there won’t likely be any organic solution coming from within our current economic arrangement.

Kaplan’s opening:

Many economists and technologists believe the world is on the brink of a new industrial revolution, in which advances in the field of artificial intelligence will obsolete human labor at an unforgiving pace. Two Oxford researchers recently analyzed the skills required for more than 700 different occupations to determine how many of them would be susceptible to automation in the near future, and the news was not good: They concluded that machines are likely to take over 47 percent of today’s jobs within a few decades.

This is a dire prediction, but one whose consequences will not fall upon society evenly. A close look at the data reveals a surprising pattern: The jobs performed primarily by women are relatively safe, while those typically performed by men are at risk.

It should come as no surprise that despite progress on equality in the labor force, many common professions exhibit a high degree of gender bias. For instance, of the 3 million truck drivers in the U.S., more than 95 percent are men; of the nearly 3 million secretaries and administrative assistants, more than 95 percent are women. Autonomous vehicles are a not-too-distant possibility, and when they arrive, those drivers’ jobs will evaporate; office-support workers suffer no such imminent threat.•

 


Even if it’s difficult to believe now, imagine someone as recently as 2000 suggesting that cars powered by internal combustion engines would be out of California showrooms by 2030 and off the state’s highways by 2050, that they’d be replaced by zero-emission vehicles, one-hundred percent of them. Even think about someone hatching that plan ten years ago, when the electric car was considered all but dead. The seemingly impossible dream looks likely to become a reality thanks to great leaps in technology running headlong into the unique politics of a state demanding change. Amusingly enough, it’s something Governor Ronald Reagan tried to jump-start in 1969.

At the heart of the push is Mary Nichols, Chair of the California Air Resources Board. The opening of John Lippert’s Bloomberg article about how she’s ending Big Auto’s business as usual in the Golden State:

Sergio Marchionne had a funny thing to say about the $32,500 battery-powered Fiat 500e that his company markets in California as “eco-chic.” “I hope you don’t buy it,” he told his audience at a think tank in Washington in May 2014. He said he loses $14,000 on every 500e he sells and only produces the cars because state rules re­quire it. Marchionne, who took over the bailed-out Chrysler in 2009 to form Fiat Chrysler Automobiles, warned that if all he could sell were electric vehicles, he would be right back looking for another govern­ment rescue.

So who’s forcing Marchionne and all the other major automakers to sell mostly money-losing electric vehicles? More than any other person, it’s Mary Nichols. She’s run the California Air Resources Board since 2007, championing the state’s zero-emission-vehicle quotas and backing Pres­ident Barack Obama’s national mandate to double average fuel economy to 55 miles per gallon by 2025. She was chairman of the state air regulator once before, a generation ago, and cleaning up the famously smoggy Los Angeles skies is just one accomplish­ment in a four-decade career.

Nichols really does intend to force au­tomakers to eventually sell nothing but electrics. In an interview in June at her agency’s heavy-duty-truck laboratory in downtown Los Angeles, it becomes clear that Nichols, at age 70, is pushing regula­tions today that could by midcentury all but banish the internal combustion engine from California’s famous highways. “If we’re going to get our transportation system off petroleum,” she says, “we’ve got to get people used to a zero-emissions world, not just a little-bit-better version of the world they have now.”•


In his Pacific Standard piece, “The Second Industrious Revolution,” Louis Hyman uses the phrase “precarious work” to describe the new Gig Economy, which would be destabilizing if widespread. With so much of Labor prone to automation and robotization and no suitable replacement work presently in view, it’s time for Americans to start envisioning solutions that don’t impede progress but instead create stability that allows us to thrive within the new realities. Like Andrew McAfee, Erik Brynjolfsson, Martin Ford and numerous others, Hyman believes basic income may become a necessity. An excerpt:

All sorts of exciting technologies will reinforce this industrious revolution, but it is not the technology that deserves our attention. It is the people whose lives will be turned upside down. Scholars and activists are concerned about this rise in precarious work, but instead of fighting the work, we need to understand how to empower workers to take advantage of this revolution—before it is too late.

The first industrious and industrial revolutions inaugurated several centuries of social dislocation, as well as unprecedented economic growth. Not until the mid-20th century, in the heyday of post-war capitalism, did we find a way to create economic security in a wage-work economy: a steady paycheck, health insurance, and home ownership. But, almost as soon as these happened, they began to go away.

We should use this coming crisis as an opportunity to return to our core American values. An older American Dream, the Jeffersonian vision of independent farmers, was promoted by the federal government in the 19th century through the Homestead Act, which provided farm land to our citizens. It was a way to push back against the rise of wage labor, which was seen as dependent and an antithesis to American values.

In today’s digital economy, we need a comparable act that empowers us to make our own way in business. While we often discuss the American Dream in terms of consumption, there is another American Dream that is more visceral: control over one’s work. The longing many Americans feel for owning their own business, the celebration of entrepreneurship in our culture, and our homesteading heritage are not just about money—or buying houses. Yet for several generations we have made it easy to own a home, but hard to own our own businesses.

Workers don’t need land, but they do need other kinds of support—health insurance, skilled education, maybe even a basic income—to take the risks upon which success depends.•


Watson has a way with words and Siri sounds sexy, but Cyc is almost silent. Why so silent, Cyc?

Cycorp’s ambitious project to create the first true AI has been ongoing for 31 years, much of the time in seclusion. A 2014 Business Insider piece by Dylan Love marked the three-decade anniversary of the odd endeavor, summing up the not-so-modest goal this way: to “codify general human knowledge and common sense.” You know, that thing. Every robot and computer could then be fed the system to gain human-level understanding.

The path the company and its CEO Doug Lenat have chosen in pursuit of this goal is to painstakingly teach Cyc every grain of knowledge until the Sahara has been formed. Perhaps, however, it’s all a mirage. Because the work has been conducted largely in quarantine, there’s been little outside review of the “patient.” But even if this artificial-brain operation is a zero rather than a HAL 9000, a dream unfulfilled, it still says something fascinating about human beings.

An excerpt from “The Know-It-All Machine,” Clive Thompson’s really fun 2001 Lingua Franca cover story on the subject: 

SINCE THIS is 2001, [Doug] Lenat has spent the year fielding jokes about HAL 9000, the fiendishly intelligent computer in Arthur C. Clarke’s 2001: A Space Odyssey. On one occasion, when television reporters came to film Cyc, they expected to see a tall, looming structure. But because Cyc doesn’t look like much—it’s just a database of facts and a collection of supporting software that can fit on a laptop—they were more interested in the company’s air conditioner. “It’s big and has all these blinking lights,” Lenat says with a laugh. “Afterwards, we even put a sign on it saying, CYC 2001, BETTER THAN HAL 9000.”

But for all Lenat’s joking, HAL is essentially his starting point for describing the challenges facing the creation of commonsense AI. He points to the moment in the film 2001 when HAL is turned on—and its first statement is “Good morning, Dr. Chandra, this is HAL. I’m ready for my first lesson.”

The problem, Lenat explains, is that for a computer to formulate sentences, it can’t be starting to learn. It needs to already possess a huge corpus of basic, everyday knowledge. It needs to know what a morning is; that a morning might be good or bad; that doctors are typically greeted by title and surname; even that we greet anyone at all. “There is just tons of implied knowledge in those two sentences,” he says.

This is the obstacle to knowledge acquisition: Intelligence isn’t just about how well you can reason; it’s also related to what you already know. In fact, the two are interdependent. “The more you know, the more and faster you can learn,” Lenat argued in his 1989 book, Building Large Knowledge-Based Systems, a sort of midterm report on Cyc. Yet the dismal inverse is also true: “If you don’t know very much to begin with, then you can’t learn much right away, and what you do learn you probably won’t learn quickly.”

This fundamental constraint has been one of the most frustrating hindrances in the history of AI. In the 1950s and 1960s, AI experts doing work on neural networks hoped to build self-organizing programs that would start almost from scratch and eventually grow to learn generalized knowledge. But by the 1970s, most researchers had concluded that learning was a hopelessly difficult problem, and were beginning to give up on the dream of a truly human, HAL-like program. “A lot of people got very discouraged,” admits John McCarthy, a pioneer in early AI. “Many of them just gave up.”

Undeterred, Lenat spent eight years of Ph.D. work—and his first few years as a professor at Stanford in the late 1970s and early 1980s—trying to craft programs that would autonomously “discover” new mathematical concepts, among other things. Meanwhile, most of his colleagues turned their attention to creating limited, task-specific systems that were programmed to “know” everything that was relevant to, say, monitoring and regulating elevator movement. But even the best of these expert systems are prone to what AI theorists call “brittleness”—they fail if they encounter unexpected information. In one famous example, an expert system for handling car loans issued a loan to an eighteen-year-old who claimed that he’d had twenty years of job experience. The software hadn’t been specifically programmed to check for this type of discrepancy and didn’t have the common sense to notice it on its own. “People kept banging their heads against this same brick wall of not having this common sense,” Lenat says.

By 1983, however, Lenat had become convinced that commonsense AI was possible—but only if someone were willing to bite the bullet and codify all common knowledge by brute force: sitting down and writing it out, fact by fact by fact. After conferring with MIT’s AI maven Marvin Minsky and Apple Computer’s high-tech thinker Alan Kay, Lenat estimated the project would take tens of millions of dollars and twenty years to complete.

“All my life, basically,” he admits. He’d be middle-aged by the time he could even figure out if he was going to fail. He estimated he had only between a 10 and 20 percent chance of success. “It was just barely doable,” he says.

But that slim chance was enough to capture the imagination of Admiral Bobby Inman, a former director of the National Security Agency and head of the Microelectronics and Computer Technology Corporation (MCC), an early high-tech consortium. (Inman became a national figure in 1994 when he withdrew as Bill Clinton’s appointee for secretary of defense, alleging a media conspiracy against him.) Inman invited Lenat to work at MCC and develop commonsense AI for the private sector. For Lenat, who had just divorced and whose tenure decision at Stanford had been postponed for a year, the offer was very appealing. He moved immediately to MCC in Austin, Texas, and Cyc was born.•


We’ll likely be richer and healthier in the long run because of the Digital Revolution, but before the abundance, there will probably be turbulence.

A major reorganization of Labor among hundreds of millions promises to be bumpy, a situation requiring deft political solutions in a time not known for them. It’s great if Weak AI can handle the rote work and free our hands, but what will we do with them then? And how will we balance a free-market society that’s also a highly automated one?

In a Washington Post piece, Matt McFarland wisely assesses the positives and negatives of the new order. Two excerpts follow.

_______________________

Just as the agrarian and industrial revolutions made us more efficient and created more value, it follows that the digital revolution will do the same.

[Geoff] Colvin believes as the digital revolution wipes out jobs, new jobs will place a premium on our most human traits. These should be more satisfying than being a cog on an assembly line.

“For a long period, really dating to the beginning of the Industrial Revolution, our jobs became doing machine-like work, that the machines of the age couldn’t do it. The most obvious example being in factories and assembly-line jobs,” Colvin told me. “We are finally achieving an era in which the machines actually can do the machine-like work. They leave us to do the in-person, face-to-face work.”

_______________________

If self-driving cars and automated drone delivery become a reality, what happens to every delivery driver, truck driver and cab driver? Swaths of the population won’t be able to be retrained with skills needed in the new economy. Inequality will rise.

“One way or another it’s going to be kind of brutal,” [Jerry] Kaplan said. “When you start talking about 30 percent of the U.S. population being on the edge of losing their jobs, it’s not going to be a pleasant life and you’re going to get this enormous disparity between the haves and the have nots.”•

 


From the March 28, 1886 Brooklyn Daily Eagle:


The good and bad part of decentralization is the same: There is no center. That allows for all sorts of new possibilities, some of them good.

As I’ve argued before, the U.S. government, that reviled and feared thing, will have less and less ability to control it all, despite surveillance. You don’t have to be a paranoid Birther to see this new reality being born. Even the most suspicious among us may someday long for a strong federal presence.

Speaking of the center not holding: David Amsden’s excellent New York Times Magazine article “Who Runs the Streets of New Orleans?” looks at the privatization of some policing in the French Quarter, a remarkable square mile that’s been marred by mayhem since the destabilizing tragedy of Hurricane Katrina. In response, a single wealthy New Orleans citizen, Sidney Torres, who made his treasure hauling trash, entered into a tech-forward joint effort with the city to fight crime. It may ultimately make things safer, but, of course, there are many dangers in privatizing policing, in having an unelected individual with money dictate policy based on personal beliefs or even whims. There can be a mission creep that doesn’t just target criminals, but also the impoverished and minorities, creating a tale of two cities. While that may not sound too different than current public policing in America, at least elected officials have to answer to those issues.

An excerpt:

In the United States, private police officers currently outnumber their publicly funded counterparts by a ratio of roughly three to one. Whereas in past decades the distinction was often clear — the rent-a-cop vs. the real cop — today the boundary between the two has become ‘‘messy and complex,’’ according to a study last year by Harvard’s Kennedy School of Government. Torres’s task force is best understood in this context, one where the larger merging of private and public security has resulted in an extensive retooling of the nation’s policing as a whole. As municipal budgets have stagnated or plummeted, state and local governments have taken to outsourcing police work to the private sector, resulting in changes that have gone largely unnoticed by the public they’re tasked with protecting.

A recent report by the Justice Department, which has become one of the most prominent advocates of such collaborative efforts, identified 450 partnerships in the country between law enforcement and the private sector. Nationwide, there are now more than 1,200 ‘‘business improvement districts’’ in which businesses pay self-imposed taxes to fund improved services, including security. In many cases, officers covered by corporate entities have become indistinguishable from those paid for by taxpayers. Last year, Facebook entered into a three-year partnership with the Menlo Park, Calif., Police Department in which the social-media giant agreed to pay the $194,000 salary of a police officer whose job was going to be cut. One of the largest private security forces in the nation today is the University of Chicago Police, which has full jurisdiction over 65,000 residents, only 15,000 of whom are students. More than 100 public housing projects in Boston are patrolled by private security, including one company that has been authorized to arrest suspects under certain circumstances.•


In a great Matter piece about the nightmare of climate change, Margaret Atwood revisits a 2009 Die Zeit article she wrote about possible outcomes for a future in which the world is no longer based on oil: one of accommodation, one of ruin and another in which some states are more capable of managing a post-peak tomorrow than others, a planet still inhabited by haves and have-nots, though one rewritten according to new realities.

Atwood asks these questions, among others: “Can we change our energy system? Can we change it fast enough to avoid being destroyed by it?” Despite it all, the novelist holds out hope that we can master an “everything change,” as she terms it.

An excerpt:

Then there’s Picture Two. Suppose the future without oil arrives very quickly. Suppose a bad fairy waves his wand, and poof! Suddenly there’s no oil, anywhere, at all.

Everything would immediately come to a halt. No cars, no planes; a few trains still running on hydroelectric, and some bicycles, but that wouldn’t take very many people very far. Food would cease to flow into the cities, water would cease to flow out of the taps. Within hours, panic would set in.

The first result would be the disappearance of the word “we”: except in areas with exceptional organization and leadership, the word “I” would replace it, as the war of all against all sets in. There would be a run on the supermarkets, followed immediately by food riots and looting. There would also be a run on the banks — people would want their money out for black market purchasing, although all currencies would quickly lose value, replaced by bartering. In any case the banks would close: their electronic systems would shut down, and they’d run out of cash.

Having looted and hoarded some food and filled their bathtubs with water, people would hunker down in their houses, creeping out into the backyards if they dared because their toilets would no longer flush. The lights would go out. Communication systems would break down. What next? Open a can of dog food, eat it, then eat the dog, then wait for the authorities to restore order. But the authorities — lacking transport — would be unable to do this.•


The year before “Professor” Alphonse King, whose academic credentials were questionable, reportedly crossed the Niagara River on a water bicycle, he tried to traverse its channel with tin shoes of his own invention. Each weighed 30 pounds, and the results were unsurprisingly mixed. From the December 12, 1886 New York Times:

Buffalo–An attempt was made to-day to outrival the feats of Donovan, Graham, Hanslitt, Potts and Allen in braving the terrors of Niagara, which though a failure in one way, was a success in another. Mr. Alphonse King, who is the inventor of a water shoe, gave exhibitions some years ago in this country and Mexico and not long ago in Europe. He gave one in the Crystal Palace in London, and while there attracted the attention of Harry Webb, an old-time manager, who made him an offer of a year’s engagement to come to this country. While here some time ago Mr. King had looked over Niagara River below the Falls and believed that he could walk across the channel on the patent shoes. He came to this country four weeks ago and has since that time been in New-York City practicing for the trip. While there, Thomas Bowe, hearing of King’s determination to attempt the trip, made a wager of $1,500 with Webb that King could not walk 100 feet in the current. The money was deposited with a New-York newspaper, and on Friday afternoon Messrs. King and Webb, accompanied by A.C. Poole, of Poole’s Eighth Street Theatre, reached the Falls.

The trip to-day gave King two cold water baths, and demonstrated that while he could walk with or against the current all right it was impossible to walk across the river because of the eddies, which twice upset them. He retired confident that what he set out to do could not be done. King’s ‘shoes’ are of tin, 32 inches long, 8 inches wide, sloping at the top, and 9 inches deep. Each weighs 30 pounds. They are air-tight and have in the middle an opening large enough to admit the feet of the wearer. At the bottom are a series of paddles, which operate automatically as fins.•

Predictive medicine powered by Big Data can alert you if you’re unwittingly headed downhill–and we all are to varying degrees–but what if this information isn’t just between you and your doctor?

CVS is phasing in IBM’s Watson to track trends in customer wellness. Seems great, provided you aren’t required to surrender your “health score” to get certain jobs the way you now must sometimes submit to drug tests. What if you needed a certain number to get a position the way you’re required to have a good credit score to get a loan? Seems unlikely, but tools are only as wise as the people who govern them in any particular era. Like most of the new normal, this innovation has the potential for great good–and otherwise.

From Ariana Eunjung Cha at the Washington Post:

[CVS Chief Medical Officer Troyen A.] Brennan said he could imagine the creation of mobile apps that would integrate information from fitness trackers and allow Watson to identify when a person’s activity level drops substantially and flag that as an indicator of something else going on. Or perhaps act as a virtual adviser for pharmacy or clinic staff that could help them identify “early signals” for when interventions may not be working and additional measures should be considered.

“Basically, if you can identify places to intervene and intervene early, you help people be healthier and avoid costly outcomes,” he said.

He added that the key to making these types of systems work will be to open lines of communication between a pharmacist, clinic staff and a patient’s physician, and that technology can help facilitate this dialogue.•

 


Should I say a short story about climate-change apocalypse is fun? Choire Sicha’s brief, new Matter fiction, “Table of Contents,” certainly is, though it’s suitably sobering as well. The author imagines a scenario in which the seas have risen in a bad mood, and the narrator tries to aid the survivors by printing key entries from our modern Library of Alexandria, Wikipedia, before the plug is pulled. The opening:

I don’t know which will last longer, the paper or the ink. Eventually the paper will burn or the ink will fade, so read this all as fast as you can.

But of Wikipedia’s five-million articles, these 40,000 seemed to be the most super-important.

They’re crammed in these eight plastic-bagged boxes, because I printed it all single-side. That way you can make notes on the back! For instance, definitely keep track of who has babies with whom. (See the page for Incest, then check Consanguinity.) I put in some Bics, they should last a few… years? No idea.

After that, you can look up the Pen page.

I also put in Pen (Enclosure) in case you domesticate animals later.

In any event, please do not leave the entirety of portable human knowledge out in the rain.•


Algorithms may be biased, but people certainly are. 

Financial-services companies are using non-traditional data cues to separate signal from noise in determining who should receive loans. I’d think in the short term such code, if written well, may be fairer. It certainly has the potential, though, to drift in the wrong direction over time. If our economic well-being is based on real-time judgments of our every step, then we could begin mimicking behaviors that corporations desire, and, no, corporations still aren’t people.

From Quentin Hardy at the New York Times:

Douglas Merrill, the founder and chief executive of ZestFinance, is a former Google executive whose company writes loans to subprime borrowers through nonstandard data signals.

One signal is whether someone has ever given up a prepaid wireless phone number. Where housing is often uncertain, those numbers are a more reliable way to find you than addresses; giving one up may indicate you are willing (or have been forced) to disappear from family or potential employers. That is a bad sign. …

Mr. Merrill, who also has a Ph.D. in psychology…thinks that data-driven analysis of personality is ultimately fairer than standard measures.

“We’re always judging people in all sorts of ways, but without data we do it with a selection bias,” he said. “We base it on stuff we know about people, but that usually means favoring people who are most like ourselves.” Familiarity is a crude form of risk management, since we know what to expect. But that doesn’t make it fair.

Character (though it is usually called something more neutral-sounding) is now judged by many other algorithms. Workday, a company offering cloud-based personnel software, has released a product that looks at 45 employee performance factors, including how long a person has held a position and how well the person has done. It predicts whether a person is likely to quit and suggests appropriate things, like a new job or a transfer, that could make this kind of person stay.

Tags: ,

The world is richer and smarter and healthier than ever, which is great, except that many of the forces that enabled these wonderful advances may also bring about the end of the species or at least cause an unprecedented die-off and severely diminish life for the “lucky” survivors. That’s the catch. 

In an Economist essay, Christoph Rheinberger and Nicolas Treich write of the ineffectiveness of traditional tools in assessing the cost of a potential climate-change disaster, which hampers attempts to mitigate risks, the unimaginable being incalculable. An excerpt:

Interestingly, the Pope’s letter recognises that “decisions must be made based on a comparison of the risks and benefits foreseen for the various possible alternatives”.

However, estimating these benefits means that we need to determine the value of a reduction in preventing a possible future catastrophic risk. This is a thorny task. Martin Weitzman, an economist at Harvard University, argues that the expected loss to society because of catastrophic climate change is so large that it cannot be reliably estimated. A cost-benefit analysis—economists’ standard tool for assessing policies—cannot be applied here as reducing an infinite loss is infinitely profitable. Other economists, including Kenneth Arrow of Stanford University and William Nordhaus of Yale University, have examined the technical limits of Mr Weitzman’s argument. As the interpretation of infinity in economic climate models is essentially a debate about how to deal with the threat of extinction, Mr Weitzman’s argument depends heavily on a judgement about the value of life.

Economists estimate this value based on people’s personal choices: we purchase bicycle helmets, pay more for a safer car, and receive compensation for risky occupations. The observed trade-offs between safety and money tell us about society’s willingness to pay for a reduction in mortality risk. Hundreds of studies indicate that people in developed countries are collectively willing to pay a few million dollars to avoid an additional statistical death. For example, America’s Environmental Protection Agency recommends using a value of around $8m per fatality avoided. Similar values are used to evaluate vaccination programmes and prevention of traffic accidents or airborne diseases.

Mr Posner multiplies the value of life by an estimate of Earth’s future population and obtains an illustrative figure of $336m billion as the cost of human extinction. Nick Bostrom, a philosopher at Oxford University, argues that this approach ignores the value of life of unborn generations and that the tentative figure should be much larger—perhaps infinitely so.•
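The expected-cost arithmetic the excerpt describes is simply value-per-life times population times probability. A minimal sketch, using the EPA’s roughly $8m figure cited above and a hypothetical future population (Posner’s actual inputs aren’t given here):

```python
VALUE_PER_LIFE = 8e6       # dollars; the EPA figure cited in the excerpt
FUTURE_POPULATION = 1e10   # hypothetical head count, for illustration only

def expected_extinction_cost(probability):
    """Expected loss from an extinction event with the given probability."""
    return probability * VALUE_PER_LIFE * FUTURE_POPULATION

# Even a 1-in-10,000 risk prices out at $8 trillion:
print(f"${expected_extinction_cost(1e-4):,.0f}")
```

Weitzman’s point survives the sketch: as the population term (or Bostrom’s count of unborn generations) grows without bound, so does the product, and cost-benefit analysis stops giving usable answers.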

 

Tags: ,

From the January 11, 1885 Brooklyn Daily Eagle:

A Barton County man has a living chicken without a head. Attempting to cut off a chicken’s head, the axe passed through the head of the chicken immediately in front of the ears, thus leaving a small portion of the brain attached to the neck. The chicken did not take this as an execution of his death warrant and got up and stood on his feet, to the astonishment of this would-be executioner, who then contrived a plan to feed him by dropping food and drink into the thorax, which has so far proved a success. The chicken is now doing well.•

 

Even many economists who’ve made their bones doing macro often encourage the next generation to do micro (while continuing to do macro themselves), but something tells me the attraction of being a “big-picture thinker,” no matter how fraught that is, won’t simply dissolve. 

In his Foreign Policy piece, “Requiem for the Macrosaurus,” David Rothkopf argues the opposite, believing that macroeconomics is entering into obsolescence, that the sweep of history can’t be framed in blunt terms as it’s occurring. He feels now is the moment to move beyond this “medieval” state, with real-time Big Data leading the way. Rothkopf’s contention would certainly explain why traditional measurements seem inept at understanding the new normal of the Digital Age–why nobody knows what’s happening. An excerpt:

Being wrong has long been a special curse of economists. You might not think this would be the case in a so-called “science.” But, of course, all sciences struggle in those early years before scientists have enough data to support theories that can reflect and predict what actually happens in nature. Scientists from Galileo to Einstein have offered great discoveries but, due to the limits of their age, have labored under gross misconceptions. And in economics we are hardly in the era of Galileo quite yet. It is more like we are somewhere in the Middle Ages, where, based on some careful observation of the universe and a really inadequate view of the scope and nature of that universe, we have produced proto-science—also known today as crackpottery. (See long-standing views that the Earth was the center of the solar system or the belief that bleeding patients would cure them by ridding them of their “bad humors.”)

Modern economic approaches, theories, and techniques, the ones that policymakers fret over and to which newspapers devote barrels of ink, will someday be seen as similarly primitive. For example, economic policymakers regularly use gross estimates of national and international economic performances—largely aggregated measures based on data and models that are somewhere between profoundly flawed and crazy wrong—to assess society’s economic health, before determining whether to bleed the economic body politic by reducing the money supply or to warm it up by pumping new money into its system. Between these steps and regulating just how much the government spends and takes in taxes, we have just run through most of the commonly utilized and discussed economic policy tools—the big blunt instruments of macroeconomics.

I remember that when I was in government, those of us who dealt with trade policy or commercial issues were seen as pipsqueaks in the economic scheme of things by all the macrosauruses beneath whose feet the earth trembled, whose pronouncements echoed within the canyons of financial capitals, and who felt everything we and anyone else did was playing at the margins.

But think of the data on which those decisions were based. GDP, as it is calculated today, has roughly the same relationship to the size of the economy as estimates of the number of angels that can dance on the head of a pin do to the size of heaven. It misses vast amounts of economic activity and counts some things as value creation that aren’t at all.•

Tags:

The problem with anarchy is that you can’t control it.

That’s the lesson (hopefully) learned by Gawker and its techno-optimist founder, Nick Denton, after the site published an atrocious post outing a media executive for no reason other than sport, setting off a shitstorm fueled by what’s apparently a mass delusion inside the company.

In a 2014 Playboy Q&A conducted by Jeff Bercovici, Denton extolled the virtues of radical transparency and the “natural monopolies” of Google, Uber and Amazon, seemingly without realizing this new order could also cause injury. Two excerpts follow.

__________________________

Playboy:

What are the implications for the broader society? What does America look like from inside the Panopticon?

Nick Denton:

When people take a look at the change in attitudes toward gay rights or gay marriage, they talk about the example of people who came out, celebrities who came out. That has a pretty powerful effect. But even more powerful are all the friends and relatives, people you know. When it’s no longer some weird group of faggots on Christopher Street but actually people you know, that’s when attitudes change, and my presumption is the internet is going to be a big part of that. You’re going to be bombarded with news you wouldn’t necessarily have consumed—information, humanity, texture. I think Facebook, more than anything else, and the internet have been responsible for a large part of the liberalization of the past five or 10 years when it comes to sex, when it comes to drinking. Five years ago it was embarrassing when somebody had photographs of somebody drunk as a student. There was actually a discussion about whether a whole generation of kids had damaged their career prospects because they put up too much information about themselves in social media. What actually happened was that institutions and organizations changed, and frankly any organization that didn’t change was going to handicap itself because everyone, every normal person, gets drunk in college. There are stupid pictures or sex pictures of pretty much everybody. And if those things are leaked or deliberately shared, I think the effect is to change the institutions rather than to damage the individuals. The internet is a secret-spilling machine, and the spilling of secrets has been very healthy for a lot of people’s lives.

__________________________

Playboy:

You’re more willing than most people to organize your life according to principle and see how the experiment turns out.

Nick Denton:

You could argue that privacy has never really existed. Usually people’s friends or others in the village had a pretty good idea what was going on. You could look at this as the resurrection of or a return to the essential nature of human existence: We were surrounded by obvious scandal throughout most of human existence, when everybody knew everything. Then there was a brief period when people moved to the cities and social connections were frayed, and there was a brief period of sufficient anonymity to allow for transgressive behavior no one ever found out about. That brief era is now coming to an end.•

Driverless cars and trucks are the future, but when, exactly, is that? 

It would be really helpful to know, since millions of jobs would be lost or negatively impacted in the trucking sector in the U.S. alone. There are also, of course, taxi drivers, delivery workers, etc.

In the BBC piece “What’s Putting the Brakes on Driverless Cars?,” Matthew Wall examines the factors, legal and technical, delaying what’s increasingly assumed to be inevitable. An excerpt:

The technology isn’t good enough yet

Many semi-autonomous technologies are already available in today’s cars, from emergency braking to cruise control, self parking to lane keeping. This year, Ford is also planning to introduce automatic speed limit recognition tech and Daimler is hoping to test self-driving lorries on German motorways.

But this is a far cry from full autonomy.

Andy Whydell, director at TRW, one of the largest global engineering companies specialising in driver safety equipment, says radars have a range of about 200-300m (218-328 yards) but struggle with distances greater than this.

As a result, “sensors may not have sufficient range to react fast enough at high speed when something happens ahead,” he says. “Work is going on to develop sensors that can see ahead 400m.”

Lasers and cameras are also less effective in rainy, foggy or snowy conditions, he says, which potentially makes them unreliable in much of the northern hemisphere.

Even Google has admitted that its prototype driverless car struggles to spot potholes or has yet to be tested in snow.

And how would a driverless car cope trying to exit a T-junction at rush hour if human-driven cars don’t let it out?•
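Whydell’s range numbers can be sanity-checked with basic stopping-distance arithmetic. The deceleration and sensing-latency figures below are assumptions for illustration, not TRW’s:

```python
def stopping_distance(speed_kmh, decel=6.0, latency=0.5):
    """Metres needed to stop: distance covered during sensing/processing
    latency, plus braking distance v^2 / (2a).
    decel (m/s^2) and latency (s) are assumed values, not TRW figures."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    return v * latency + v ** 2 / (2 * decel)

for kmh in (100, 130, 200):
    print(kmh, "km/h ->", round(stopping_distance(kmh)), "m")
```

Under these assumptions, stopping from typical motorway-limit speeds takes roughly 125m, comfortably inside a 200-300m radar range, but from unrestricted-autobahn speeds the figure approaches 300m, which is roughly why the piece says work is under way on 400m sensors.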

 

Tags:

New York City had nearly a thousand millionaires in 1905, and seemingly everyone wanted to part them from their money. Cranks would frequently write a gigantic number on a piece of scrap paper and expectantly hand it to a bank teller, believing it was a sure thing. They were escorted from the building–and often sent to Bellevue. But in the waning days of the Gilded Age, some took things a step further, paying unannounced visits to the well-to-do in their mansions. Precautions were taken, which included howitzers. From an article in the November 12, 1905 New York Times:

…The Morosini mansion at Riverdale-on-the-Hudson is equipped with very extraordinary and picturesque apparatus as a proof against burglars and other unwelcome visitors. Several small-bore cannon and sundry howitzers are planted around the house, each piece of ordnance being connected with the house by an electric wire.

Whenever occasion demands, a button may be pressed inside the mansion, and any one or all of the cannon can be fired off. In addition to this novel safeguard the grounds surrounding the mansion can be illuminated by means of electric bulbs scattered thickly among the trees and shrubbery.

Recently there was occasion one night for the police to answer a call from the Morosini mansion, two servants having become obstreperous. As the vehicle containing two officers from the King’s Bridge Station passed through the gate, the lawn for a hundred feet about suddenly burst into light. Adjacent trees glowed with a hundred dazzling flashes. Surprised, the officers came to an abrupt halt. But presently continuing on toward the house, every foot of the way was similarly illuminated, lights budding everywhere, making the grounds almost as brilliant as day. During a subsequent survey of the premises the police learned that all the windows on the ground floor were connected with heavily charged electric wires. When the family retires a switch is turned on, and any one attempting to open a window from the outside is apt to be fatally shocked.•

Reality stars happened for numerous reasons, most notably because TV and print, destabilized by the Internet, needed insta-celebs to provide cheap content–actual stars were too expensive in the new economic reality–so dysfunction was commodified, and the modern version of the circus freak show was popularized. They’re pretty much walking products, all the Housewives and Bachelors, desperately trying to sell themselves in a market where the middle has disappeared and the bottom is the best most can hope for.

In 2007, Lynn Hirschberg of the New York Times Magazine penned “Being Rachel Zoe,” a profile of the image maker at an inflection point in the culture, when faux celebs were becoming the real thing, when the sideshow moved to the center ring. An excerpt:

As always, Zoe (pronounced ZOH) was dressed for the designer she was viewing. She was wearing a bright pink nubby wool Chanel jacket, black pants and her usual five-inch platform open-toed shoes. All the Zoe trademarks were in place: she was very tan; her long blond hair was carefully styled to look carefree; there were ropes of gold chains around her neck and stacks of diamond bangles on her wrists; and enormous (Chanel) sunglasses nearly obscured her face. Even wearing high heels, she is short and stick-thin, but Zoe, who is 36, does not seem fragile. The masses of jewelry, the outsize sunglasses, the whole noisy, ’70s-inspired look add up to a hectic, ostentatious, theatrical sort of glamour.

It’s the look she has duplicated on her clients, making the so-called Zoe-bots paparazzi favorites, as well as walking advertisements for a host of top designers. A cross-pollinator of the worlds of Hollywood celebrities, high fashion and tabloid magazines, Zoe has become a powerful image broker, a conduit to the ever-more lucrative intersection of commerce, style and fame. Early in her career, in 1996, she worked as a stylist at YM magazine, dressing such teenage pop stars as Britney Spears and Jessica Simpson, girls who were young enough to be molded and popular enough to be influential. Around the same time, magazines like Us Weekly began inventing their own cadre of celebrities, like Paris Hilton and Nicole Richie. They had no discernible accomplishments or talent, but they did seem to go out a lot, and they thrived under the flash of the paparazzi. Magazines like Us constructed provocative narratives around them — their romantic woes, their drug problems — and Zoe, who began working with Richie in 2003 when she was viewed only as Hilton’s plump sidekick, saw an opportunity. “Nicole is now what people refer to as the big thing that happened,” Zoe told me in Paris. “Everything went from nowhere to everywhere. Nicole was about creating a look. Because of her fashion sense, which was really my fashion sense, she became famous. It was a huge moment: Nicole became a style icon without being a star.”

And then Nicole became a star, too. Because of circumstances that remain murky, Nicole and Rachel no longer speak. But the relationship made their careers. Zoe began working with Lindsay Lohan, Kate Beckinsale and other tabloid-ready stars eager for a new fashion identity. Now she has 20 clients, each of whom reportedly pays her more than $6,000 a day to dress them for events, big and small. Some pay only for premieres and award shows; some also retain Zoe to provide clothes for their daily lives. The financial scope of her business also includes incentives in the form of money and/or clothes, accessories or jewels, offered by designers eager to dress a particular Zoe client for a particular event. “Around three years ago, everything began to change,” Zoe said as she ran through puddles toward the entrance of the Chanel show. “The nature of what, or who, is a celebrity has expanded. We aren’t saving lives here, but we are creating images, and images create opportunities in a lot of areas.”•

Tags: ,

Jeez, Jim Holt is a dream to read. If you’ve never picked up his 2012 philosophical and moving inquiry, Why Does the World Exist?: An Existential Detective Story, it’s well worth your time. For a while it was cheekily listed No. 1 at the Strand in the “Best Books to Read While Drunk” category, but I don’t drink, and I adored it.

In a 2003 Slate article, “My Son, the Robot,” Holt wrote of Bill McKibben’s Enough: Staying Human in an Engineered Age, a cautionary nonfiction tale warning that our siblings would soon be silicon sisters, thanks to the progress of genetic engineering, robotics, and nanotechnology. It was only a matter of time.

Holt was unmoved by the clarion call, believing human existence unlikely to be at an inflection point and thinking the author too dour about tomorrow. While Holt’s certainly right that we’re not going to defeat death anytime soon despite what excitable Transhumanists promise, both McKibben and the techno-optimists probably have time on their side.

An excerpt:

Take McKibben’s chief bogy, genetic engineering—specifically, germline engineering, in which an embryo’s DNA would be manipulated in the hopes of producing a “designer baby” with, say, a higher IQ, a knack for music, and no predisposition to obesity. The best reason to ban it (as the European Community has already done) is the physical risk it poses to individuals—namely, to the children whose genes are altered, with unforeseen and possibly horrendous consequences. The next best reason is the risk it poses to society—exacerbating inequality by creating a “GenRich” class of individuals who are smarter, healthier, and handsomer than the underclass of “Naturals.” McKibben cites these reasons, as did Fukuyama (and many others) before him. However, what really animates both authors is a more philosophical point: that genetic engineering would alter “human nature” (Fukuyama) or take away the “meaning” of life (McKibben). As far as I can tell, the argument from human nature and the argument from meaning are mere terminological variants of each other. And both are woolly, especially when contrasted with the libertarian argument that people should be free to do what they wish as long as other parties aren’t harmed.

Finally, McKibben’s reasoning fitfully betrays a vulgar variety of genetic determinism. He approvingly quotes phrases like “genetically engineered thoughts.” Altering an embryo’s DNA to make your child, say, less prone to violence would turn him into an “automaton.” Giving him “genes expressing proteins to boost his memory, to shape his stature” would leave him with “no more choice about how to live his life than a Hindu born untouchable.” Why isn’t the same true with the randomly assigned genes we now have?

Now to the deeper fallacy. McKibben takes it for granted that we are at an inflection point of history, suspended between the prehistoric and the Promethean. He writes, “we just happen to be alive at the brief and interesting moment when [technological] growth starts to really matter—when it spikes.” Everything is about to change.

The extropian visionaries arrayed against him—people like Ray Kurzweil, Hans Moravec, Marvin Minsky, and Lee Silver—agree. In fact, they think we are on the verge of conquering death (which McKibben thinks would be a terrible thing). And they mean RIGHT NOW. When you die, you should have your brain frozen; then, in a couple of decades, it will get thawed out and nanobots will repair the damage; then you can start augmenting it with silicon chips; finally, your entire mental software, and your consciousness along with it (you hope), will get uploaded into a computer; and—with multiple copies as insurance—you will live forever, or at least until the universe falls apart.•

 

Tags: ,

Uber’s claims that it’s good for Labor are nonsense, and the press conference in Harlem even used Eric Garner’s name to sell that hokum. The rideshare company is potentially good in some ways–consumer experience, challenging the taxi business’ racial profiling, being friendlier to the environment–but it beat Mayor de Blasio not because of those potentially good things but because it outmaneuvered him on a big lie. That’s worrisome.

Chris Smith of New York smartly sums up the gamesmanship:

De Blasio hadn’t been prepared for the onslaught. What was truly disorienting for him — and politically ominous — was that the roles had been scrambled. The mayor assumed he was the progressive defender of moral fairness and the little guy: Of course city government should regulate anyone trying to add 10,000 commercial vehicles to New York’s streets. Of course he needed to protect the rights of Uber’s “driver-partners.”

Yet Uber was able to deftly outflank de Blasio on his home turf, co-opting pieces of his message, splitting him from his normal Democratic allies, and drawing together an opposition constituency that could haunt de Blasio in 2017. To do so, Uber deployed a sophisticated, expensive political campaign waged by lobbyists and strategists trained in the regimes of Obama, Cuomo, and Bloomberg. That campaign worked in part because even though Uber the company is motivated by the pursuit of profit, not social justice, Uber the product has some genuinely progressive effects. So Uber went straight at the mayor’s minority base, drawing it into its vision of the modern New York. The company’s ad blitz highlighted how Uber’s drivers are mostly black and brown. It held a press conference at Sylvia’s, in Harlem, where the company basically accused the mayor of discriminating against minorities by daring to try to rein in its growth. It pushed data to reporters showing that Uber serves outer-borough neighborhoods that for years were shunned by yellow cabs.

In doing so, the company was able to dispel its aura of Bloomberg-era elitism. Newer services like UberPool, which allows drivers to pick up multiple passengers who split the fare, would ease congestion and make the city greener. Uber exploited its appeal to a youthful, techie, multiracial liberalism, selling itself as about openness and choice — a choice that was being stymied by old bureaucratic ways that have no business in the new city. This was a direct hit on de Blasio’s greatest vulnerability: the mayor’s seeming defense of an entrenched and hated yellow-taxi monopoly that’s been one of his most prolific campaign contributors.•

 

Tags:

The news business isn’t dead, just a tale of haves and have-nots like most of contemporary culture, and we may have reached an inflection point in that arrangement over the past year or so. From Jeff Bezos’ purchase of the Washington Post to Nikkei snapping up the Financial Times, those publications with a global name are being acquired and reformatted for a mobile world where smartphones are the main medium. 

The question for me remains whether the New York Times, the most valuable erstwhile newspaper in the U.S., will be able to go it alone as a “mom-and-pop” shop, or if they must also become part of a gigantic, diversified company. I’d hope for the former and bet the latter.

From Matthew Garrahan at the Financial Times:

What has convinced investors to take another look at news? “There has been a lot of disruption but there has never been a lack of consumer demand for quality news content,” says Jim Bankoff, chief executive of Vox Media.

Another clue can be found in the actions of Apple and Facebook, which have built news offerings. The technology companies have realised that, like photo sharing and music apps, news can attract and retain users of their mobile services. This year will be the first when smartphones are responsible for 50 per cent of news consumption, up from 25 per cent in 2012, according to Ken Doctor, an analyst with Newsonomics. The smartphone has become “the primary access point for many readers,” he says.

The news brands that have attracted the most interest are digital, mobile and global. For Nikkei, buying the FT gives it an opportunity to expand into new markets — particularly in Asia, says Mr Doctor, where markets such as South Korea, Indonesia and India are growing rapidly. The FT “gives Nikkei more weight and more smarts in how to compete,” he says.•

Tags: , ,

Well, of course we shouldn’t engage in autonomous warfare, but what’s obvious now might not always seem so clear. What’s perfectly sensible today might seem painfully naive tomorrow.

I think humans create tools to use them, eventually. When electricity (or some other power source) is coursing through those objects, the tools almost become demanding of our attention. If you had asked the typical person 50 years ago–20 years ago?–whether they would be accepting of a surveillance state, the answer would have been a resounding “no.” But here we are. It just crept up on us. How creepy.

I still, however, am glad that Stephen Hawking, Steve Wozniak, Elon Musk and a thousand others engaged in science and technology have petitioned for a ban on AI warfare. It can’t hurt.

From Samuel Gibbs at the Guardian:

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.•

Tags: , , ,

A NASA-commissioned study suggests the costs of establishing and maintaining a moon colony could be reduced by 90% if it’s a multinational, public-private enterprise that utilizes robotics and reusable rockets, and if the moon’s resources are mined for fuel and other necessities. As Popular Science points out, the whole project pretty much rests on there being abundant hydrogen in the moon’s crust, or it’s a no-go.

Below is the study’s passage on robotics, how autonomous machines would aid in the mission and what residual benefits their development may deliver to Earth:

Establish Crew Outpost

Following completion of the ISRU production facility, and the arrival of the large reusable lunar lander, the site is ready for the delivery of habitats, and other infrastructure needed for the permanent crewed lunar base. The ELA is designed to launch a Bigelow BA-330 expandable habitat sized system via either a Falcon Heavy or Vulcan LV to LEO, which is then transferred from LEO to low-lunar orbit (LLO) by leveraging in-space propellant transfer in LEO. The large reusable lunar lander will then rendezvous with the habitat, and other large modules, in LLO and transport them to the surface of the Moon. These modules would be moved by robotic systems from the designated landing areas to the crew habitation area selected during the scouting/prospecting operation. The modules could be positioned into lava tubes, which provide ready-made, natural protection against radiation and thermal extremes, if discovered at lunar production site. Otherwise, the robotic systems will move regolith over the modules for protection. Additionally, the robotic systems will connect the modules to the communications and power plant at the site.

Human & Robot Interaction as a System:

Why are robotics critical?

The reasons that the process begins with robotics instead of beginning with ‘human-based’ operations like Apollo include:

1. Robotics offer much lower costs and risk than human operations, where they are effective, which is amplified in remote and hostile environments.

2. Robotic capabilities are rapidly advancing to a point where robotic assets can satisfactorily prospect for resources and also for set up and prepare initial infrastructure prior to human-arrival.

3. Robotics can be operated over a long period of time in performing the prospecting and buildup phases without being constrained by human consumables on the surface (food, water, air, CO2 scrubbing, etc.).

4. Robotics can not only be used to establish initial infrastructure prior to crew arrival, preparing the way for subsequent human operations, but to also repair and maintain infrastructure, and operate equipment after humans arrive.

Why do robots need humans to effectively operate a lunar base? Why can’t robotics “do it all”? Why do we even need to involve humans in this effort?

1. Some more complex tasks are better performed jointly by humans and robotics….or by humans themselves. This is an important area of research and testing.

2. Humans operate more effectively and quickly than robotic systems, and are much more flexible. Humans are able to make better informed and timely judgments and decisions than robotic operations, and can flexibly adapt to uncertainty and new situations.

3. Robotic technology has not reached a point where robots can repair and maintain themselves. The robotic systems will need periodic as well as unscheduled maintenance and repair….provided by humans.

Public Benefits of Investments in Advanced Robotics

U.S. government investments in advanced technologies such as robotics will have tremendous impacts on American economic growth and innovation here on Earth. The investments just by DARPA in robotic technologies are having significant spill-over effects into many terrestrial applications and dual-use technologies. Examples of dual use technologies include:

a. Robotic systems performing connect/disconnect operations of umbilicals for fluid/propellant loading … could lead to automated refueling of aircraft, cars, launch vehicles, etc.

b. Robotic civil engineering: 3D printing of structures on the Moon with plumbing through industrial 3D printer robotics, could lead to similar automated construction methods here on Earth.

c. Tunnel inspections: Robotic operations for inspecting lava tubes on the Moon could lead to advanced automation in mine shafts on Earth. Advances in autonomous navigation, imagery, and operations for dangerous locations and places could save many lives here on Earth.

d. Remote and intelligent inspection of unsafe structures from natural disasters (tsunamis, radiation leakage, floods, hurricanes) could enable many more operations by autonomous robotics where it is unsafe to send humans.•
