Science/Tech



In America, ridiculously rich people are considered oracles, whether they deserve to be or not.

Bill Gates probably earns that status more than most. He was a raging a-hole when engaged full time as a businessperson at Microsoft, but he’s done as much good for humanity as anyone likely can in his avuncular philanthropist 2.0 reboot. Gates just did one of his wide-ranging Reddit AMAs. A few exchanges follow.

_____________________________

Question:

What do you see human society accomplishing in the next 20 years? What are you most excited for?

Bill Gates:

I will mention three things.

First is an energy innovation to lower the cost and get rid of greenhouse gases. This isn’t guaranteed, so we need a lot of public and private risk taking.

EDIT: I talked about this recently in my annual letter.

Second is progress on disease, particularly infectious disease. Polio, Malaria, HIV, TB, etc. are all diseases we should be able to either eliminate or bring down close to zero. There is amazing science that makes us optimistic this will happen.

Third are tools to help make education better – to help teachers learn how to teach better and to help students learn and understand why they should learn and reinforce their confidence.

_____________________________

Question:

Hey Bill! Has there been a problem or challenge that’s made you, as a billionaire, feel completely powerless? Did you manage to overcome it, and if so, how?

 

Bill Gates:

The problem of how we prevent a small group of terrorists using nuclear or biological means to kill millions is something I worry about. If governments do their best work, they have a good chance of detecting it and stopping it, but I don’t think it is getting enough attention, and I know I can’t solve it.

_____________________________

Question:

What’s your take on the recent FBI/Apple situation?

Bill Gates:

I think there needs to be a discussion about when the government should be able to gather information. What if we had never had wiretapping? Also the government needs to talk openly about safeguards. Right now a lot of people don’t think the government has the right checks to make sure information is only used in criminal situations. So this case will be viewed as the start of a discussion. I think very few people take the extreme view that the government should be blind to financial and communication data but very few people think giving the government carte blanche without safeguards makes sense. A lot of countries like the UK and France are also going through this debate. For tech companies there needs to be some consistency including how governments work with each other. The sooner we modernize the laws the better.

_____________________________

Question:

Some people (Elon Musk, Stephen Hawking, etc) have come out in favor of regulating Artificial Intelligence before it is too late. What is your stance on the issue, and do you think humanity will ever reach a point where we won’t be able to control our own artificially intelligent designs?

Bill Gates:

I haven’t seen any concrete proposal on how you would do the regulation. I think it is worth discussing because I share the view of Musk and Hawking that when a few people control a platform with extreme intelligence it creates dangers in terms of power and eventually control.

_____________________________

Question:

How soon do you think quantum computing will catch on, and what do you think about the future of cryptography if it does? Thanks!

Bill Gates:

Microsoft and others are working on quantum computing. It isn’t clear when it will work or become mainstream. There is a chance that within 6-10 years cloud computing will offer super-computation by using quantum. It could help us solve some very important science problems, including materials and catalyst design.

_____________________________

Question:

You have previously said that, through organizations like Khan Academy and Wikipedia and the Internet in general, getting access to knowledge is now easier than ever. While that is certainly true, K-12 education seems to have stayed frozen in time. How do you think the school system will or should change in the decades to come?

Bill Gates:

I agree that our schools have not improved as much as we want them to. There are a lot of great teachers but we don’t do enough to figure out what they do so well and make sure others benefit from that. Most teachers get very little feedback about what they do well and what they need to improve including tools that let them see what the exemplars are doing.

Technology is starting to improve education. Unfortunately so far it is mostly the motivated students who have benefited from it. I think we will get tools like personalized learning to all students in the next decade.

A lot of the issue is helping kids stay engaged. If they don’t feel the material is relevant or they don’t have a sense of their own ability they can check out too easily. The technology has not done enough to help with this yet.

_____________________________

Question:

What’s a fantasy technological advancement you wish existed? 

Bill Gates:

I recently saw a company working on “robotic” surgery where the ability to work at small scales was stunning. The idea that this will make surgeries higher quality, faster and less expensive is pretty exciting. It will probably take a decade before this gets mainstream – to date it has mostly been used for prostate surgery.

In the Foundation work there are a lot of tools we are working on we don’t have yet. For example an implant to protect a woman from getting HIV because it releases a protective drug.

_____________________________

Question:

What’s a technological advancement that’s come about in the past few years that you think we were actually better off without?

Bill Gates:

I am concerned about biological tools that could be used by a bioterrorist. However the same tools can be used for good things as well.

Some people think Hoverboards were bad because they caught on fire. I never got to try one.•


Thomas Jefferson was never a soldier, but he fought for Americans in numerous ways. After the new nation won its independence, the Founding Father squared off in France against those who believed the United States’ plants and animals inferior to Europe’s–a notion that was wholly ignorant, of course, but unenlightenment shapes the world if it has enough believers.

Jefferson’s efforts involved, among other things, a giant moose skeleton. From Andrea Wulf in the Atlantic:

In Paris, in between negotiations of commercial treaties, arranging loans and composing diplomatic dispatches, Jefferson purchased the latest scientific books, visited famous gardens and met the greatest thinkers and scientists of the age. He also quickly found himself in the midst of a scientific battle that to his mind was of the greatest political and national interest. His weapons were native North American trees, weights of mammals, a panther pelt, and the bones and skin of a moose.

For years, Jefferson had been furious about a theory that the French called the “degeneracy of America.” Since the mid-eighteenth century several French thinkers had insisted that flora and fauna degenerated when “transplanted” from the Old to the New World. They noted how European fruits, vegetables and grains often failed to mature in America and how imported animals refused to thrive. They also insisted that American native species were inferior to European plants and animals. One of the offending scientists was Georges-Louis Leclerc, Comte de Buffon, the most famous naturalist in the world and the author of the 36-volume magisterial Histoire Naturelle. In the 1760s and 1770s Buffon had written that in America all things “shrink and diminish under a niggardly sky and unprolific land.”

As Buffon’s theories spread, the natural world of America became a symbol for its political and cultural significance—or insignificance, depending on the point of view. Hoping to restore America’s honor, and elevate his country above those in Europe, Jefferson set out to prove that everything was in fact larger and superior in the New World.•


Will machines eventually grow intelligent enough to eliminate humans? One can only hope so. I mean, have you watched the orange-headed man discuss the length of his ding-dong at the GOP debates?

In all seriousness, I think the discussion about humans vs. machines is inherently flawed. It supposes that Homo sapiens as we know them will endlessly be the standard. Really unlikely. If our Anthropocene sins don’t doom us, we’ll likely have the opportunity to engineer a good part of our evolution, whether it’s here on Earth or in space. (Alien environments we try to inhabit will also change the nature of what we are.) Ultimately, it will be a contest between Humans 2.0 and Strong AI, though the two factions may reach detente and merge.

For the time being, really smart researchers teach computers to teach themselves, having them use Deep Learning to master Pac-Man and such, speeding the future here a “quarter” at a time. From an article about the contemporary London AI scene by Rob Davies in the Guardian:

Murray Shanahan, professor of cognitive robotics at Imperial, believes that while we should be thinking hard about the moral and ethical ramifications of AI, computers are still decades away from developing the sort of abilities they’d need to enslave or eliminate humankind and bring Hawking’s worst fears to reality. One reason for this is that while early artificial intelligence systems can learn, they do so only falteringly.

For instance, a human who picks up one bottle of water will have a good idea of how to pick up others of different shapes and sizes. But a humanoid robot using an AI system would need a huge amount of data about every bottle on the market. Without that, it would achieve little more than getting the floor wet.

Using video games as their testing ground, Shanahan and his students want to develop systems that don’t rely on the exhaustive and time-consuming process of elimination – for instance, going through every iteration of lifting a water bottle in order to perfect the action – to improve their understanding.

They are building on techniques used in the development of DeepMind, the British AI startup sold to Google in 2014 for a reported £400m. DeepMind was also developed using computer games, which it eventually learned to play to a “superhuman” level, and DeepMind programs are now able to play – and defeat – professional players of the Chinese board game Go.

Shanahan believes the research of his students will help create systems that are even smarter than DeepMind.•
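For the curious, below is a minimal sketch of what that trial-and-error learning looks like in code–a toy, assumption-laden illustration of tabular Q-learning (the precursor technique DeepMind married to deep neural networks), not anything from Shanahan’s lab or DeepMind’s codebase. An agent stumbles around a five-cell corridor until the reward teaches it to walk right:

```python
import random

# Tabular Q-learning on a tiny corridor world: cells 0..4, reward at cell 4.
# The agent learns action values purely by trial and error -- the
# "exhaustive process of elimination" style of learning the excerpt describes.

N_STATES = 5
ACTIONS = (-1, +1)                     # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(s):
    # Best-known action in state s, breaking ties at random.
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward + discounted best future value.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

print([greedy(s) for s in range(N_STATES - 1)])  # -> [1, 1, 1, 1]: walk right
```

Five states take a few hundred episodes; Pac-Man’s pixel screen has astronomically more, which is exactly why generalization, rather than exhaustive elimination, is the open problem the excerpt describes.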


It’s no secret that regulation, not traditionally the nimblest of things, has trouble keeping pace with technology, but Vivek Wadhwa states the case well in a new Washington Post column. He points out that decisions on these thorny questions are often made emotionally–would Tim Cook be willingly working with the government if there had been a terrorist attack on Apple headquarters?–but the bigger issue is the briskness with which our tools are progressing. Think about how quickly drones and driverless cars have morphed in just the past few years. Wadhwa uses another example: the iPhone. An excerpt:

It takes decades, sometimes centuries, to reach the type of consensus that is needed to enact the far-reaching legislation that Congress will have to consider. Laws are essentially codified ethics, a consensus that is reached by society on what is right and wrong. This happens only after people understand the issues and have seen the pros and cons.

Consider our laws on privacy. These date back to the late 1800s, when newspapers first started publishing gossip. They wrote a series of intrusive stories about Boston lawyer Samuel Warren and his family. This led to his law partner, future U.S. Supreme Court Justice Louis Brandeis, writing a Harvard Law Review article, “The Right to Privacy,” which argued for the right to be left alone. This essay laid the foundation of American privacy law, which evolved over 200 years. It also took centuries to create today’s copyright laws, intangible property rights, and contract law. All of these followed the development of technologies such as the printing press and steam engine.

Today, technology is progressing on an exponential curve; advances that would take decades now happen in years, sometimes months. Consider that the first iPhone was released in June 2007. It was little more than an iPod with an embedded cell phone. This has evolved into a device which captures our deepest personal secrets, keeps track of our lifestyles and habits, and is becoming our health coach and mentor. It was inconceivable just five years ago that there could be such debates about unlocking this device.•


I don’t have to tell you that we’re living in a new and strange economy. The star of a film franchise that makes more than a billion dollars globally is paid six figures and has no real leverage to demand more, whereas Kendall Jenner or Gigi Hadid reportedly earn in that ballpark just for putting a single post on Instagram. Of course, all of the above are lottery winners in this post-collapse world of flat wages and vulnerable workers.

In his recent Reddit AMA, Douglas Rushkoff, author of Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity, engaged in an esoteric give-and-take about where the economy may be heading. He believes restoring the middle class through micropayments is unlikely (and it is!) and thinks tomorrow will need a better system. Despite the noble thought projects Rushkoff mentions, I can guarantee you the future isn’t TV-less Tandy computers. An excerpt:

Question:

Within a decade we could see mainstream VR/AR with eye-tracking that would lead to complete compartmentalization, observation, and memorization of pretty much all Hierarchical interactions between individuals in all levels of a growing society.

With innovative social networking tools like Synereo, which is pretty much a decentralized Facebook that turns ‘Likes’ into attention-derived cryptocurrency, do you think we’re headed into a digital economy that’s vastly different than today, or are things going to be relatively the same?

Douglas Rushkoff:

We could go in some bizarre new direction like you’re describing. And it would be interesting. It’s a bit like Lanier envisions, where we start getting all sorts of data-mining activities back on the books, and pay people in micro currencies. But I’m thinking it’s likely easier to go in the other direction. I’m interested in getting things off the books. Building connections between people. I don’t like building a society based on the premise that everyone is trying to game the system.

True – right now, almost everyone is trying to game the system. Finance itself is gamified commerce. Derivatives and algorithms gamify that, and so on and so on. Startups are gamified Wall St.

So these new micro-transactional social networks mean to reprogram the value extraction to our own benefit. I just don’t know where the marketers are who are going to support all this in the end. Marketing has never made up more than 5% of GDP. And that’s being generous. I don’t think it can support the entire economy.

I’d rather see communities develop currencies for people to take care of one another, and for us to use those locally, and then use long distance money to buy our iPhones or whatever.

Question:

“I don’t like building a society based on the premise that everyone is trying to game the system.”

This exactly. I don’t mean to sound anti-capitalist, but that’s one of its major flaws – that exploiting people’s weaknesses for financial gain is a good thing.

Douglas Rushkoff:

One horrific factoid I’ve been working on is what would be the cost of an iPhone if it didn’t use the equivalent of slave labor and blood rare-earth minerals. We get these things so relatively cheaply, and it feels as if making these technologies cheaper somehow breaks down the digital divide. But it really just externalizes it to somewhere we don’t see.

It’s a strange project – but I’m wondering if it’s really appropriate to make our tech cheaper at their expense. And I wonder: if the older stuff we used to use – like my Tandy computer – can do things like this AMA, how is getting TV over my computer really important?•


Jazz Age entrepreneurs promised automated department stores, even if they didn’t yet possess the technology to deliver them. By the late 1930s, automated Keedoozle grocery stores became a reality, even if they didn’t exactly thrive over the next couple of decades. Robots have recently begun to infiltrate certain chains, though their role for the time being is complementary. 

In Sweden, one early adopter has designed a concept store that goes all in on automation, disappearing human workers and leaving the responsibility to the machines. Robert Ilijason came up with the idea to serve people living outside of cities who don’t have easy access to basic goods. You scan and pay for the items with your smartphone. “My ambition is to spread this to other small towns,” he says. “This can be a general store 2.0.” It may initially take root in more rural areas where it’s a job creator, but we’re only in the prelude stage, and the idea will be applied, to some extent, everywhere.

From Oresund Startups:

The person behind the idea is Robert Ilijason, who specializes in tech. But the idea is inspired not so much by the professional field as by personal experience.

“I dropped the last jar of baby food, and there was panic. I live in Viken myself and had to go to Helsingborg to buy new food. Then I got the idea that there should be a store here and started thinking about how to solve this purely technically,” Robert shared in an interview with HD.

Those who are interested in using the services of a new store will have to become its members. The only requirement for joining is to provide a credit report, which takes a few minutes to complete. Initially the range of products will not be wide, but the customers will have the power to influence product variety. The app will provide a support service where users will write what they would like to see in the store.

If everything goes according to plan, the concept will be spread to more places.•


Can you imagine if, more than 50 years after the global sensation of Charles Lindbergh’s transatlantic flight, there were still no commercial airlines? Seems unthinkable, right?

It’s beyond perplexing that we haven’t established permanent colonies on the moon, that this “trade route” wasn’t opened in the wake of the successful Apollo missions of 1969-72. It must have seemed a fait accompli during those bold days. On the night of the giant leap for mankind, Robert Heinlein and Arthur C. Clarke could barely contain their wildest visions when interviewed live by Walter Cronkite and his cohorts. Sure, the cost involved in space flight dwarfs that of Earth-bound airport hopping, but it seemed to make too much sense not to happen, didn’t it?

Alas, it did not come to fruition. Sharing my disappointment is the excellent writer Brian Clegg, who’s penned an Aeon essay which explores why science fiction became so uncoupled from reality when it comes to “setting up house” on the moon. The opening:

One of the biggest thrills of my teenage life was being allowed to stay up all night to watch the Apollo 11 moon landing (the first time I’d ever spent a whole night without going to bed). And something was very clear to me, back then on that long 1969 night: I would be going to the moon too. Not soon, but before I died. I was realistic. I didn’t expect it to be soon, because I never saw myself as an astronaut. But I firmly expected that by the time this book was written and I was a very elderly person in my fifties, trips to the moon would be pretty much like flights across the Atlantic were in the 1960s. Still a very special experience, not for everyone by any means, but something that would be available to the general public as a safe, scheduled pleasure trip.

This seems very naïve now, but it really didn’t back in the heady days of 1969. I had read the science fiction. I knew that moon bases and lunar cities would inevitably follow that first, groundbreaking step of making a manned landing. Why not? It seemed an entirely logical progress. Think how much had been achieved in just the previous eight years. Imagine what would be possible in another 40 or 50 years. And yet the reality was so different. There were just six brief manned moon landings in the Apollo series, and then nothing. Not a single person has reached the moon for decades. There have been plenty of unmanned probes, but nothing has been done toward laying the ground for those lunar cities and for the regular, commercial moon flights I so eagerly anticipated. That glorious future has evaporated.

There’s something very strange and fascinating in the way that reality has deviated so far from science fiction – especially considering how deeply rooted the moon is in the human imagination.•


Jaron Lanier has written wisely about how religious fervor can be repurposed in a more algorithmic age, with AI becoming a new faith, the Singularity being anticipated as a Second Coming of sorts. This subtext has come to the surface in certain corners, one being the Church of Perpetual Life in Florida. Bankrolled by wealthy businessperson Bill Faloon, the institution is a sanctuary for belief in radical life extension and a-mortality, things theoretically possible in the very long run but incredibly unlikely to be available to the parishioners hoping to dodge death.

Transhumanist Presidential candidate Zoltan Istvan, who’s made mortality’s endgame a central tenet of his campaign, visited the Hollywood house of worship and filed a report for Business Insider. The opening:

Many people think of transhumanism — the belief that humans can evolve through science and tech — as a secular movement. For the most part it is, but there are a number of organizations that aim to combine science and spirituality together.

One of the largest is the Church of Perpetual Life, a brick-and-mortar worship center near Miami, Florida, that looks like any other church. It has a minister, a congregation, and church activities. The only difference is this church wants to use science to conquer death.

I was asked to speak at a Church of Perpetual Life service while traveling across America on my Immortality Bus — a coffin-like campaign bus I’m using during my run for president of the US (under the guise of the Transhumanist Party). Services at the Church of Perpetual Life don’t revolve around worshiping a deity. They’re passionate explorations of life extension research. It’s a group of people that want to live forever, but also want to belong to a spiritual community.

Conversations are centered around how humanity can improve itself through science, how we can overcome death with technology, and how suffering can be broadly eliminated.

The church itself welcomes people of all religions, and sometimes explores concepts of a benign creator in very nonspecific terms.•


Singles got a bad name in the 1970s and have been blamed for many of society’s ills ever since, but they were revolutionaries in their own way. This century, we’ve witnessed significant growth in the unmarried population in America (and Europe and Asia), a shift that’s impacting the world. It’s taken hold in part because phones and apps and other tools of liberation have uncoupled living alone and loneliness.

Many social scientists believe this new normal is a poison pill for us culturally and economically, but Bella DePaulo, author of How We Live Now, argues the contrary in a Nautilus essay. She believes the modern living arrangements have made for stronger and better communities, with an untethered class of people who’ve improvised families and have the time and freedom to contribute to the world outside their homes. I tend to agree with her, but it would be great to know for sure since the answer could help us in creating smart policy. The opening:

When Dan Scheffey turned 50, he threw himself a party. About 100 people packed into his Manhattan apartment, which occupies the third floor of a brick townhouse in the island’s vibrant East Village. His parents, siblings, and an in-law were there, and friends from all times and walks of his life. He told them how much they meant to him and how happy he was to see them all in one place. “My most important family,” says Dan, who has been single his entire life, “is the family that I’ve selected and brought together.”

Dan has never been married. He doesn’t have kids. Not long ago, his choice of lifestyle would have been highly unusual, even pitied. In 1950, 78 percent of households in the United States had a married couple at its helm; more than half of those included children. “The accepted wisdom was that the post-World War II nuclear family style was the culmination of a long journey—the end point of changes in families that had been occurring for several hundred years,” sociologist John Scanzoni wrote in 2001.

But that wisdom was wrong: The meaning of family is morphing once again. Fueled by a convergence of historical currents—including birth control and the rising status of women, increased wealth and social security, LGBTQ activism, and the spread of personal communication technologies and social media—more people are choosing to live alone than ever before.

Pick a random American household today, and it’s more likely to look like Dan’s than like Ozzie and Harriet’s. Nearly half of adults ages 18 and older are single. About 1 in 7 live alone. Americans are marrying later, divorcing in larger numbers, and becoming less interested in remarrying. According to the Pew Research Center, by the time today’s young adults reach age 50, a quarter of them will have never married at all.

The surge of singlehood is not just an American phenomenon. Between 1980 and 2011, the number of one-person households worldwide more than doubled, from about 118 million to 277 million, and will rise to 334 million by 2020, according to Euromonitor International. More than a dozen countries, including Japan and several European nations, now have even larger proportions of solo-dwellers than the U.S. (Sweden ranks highest at almost 50 percent.) Individuals, not couples or clans or other social groups, are fast becoming the fundamental units of society.•


In “Global Cities, Global Talent,” a new Deloitte report that’s bullish on London and bearish on NYC because of the greater number of high-skilled jobs the former has recently added, perhaps the most worrying conclusion is that the hollowing out of low-skilled positions via automation may further exacerbate our increasingly middle-less economy. According to Deloitte, women may be particularly vulnerable in this new normal.

The paper does note that “the difficulty of implementing the technology, social or political resistance or the relative human cost of labor versus investment in technology” may be the “real brakes on the pace of job automation.” It seems doubtful those things will be any type of long-term obstacle to automation, and it really shouldn’t be artificially restrained. But policy is going to have to answer many difficult questions in the next few decades to keep societies from irreparably fraying.

From Matthew Nitch Smith at Forbes:

One of the biggest accountancy firms in the world, Deloitte, released a report today entitled “Global cities, global talent,” and it warned that “automation risks ‘hollowing out’ London’s lower paid jobs.”

However, at the same time it said 235,000 high-skill jobs have been created in London since 2013.

Basically, those working in lower paid jobs, mainly service and manufacturing sector jobs like cleaning, waitressing, and some factory work, are at the greatest risk of losing their jobs because robots are able to do it instead of them. 

The warning comes shortly after the World Economic Forum (WEF) warned that as many as five million jobs could be lost across 15 major and emerging economies by 2020 due to robots, automation, and artificial intelligence.

The British Retail Consortium also said that 900,000 jobs would be lost in retail across the country thanks, in part, to “robots.” It added that almost a third of stores would close by 2025. 

Automation on a mass scale has always been a concern to economists and employees alike, but we’re now starting to get the sense that what was once in the realm of sci-fi is going to have a real, imminent impact on global cities like London.

So who should be worried?•


For driverless cars, it’s really more a matter of when than if. They may not arrive en masse in the next ten minutes the way Elon Musk believes they will, but we’re at the beginning of what may be a relatively quick transition into a world of hands-free vehicles, which, if we’re smart and fortunate, will be EVs powered by electricity from solar sources. This new reality will be full of ethical, legal and philosophical questions, some of them extremely thorny. But that’s the future, and it isn’t far from now. In our age, we’ll get to experience for years–decades, probably–a variation of what it was like when horses and cars (uneasily) shared the roads and streets. In the new equation, we’re the horses.

From Martin Belam in the Guardian:

Our cities must have been dreadfully foul and smelly before the motor car. At the London Transport Museum they have a display of two horse-drawn vehicles. Pre-recorded voices make it sound like the model horses are chatting to each other, and there’s fake horse dung on the floor for extra giggles. Whole sub-industries flourished in clearing up the straw and excrement clogging up our 19th-century streets. It must have been particularly grim when it rained or snowed.

I thought about this exhibit while trying to cross the road the other day, waiting for a break in the relentless London traffic. I watched cars whizz by, spewing out fumes that we know are toxic, and burning fossil fuels that cost us millions to extract from the ground.

It struck me how awful and primitive that is going to look in a museum display in a hundred years’ time. People stuck in movable boxes polluting the air, taking up all the space in our cities. The display will calmly inform people that by the early 21st century, thanks to huge efforts expended on safety measures, only around four people every day died on the UK’s roads due to cars.

That is the way things are.

But technology is going to transform it over the next couple of decades, and we can see the endgame. We know we are going to get to a point where nearly every car is driverless, and uses some kind of rechargeable electric power rather than petrol engines.

There will be awkward decades where the modes of transport co-exist, as evidenced by the fact that one of Google’s self-driving cars just pranged a bus in the US. But what is the exception now will become the norm.•


The names Benjamin Franklin and Jenny McCarthy don’t usually squeeze into the same sentence, but they did both make a similar stand 290 years apart: They were anti-Vaxxers.

We know well of the blonde celebrity’s inane crusade linking vaccinations and autism, but America’s key-and-kite man similarly stood strong against smallpox inoculations in the early 1700s. Just as confounding was that the witch-burning enabler Cotton Mather was on the right side, spearheading the successful experiment which provoked violent dissent. The caveat is that Franklin was a mere 16 at the time, though it does remind us that we all need to constantly question our beliefs despite our intellects or qualifications or allegiances.

Mike Jay, a wonderful thinker (see here and here and here), has written about this strange moment in history in a WSJ book review of Stephen Coss’ The Fever of 1721, which looks at how this roiling controversy anticipated aspects of the American Revolution. An excerpt:

Inoculation was commonplace across swaths of Africa, the Middle East and Asia, Mr. Coss explains, but this inclined the doctors of Enlightenment-era Europe to regard it as a primitive superstition. Such was the view of William Douglass, the only man in Boston with the letters “M.D.” after his name, who was convinced that “infusing such malignant filth” in a healthy subject was lethal folly. The only person Mather could persuade to perform the operation was a surgeon, Zabdiel Boylston, whose frontier upbringing made him sympathetic to native medicine and who was already pockmarked from a near-fatal case of the disease.

“Given that attempting inoculation constituted an almost complete leap of faith for Boylston,” Mr. Coss writes, “he spent surprisingly little time agonizing over it.” He knew personally just how savage the toll could be. On June 26, 1721, just as the epidemic began to rage in earnest, Boylston filled a quill with the fluid from an infected blister and scratched it into the skin of two family slaves and his own young son.

News of the experiment was greeted with public fury and terror that it would spread the contagion. A town-hall meeting was convened, at Dr. Douglass’s instigation, at which inoculation was condemned and banned. Mather’s house was firebombed with an incendiary device to which a note was attached: “I will inoculate you with this.”

The crisis was the making of James and Benjamin Franklin’s New-England Courant, which stoked the controversy with denunciations of Mather that drew parallels between his “infatuation” with inoculation and his onetime obsession with witchcraft. But as the death toll mounted, the ban on inoculation collapsed under the weight of public demand.•


Like most who entertain top-heavy fantasies for reimagining the world, the sculptor and urban planner Hendrik Christian Andersen was a bit of a buffoon.

The Norwegian-American artist truly believed that if he could build a flawless city of beauty and learning that knew no nationalist bounds, the entire world would be inspired to perfection. Not only was it an asinine political fantasy, but it somehow led Andersen into the arms of the vulgar, murderous clown Benito Mussolini, a former drifter and agitator who had horrified the world in the 1920s by coming to absolute power in Italy. Il Duce, no doubt enamored with the pomposity of the project, promised the visionary the land and resources to realize his dream. The Shangri-La was ultimately never built, but the “soft-voiced idealist,” as the artist was described, was still speaking fondly of Mussolini into the middle of the 1930s. Andersen died in Rome in 1940, not living long enough to see his patron deservedly face the business end of a meat hook. An article in the June 19, 1927 Brooklyn Daily Eagle recalls the proposed series of stately pleasure-domes.



When Bill Maher belatedly learned 3D printers would be able to produce plastic guns that were fully operational but untraceable, he impetuously suggested we ban all plastic, which showed an ignorance both of the printers and of society in general, which would grind to a halt if the material were suddenly banished.

Misunderstandings about the machines aren’t without precedent: In the 1970s, those who half-interestedly glanced at the windows of a Byte Shop probably thought personal computers might be good for saving recipes or doing light bookkeeping, but it’s not likely the majority divined the breadth of the PCs’ applications. 3D printers are in an analogous position today. They may be Etsy-ready tools, but the truth is they’re positioned to revolutionize manufacturing and medicine.

In a WSJ column, Christopher Mims explains how carbon-fiber 3D-printing can deliver the “strength of metal for the cost of plastics.” An excerpt:

Not long ago I held the product of such a potentially game-changing technology in my hands—a small, intricately detailed component for a valve. It looked like the shell of a nautilus from an alien planet. With its combination of lightness, strength and finish, the component felt very much like the future. And not just the next five years, but the next 50.

The object I held was unusual for two reasons: what it was made of, and how it was made.

It was made of carbon fiber, a man-made material used in airplanes, race cars and wind turbines that is stronger, ounce for ounce, than steel or aluminum. But it is expensive, and surprisingly labor intensive to make, requiring workers to cut, layer and mold sheets of plastic infused with carbon fiber—an oddly 18th century approach to making a 21st century material.

This carbon-fiber component had been made on a 3-D printer, a gadget more often associated with spitting out plastic novelties.•


Speaking of Stephen Wolfram, the scientist recently did an Ask Me Anything at Reddit, addressing the topic of whether it’s possible to create a computational model of history. At first blush, it would seem impossible, considering how many things can seemingly turn on a single incident or accident. Wolfram acknowledges he doesn’t have an answer, though he won’t dismiss the idea out of hand. After all, biology, which is capable of being mapped, is itself a type of history.

An excerpt:

That’s an interesting issue: is there a “theory of history” or is too much of it accidental? There clearly are some aspects of history that can be modeled, and indeed people have used my kinds of models to do this. (Think e.g. computational agents in a market, social system, etc.)

Biology gives us another example of a historical record … where perhaps more has been played out than in human history. One of the things one can ask is whether the organisms that exist are a consequence of particular historical accidents … or whether they’re somehow theoretically determined, e.g. by filling out a space of all possibilities. I was somewhat surprised to discover, at least in the particular cases I looked at (e.g. http://www.wolframscience.com/nksonline/section-8.6 ) that there was a lot of predictability in the set of possible organisms.

Does something similar apply to human history? I don’t know. I suspect we haven’t had enough independent societies etc. etc. to see the same kind of phenomena as in biology. I note that in Wolfram Language (and Wolfram|Alpha) we now have a lot of historical country data … and it’s remarkable to watch the evolution of the countries of the world with time: it looks remarkably “biological,” and perhaps amenable to theory.•
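To make “computational agents in a market” concrete, here is a toy sketch of my own–an illustration of the general agent-based-modeling idea, not Wolfram’s code: agents swap wealth at random, so any individual’s fortune is pure historical accident, yet the aggregate distribution comes out looking nearly identical on every run. Predictability from accident, in miniature:

```python
import random

# Toy agent-based "market": each step, a random agent hands one unit of
# wealth to another random agent. Individual histories are accidents;
# the aggregate distribution is reproducibly exponential-like.

N_AGENTS, N_STEPS = 1000, 200_000
wealth = [10] * N_AGENTS  # everyone starts equal

for _ in range(N_STEPS):
    giver = random.randrange(N_AGENTS)
    taker = random.randrange(N_AGENTS)
    if wealth[giver] > 0:  # no debt allowed in this toy economy
        wealth[giver] -= 1
        wealth[taker] += 1

wealth.sort()
top_decile_share = sum(wealth[-N_AGENTS // 10:]) / sum(wealth)
# On typical runs this lands near one-third -- stable across reruns even
# though which agents end up rich is completely accidental.
print(f"top 10% of agents hold {top_decile_share:.0%} of the wealth")
```

Run it a few times: which agents get rich changes, but the shape of the distribution–and the top decile’s share–barely does.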


If our species is fortunate (and wise) enough to survive deep into the future, we’ll continually redefine why we’re here. I doubt anyone would want people in 2325 to subsist on currency paid to them for piecing together fast-food sandwiches. Those types of processes will be automated and everyone will hopefully be working on more substantial issues. 

The problem is, we really don’t need humans doing that job right now. And pretty soon, we won’t need delivery drivers, truck drivers, taxi drivers, bellhops, front-desk agents, wait staff, cooks, maintenance people and workers in many other fields, a number of them white-collar positions that were traditionally deemed “safe.” In addition to figuring out what our new goals need to be, that type of technological unemployment could bring about a serious distribution problem. If the transition occurs too quickly, smart policy will need to be promptly deployed.

In the Edge piece “AI and the Future of Civilization,” Stephen Wolfram tries to answer the bigger question of what role humans will play as automation becomes ubiquitous. The scientist believes our part will be to invest the new machines with goals. He says “that’s what humans contribute, that’s what our civilization contributes—execution of those goals.”

The opening:

Some tough questions. One of them is about the future of the human condition. That’s a big question. I’ve spent some part of my life figuring out how to make machines automate stuff. It’s pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What’s the future of the human condition in that situation?

More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we’ve had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions’ work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it’s trying to execute.

People talk about the future of the intelligent machines, and [whether] intelligent machines are going to take over and decide what to do for themselves. But while figuring out, given a goal, how to execute it is something that can meaningfully be automated, the actual inventing of the goal is not something that in some sense has a path to automation.

How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by a small history of their cultural environment, the history of our civilization. Goals are something that are uniquely human. It’s something that almost doesn’t make any sense. We ask, what’s the goal of our machine? We might have given it a goal when we built the machine.•


Has there ever been a biography written about Alvin Toffler, the sociological salesman whose pants are forever being scared off? I don’t see one on Amazon. I’d love to know what it was about his life that positioned him, beginning in the 1960s, to look ahead at our future and be shocked. There’s always been a strong sci-fi strain to his work, though it’s undeniably important to think about how science and technology could go horribly wrong. By imagining the worst, perhaps we can avoid it. Likewise it’s vital to realize that exploring these uncharted frontiers may be key to saving us from extinction.

A passage about genetic engineering, a fraught field but one with tremendous promise, from a 1978 Omni interview with Toffler conducted by leathery beaver merchant Bob Guccione:

Omni:

What’s good about genetic engineering?

Alvin Toffler:

Genetic manipulation can yield cheap insulin. It can probably help us solve the cancer riddle. But, more important, over the very long run it could help us crack the world food problem.

You could radically reduce reliance on artificial fertilizers–which means saving energy and helping the poor nations substantially. You could produce new, fast-growing species. You could create species adapted to lands that are now marginal, infertile, arid, or saline. And if you really let your long-range imagination roam, you can foresee a possible convergence of genetic manipulation, weather modification, and computerized agriculture–all coming together with a wholly new energy system. Such developments would simply remake agriculture as we’ve known it for 10,000 years.

Omni:

What is the downside?

Alvin Toffler:

Horrendous. Almost beyond our imagination. When you cut up genes and splice them together in new ways, you risk the accidental escape from the laboratory of new life forms and the swift spread of new diseases for which the human race has no defenses.

As is the case with nuclear energy, we have safety guidelines. But no system, in my view, can ever be totally fail-safe. All our safety calculations are based on certain assumptions. The assumptions are reasonable, even conservative. But none of the calculations tell what happens if one of the assumptions turns out to be wrong. Or what to do if a terrorist manages to get a hold of the crucial test tube.

A lot of good people are working to tighten controls in this field. NATO recently issued a report summarizing the steps taken by dozens of countries from the U.S.S.R. to Britain and the U.S. But what do we do about irresponsible corporations or nations who just want to crash ahead? And completely honest, socially responsible geneticists are found on both sides of an emotional debate as to how–or even whether–to proceed.

Farther down the road, you also get into very deep political, philosophical, and ecological issues. Who is to write the evolutionary code of tomorrow? Which species shall live and which shall die out? Environmentalists today worry about vanishing species and the effect of eliminating the leopard or the snail darter from the planet. These are real worries, because every species has a role to play in the overall ecology. But we have not yet begun to think about the possible emergence of new, predesigned species to take their place.•


Oliver Morton’s excellent The Planet Remade encourages earthlings to use every tool in the shed, even geoengineering, in combating climate change. Some blanch at willfully messing with mother nature, but we already knowingly tamper with the environment in large-scale ways with chemical fertilizers, fossil fuels, etc.

Many hopeful of colonizing Mars similarly like to think outside the box–outside the dome–and try to figure out how humans might make it in outer space without living permanently indoors and/or stuffed inside of spacesuits. For the foreseeable future, the challenge of terraforming an entire planet, or even a good chunk of one, seems insurmountable. If our species or some variation of it persists long enough, however, the seemingly impossible may become plausible.

From Matteo Ceriotti at The Conversation:

The final requirement for a space colony will be keeping the climate habitable. Atmospheric composition and climate on other celestial bodies are very different to Earth’s. There is no atmosphere on the moon or asteroids, and on Mars the atmosphere is made mainly of carbon dioxide, producing surface temperatures of 20°C down to -153°C during winter at the poles, and an air pressure just 0.6% of Earth’s. In such prohibitive conditions, settlers will be limited to living inside the isolated habitats and strolls outside will only be possible using spacesuits.

One alternative solution may be to change the planet’s climate on a large scale. We’re already studying such “geo-engineering” as a way to respond to Earth’s climate change. This would require huge effort but similar techniques could be scaled and applied for example to other planets such as Mars.

Possible methods include bioengineering organisms to convert carbon dioxide in the atmosphere to oxygen, or darkening the Martian polar caps to reduce the amount of sunlight they reflect and increase the surface temperature. Alternatively, a large formation of orbiting solar mirrors could reflect the light of the sun on specific regions such as the poles to cause a local increase in temperature. Some have speculated that such relatively small temperature changes could trigger the climate to take on a new state with much higher air pressure, which could be the first step towards terraforming Mars.•
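For a rough sense of what albedo-darkening alone can buy–my back-of-envelope numbers, not Ceriotti’s–the standard radiative-balance formula for a planet’s effective temperature is

$$T_{\mathrm{eff}} = \left(\frac{S\,(1-A)}{4\sigma}\right)^{1/4},$$

where S is the solar constant at the planet, A its Bond albedo, and σ the Stefan-Boltzmann constant. Plugging in Mars’ values (S ≈ 590 W/m², A ≈ 0.25, σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴) gives T_eff ≈ 210 K; darkening the planet to A ≈ 0.15 raises that only to about 217 K. In other words, the direct payoff is a few degrees–the real leverage in such schemes is the feedback the excerpt gestures at, where a small nudge releases frozen CO₂ and pushes the climate toward a new, thicker-atmosphere state.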


Long before John Lilly used Apple IIs to attempt to speak to dolphins, the LINC, the first modern personal computer, was his tool of choice in trying to coax conversation from the marine mammals. That was in the 1960s, the decade in which physicist Wesley A. Clark, realizing that microchips would progressively get much smaller and cheaper, led a team that built the not-quite-yet-portable PC, which ran counter to the popular idea of computers as shared instruments. It retailed at $43,000. 

Clark just died at 88. From John Markoff’s NYT obituary of the scientist: 

He achieved his breakthroughs working with a small group of scientists and engineers at the Lincoln Laboratory of the Massachusetts Institute of Technology in the late 1950s and early ’60s. Early on they had the insight that the cost of computing would fall inexorably and lead to computers that were then unimaginable.

Severo Ornstein, who as a young engineer also worked at Lincoln in the 1960s, recalled Mr. Clark as one of the first to have a clear understanding of the consequences of the falling cost and shrinking size of computers.

“Wes saw the future 15 years before anyone else,” he said.

Mr. Clark also had the insight as a young researcher that the giant metal cabinets that held the computers of the late 1950s and early ’60s would one day vanish as microelectronics technologies evolved and circuit sizes shrank.

Each LINC had a tiny screen and keyboard and comprised four metal modules. Together they were about as big as two television sets, set side by side and tilted back slightly. The machine, a 12-bit computer, included a one-half megahertz processor. (By contrast, an iPhone 6s is thousands of times faster and has 16 million times as much memory.)

A LINC sold for about $43,000 — a bargain at the time — and Digital Equipment, the first minicomputer company, ultimately built them commercially, producing 50 of the original design.

The influence of the LINC was far-reaching. For example, as a Stanford undergraduate, Larry Tesler, who would go on to become an early advocate of personal computing and who helped design the Lisa and Macintosh at Apple Computer, programmed a LINC in the laboratory of the molecular biologist Joshua Lederberg.•


This weekend, I tweeted a link to a 2014 Tony Hiss Smithsonian article about E.O. Wilson’s “Half-Earth” proposal for combating biodiversity loss. This plan suggests we set aside 50% of the planet’s surface for non-human species, which would not only help safeguard them but us as well. There are some, like Stewart Brand, who think we’ll soon be able to de-extinct at will, but the ability to repopulate is far from certain and full of unintended consequences. Better to preserve what we have while learning to create (or re-create) even more.

Coincidentally, Aeon has published a piece by Wilson on the topic today, the first essay the great biologist has penned for the online magazine. Among other things, he explains why 50% isn’t just a nice round number but a key one, and why rising consumption won’t doom the project. An excerpt:

Today, every sovereign nation in the world has a protected-area system of some kind. All together the reserves number about 161,000 on land and 6,500 over marine waters. According to the World Database on Protected Areas, a joint project of the United Nations Environmental Program and the International Union for Conservation of Nature, they occupied by 2015 a little less than 15 per cent of Earth’s land area and 2.8 per cent of Earth’s ocean area. The coverage is increasing gradually. This trend is encouraging. To have reached the existing level is a tribute to those who have led and participated in the global conservation effort.

But is the level enough to halt the acceleration of species extinction? Unfortunately, it is in fact nowhere close to enough. The declining world of biodiversity cannot be saved by the piecemeal operations in current use alone. The extinction rate our behaviour is now imposing on the rest of life, and seems destined to continue, is more correctly viewed as the equivalent of a Chicxulub-sized asteroid strike played out over several human generations.

The only hope for the species still living is a human effort commensurate with the magnitude of the problem. The ongoing mass extinction of species, and with it the extinction of genes and ecosystems, ranks with pandemics, world war, and climate change as among the deadliest threats that humanity has imposed on itself. To those who feel content to let the Anthropocene evolve toward whatever destiny it mindlessly drifts, I say please take time to reconsider. To those who are steering the growth of reserves worldwide, let me make an earnest request: don’t stop, just aim a lot higher.•


Among space-exploration enthusiasts, Jean-Jacques Dordain, former Director General of the European Space Agency, is something of a dissenter. While he thinks humans traveling to Mars inevitable, he’s not among those contemporary thinkers (Stephen Hawking, Freeman Dyson, Elon Musk, etc.) who believe Homo sapiens is capable of being a multi-planet species. He’s probably right if we’re talking about the immediate future but almost definitely wrong if the long-term one is considered.

In a 2014 RT interview conducted by Sophie Shevardnadze, Dordain contended that space is too inhospitable to allow a Manifest Destiny on Mars and more. There were unfortunately no follow-up questions about domed environments and 3D-printed structures and such, so I don’t know precisely why he felt that way. In contrast, his successor at ESA, Johann-Dietrich Wörner, immediately proposed building moon colonies after taking office in 2015.

From RT:

Question:

Are you actually an advocate of Mars colonization?

Jean-Jacques Dordain:

Colonization – I don’t know, but we should certainly go to Mars with humans, and we should certainly stay on Mars – humans will stay on Mars, I think it is just a matter of calendar. I never said “if” – it’s “when.” We have some time. If you go to Mars 10 years later – what’s the difference? It may make a difference for me, because I will not see it, but it will not make a difference for humanity. I must say, if we had gone to the North Pole 50 years later than we did – it would not change anything. I am convinced, yes, that humans will go to Mars; for me it’s not a question, it’s just a calendar.

Question:

I’m just trying to understand what’s at the root of…you’re saying “exploration is inherent for mankind, exploration makes us human and it must involve a human presence” – so you are for human presence everywhere, but – is it exploration just for discovery or exploration to conquer?

Jean-Jacques Dordain:

I think it’s more for discovery and also to make the future on planet Earth possible. I must say that there is no alternative to planet Earth for humanity. This is maybe something that we have learned from space. There is no other place where this humanity can live. We cannot live on different planets in the Solar System, and going to an exoplanet would be much too far away, at least with the technologies that we know. So, we have no alternative but to stay together on planet Earth. Now, does that mean that we should continue to find all the resources that we need just on planet Earth, or should we find some raw resources on other planets or on the Moon, for example? I don’t know. That is number one. Number two – going to the other planets is also to understand the future of planet Earth. A couple of billion years ago, Mars, Earth and Venus were sister planets – and they have evolved very differently. There was water on Mars, we know it; there was, certainly, an atmosphere around Mars. Where is the water? We still find some traces. Where is the atmosphere? Today, we are living on planet Earth because there is water and an atmosphere, so understanding why Mars has changed so dramatically since its creation would certainly be very interesting, to understand where we are going ourselves. So, planet Earth is not isolated. I remember that I once started a speech by saying “space does not belong to Earth” – it’s the Earth which belongs to space, and we don’t have a chance to understand planet Earth if we don’t understand the overall system we are living in. So I think that the Moon is not anymore “something” – it belongs to our environment. Mars – also, Venus – also, so I think that we have to understand that and we have to explore, because exploring Mars is also exploring planet Earth. Our future is on planet Earth, and we have to make our future possible. I am sure that, to make the future of humanity on planet Earth possible – not for me, it’s too late – we’d better understand the system we are living in.•

Americans won’t likely always settle for bread and Kardashians.

You could make a strong case that Donald Trump, the fluffer for a white supremacist porn film, has been significantly aided by our odd descent into un-reality, our constant desire to binge on entertainment. But Bernie Sanders' surprising rise, though likely an abridged one, reminds us that the very real Occupy unrest which informed the 2012 election season has dissipated no more than income inequality itself has. While these uneasy starts may never culminate in any elected official being able to reconfigure our system from the inside, the pressure from without may ultimately grow strong enough to make an impact.

Yet still there are rationalizations. The Libertarian economist Russ Roberts believes the Gig Economy isn't populated mostly by struggling citizens but by entrepreneurs driving Ubers and Lyfts only until venture capital allows them to permanently park their mustaches. Roberts' partner at the Cafe Hayek site, Don Boudreaux, offers up a doozy as well with his post "Most Ordinary Americans in 2016 Are Richer Than Was John D. Rockefeller in 1916." The tacit implication is that since we now have antibiotics, Android phones and Amazon Prime, the fall of the middle class isn't really so troubling.

We've enjoyed technological and material progress in eras when we've had a fair tax code and in ones in which we haven't. It's silly to suggest we need to cling to what's become a lopsided society because we like penicillin. It's also tone-deaf analysis when many in our country struggle in this new Gilded Age to pay for the basics of shelter, food, health and education.

Below are a passage from Boudreaux's post and the opening of Barry Ritholtz's riposte in Bloomberg View.

________________________________

  • From Boudreaux, a passage about life in 1916:

While you might have air-conditioning in your New York home, many of the friends’ homes that you visit – as well as restaurants and business offices that you frequent – were not air-conditioned.  In the winter, many were also poorly heated by today’s standards.

To travel to Europe took you several days.  To get to foreign lands beyond Europe took you even longer.

Might you want to deliver a package or letter overnight from New York City to someone in Los Angeles?  Sorry. Impossible.

You could neither listen to radio (the first commercial radio broadcast occurred in 1920) nor watch television.  You could, however, afford the state-of-the-art phonograph of the era.  (It wasn’t stereo, though.  And – I feel certain – even today’s vinylphiles would prefer listening to music played off of a modern compact disc to listening to music played off of a 1916 phonograph record.)  Obviously, you could not download music.

There really wasn’t very much in the way of movies for you to watch, even though you could afford to build your own home movie theater.

Your telephone was attached to a wall.  You could not use it to Skype.•

________________________________

  • From Ritholtz:

Today’s discussion involves a visit to the here-we-go-again files. The website Cafe Hayek, in a post titled “Most Ordinary Americans in 2016 Are Richer Than Was John D. Rockefeller in 1916,” asks a seemingly simple question: What is the minimum amount of money that you would demand in exchange for going back to live as John D. Rockefeller did in 1916?

The obvious point here is that we are doing better than the richest man of a century ago. Yet there’s a subtext (which becomes pretty clear by looking at the comments on the post): That all of this talk about wealth and income inequality — an important theme in this year’s presidential election — can and should be ignored. After all, as some have noted, even many of the poorest Americans own a smartphone today, whereas a century ago not even the wealthiest person on Earth had one.

I have addressed the logical failings of this kind of comparative exercise before (see this and this). For one thing, if you are going to make a temporal argument, you must recognize that time is two-sided. Yes, it is true, the average American in so many ways is better off than the rich were 100 years ago. However, by that same logic, everyone today — rich, poor and middle-class — is much worse off than the poor of 100 years from now. It's easy to consider what the folks will say about long-dead us in 2116: "Imagine — they were mortal, gave birth, bred animals to be killed and eaten, drove their own cars. How primitive!"

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.•

You would understandably disagree if you or yours met with the business end of a drone, but these modern weapons aren’t, militarily speaking, the worst thing. 

Worse is a ground-battle quagmire that seems to stretch on endlessly, until, as in Iraq, the dead are so numerous you can't make an exact accounting of them. Even though it's strategically far from perfect as well as morally dubious, the U.S. drone offensive against ISIS, Al Qaida, et al., hasn't been nearly as destructive. The catch is that while selective strikes are responsible for far less collateral damage than pre–push-button offensives, traditional wars always offered us an out. Operating less-accurate arms inside the fog of war, we could tell ourselves that things just happened. No one meant to inflict so much carnage – that's just the nature of combat. That was true to some extent, though the escape clause was applied liberally, eliding some of the horror of the whole business, even if only as a psychological trick.

Precision has, more or less, arrived with drones, and that means fewer excuses along with fewer deaths. We definitively pick and choose who lives and dies and execute those decisions. Drones, then, aren’t an impersonal way to conduct war despite the remoteness of the soldiers. In Thomas Nagel’s London Review of Books piece “Really Good At Killing,” which meditates on Scott Shane’s Objective Troy: A Terrorist, a President and the Rise of the Drone, the philosopher addresses this thorny technological development.

An excerpt:

The 2010 United Nations report on targeted killings by Philip Alston says of drones that ‘because operators are based thousands of miles away from the battlefield, and undertake operations entirely through computer screens and remote audio-feed, there is a risk of developing a “Playstation” mentality to killing.’ But Shane contends credibly that this is not borne out by the experience of those who have done it, and who report an acute and disturbing awareness of the individual humanity of those they observe – not only the non-combatants nearby but also their intended targets. ‘The psychological toll on drone pilots and sensor operators was, paradoxically, far greater than on those who flew traditional fighters and bombers,’ he says.

The personal character of this kind of killing goes all the way to the top. Obama ‘did not trust the agencies carrying out the strikes to grade their own work. He felt it was his responsibility to invest the time – hours each week – to keep abreast of the operations and often to exercise his own judgment about what was justified and what was too risky.’ ‘He was the ultimate arbiter of a “nominations” process to designate terrorists for kill or capture, and there were virtually no captures by American agencies … When the CIA sent word that there was a rare opportunity for a drone strike on a top terrorist – but that his family was with him – it was the president who had reserved to himself the final moral calculation.’ ‘On several occasions, he told aides, with chagrin, that as president he had discovered an unexpected talent. “It turns out,” he said, “that I’m really good at killing people.”’

The president as killer is a chilling new face of the role of commander-in-chief. I suspect that it is the personal, individualised nature of drone warfare that many people find so repellent. It is easier to be resigned to the slaughter of faceless multitudes by conventional missiles, bombs and artillery, with the inevitable attendant collateral damage, in pursuit of legitimate military objectives. War is hell, as we all know. But when the president puts someone on a kill list to be taken out by a precise drone strike, it creates the illusory sense of a more direct responsibility for that death than for the other kind. It feels like an execution, though it is just retail warfare, and the responsibility, individual and collective, is equally great in both cases.

Does it make a moral difference that this kind of killing exposes the killers to no physical risk? Is it a condition on the acceptability of warfare that those who kill should put their lives on the line?•

Prior to the rise of the Internet and the fall of the Towers, is it possible we were unwittingly living in a golden age? Maybe for a moment.

If the 1990s were a good time, they were only briefly so. In the United States, the decade began with liberal Bill Clinton, Nirvana and brick-and-mortar, which gave way before the bell tolled to conservative Bill Clinton, Marcy Playground and point-and-click. In his latest Financial Times column, Douglas Coupland has warm thoughts about the pre-Internet era, fondly recalling the shopping mall, its fabricated community and food courts and fake trees, before we shrank it all down to fit inside our phones. The opening:

On August 11 1992 I was in Bloomington, Minnesota, close to Minneapolis. I was on a book tour and it was the grand opening day of Mall of America, the biggest mall in the US. The local radio affiliate had a booth set up in front of the indoor roller coaster that strafed the booth like an air strike every 75 seconds. I was up on the stage with them doing a live interview for half an hour while thousands of people were walking by with “country fair face” — goggle-eyed and feeding on ice cream. I felt like I was inside a Technicolor movie from the 1950s. The show’s host assumed I was going to be an ironic, slacker wise-ass and said: “I guess you must think this whole mall is kind of hokey and trashy,” and I said: “No such thing.”

He was surprised. “What do you mean?”

“I mean that I feel like I’m in another era that we thought had vanished, but it really hasn’t, not yet. I think we might one day look back on photos of today and think to ourselves, ‘You know, those people were living in golden times and they didn’t even know it. Communism was dead, the economy was good and the future with all of its accompanying technologies hadn’t crushed society’s mojo like a bug.’”

Silence.

And it’s true. Technology hadn’t hollowed out the middle class and turned us into laptop click junkies, and there were no new bogeymen hiding in the closet. We may well look back at the 1990s as the last good decade.•

_________________________________

Mall of America, opening weekend, August 1992.
