
Part 2 of Alexander R. George’s 1938 Brooklyn Daily Eagle feature about life 25 years in the future isn’t quite as daring as Part 1, focusing instead on sensible if very hopeful predictions for a society still dealing with the fallout of the Great Depression and yet to lead the Allies to victory in WWII: longer lifespans, healthier citizens, etc. Perhaps most interesting are the fashion prognostications. Americans did wear fewer and less-formal clothes by 1963, and women discarded corsets, but those expected glass raincoats that protected against lightning never did come to pass.



Speaking of desolate factories, here’s another twist: China’s rising middle class and aging population have made it difficult for many plants to find the necessary supply of cheap labor. The nation’s economic problems have temporarily eased the migration away from factory work, but automation seems the only answer in the longer run. As Daniel Kahneman has said: “robots will show up in China just in time.”

From Ben Bland in the Financial Times:

Factory managers tell the same story at industrial estates across Dongguan, in the southern province of Guangdong, which has gone from small town to a megacity of more than 7m people over the past two decades as manufacturers flooded in to take advantage of the vast supply of cheap labour.

A closely watched factory survey on Wednesday revealed that Chinese manufacturing is having its worst month since the depths of the financial crisis. But while heavily indebted steel and cement plants lie idle across China and luxury goods manufacturers have seen their sales slump, the jobs market in the south, one of the country’s most economically dynamic regions, is still in good health for those willing to travel from elsewhere and take up tough manufacturing work.

The problem for factory owners is that their pool of labour has been shrinking in recent years because of the rapidly ageing population and the growth of less-stressful job openings in the service sector, selling cappuccinos and cinema tickets to China’s expanding middle class.

Facing this long-term challenge, many manufacturers from TAL to Foxconn, the Taiwanese maker of iPhones and iPads, are increasing their investments in robots and automation.•


Douglas Coupland, when working as a member of Steven Spielberg’s think tank on Minority Report, advised the director that the future would be much quieter. Well, certain parts of it will be, and some of them are pieces that rumbled with human activity for as long as they existed. Actually, in many cases, Coupland’s tomorrow has already arrived. 

Photographer Edgar Martins’ new series, 00:00.00, captures eerie moments when cutting-edge Munich BMW factories are slowed to a stop, their almost-humanless hum silenced, reminding us that they exist and operate while we busy ourselves elsewhere. Soon, these operations won’t need even a few of us, and we’ll have to find other things to do with our time. We’ll have to redefine what humans are here for, a process that will continue as long as we do, and we’ll require fresh political solutions to navigate this new normal.

From Adele Peters at Fast Company:

“Factories and data processing centers are, perhaps, the most relevant production centers of our times,” Martins says. “I’m interested in how technology is shaping our lives and how we have become increasingly dependent on it, for better or worse. I’m also interested in the notion of technological utopias and the dreams and aspirations we attach to technological advancements and progress.”

Car factories have helped shape the world we live in and will reshape it again as companies react to climate change, he says. “The automotive industry faces some major challenges over the next decades, as it aims to deal with the inherent shortcomings and pitfalls of the internal combustion engine and its environmental repercussions.”•


Robots have been members of the U.S. military since at least 1928, and the only question in the long term is whether our warriors will ultimately be wholly silicon or if we’ll use brain chips, drugs, exoskeleton suits and genetic manipulation to alter humans into fighting “machines.” We’ll certainly develop both, but the former’s lack of flexibility for the foreseeable future makes battlefield Transhumanism, dicey though it is from an ethical standpoint, more doable for now.

Questions abound for this new arms race: If war is relatively painless (for one side, in some cases), will it make aggression more attractive? How will these experiments in pain vaccines and teleportation eventually inform civilian life? Will humanitarian crises like Syria’s collapse be eliminated by these tools?

In “Engineering Humans for War,” Annie Jacobsen’s excellent Atlantic article, she looks at DARPA’s goal of creating a real-life Iron Man in numerous ways, including a super-soldier suit called TALOS (Tactical Assault Light Operator Suit), which the department expects to have operational by 2018. An excerpt:

For decades after its inception in 1958, the Defense Advanced Research Projects Agency—DARPA, the central research and development organization of the Department of Defense—focused on developing vast weapons systems. Starting in 1990, and owing to individuals like [Retired four-star general Paul F.] Gorman, a new focus was put on soldiers, airmen, and sailors—on transforming humans for war. The progress of those efforts, to the extent it can be assessed through public information, hints at war’s future, and raises questions about whether military technology can be stopped, or should.

Gorman sketched out an early version of the thinking in a paper he wrote for DARPA after his retirement from the Army in 1985, in which he described an “integrated-powered exoskeleton” that could transform the weakling of the battlefield into a veritable super-soldier. The “SuperTroop” exoskeleton he proposed offered protection against chemical, biological, electromagnetic, and ballistic threats, including direct fire from a .50-caliber bullet. It “incorporated audio, visual, and haptic [touch] sensors,” Gorman explained, including thermal imaging for the eyes, sound suppression for the ears, and fiber optics from the head to the fingertips. Its interior would be climate-controlled, and each soldier would have his own physiological specifications embedded on a chip within his dog tags. “When a soldier donned his ST [SuperTroop] battledress,” Gorman wrote, “he would insert one dog-tag into a slot under the chest armor, thereby loading his personal program into the battle suit’s computer,” giving the 21st-century soldier an extraordinary ability to hear, see, move, shoot, and communicate.

At the time Gorman wrote, the computing technology needed for such a device did not yet exist.•


I was reading a 1908 Brooklyn Daily Eagle article about Red Cloud, and it reminded me of a passage from the opening chapter of Ian Frazier’s excellent 2000 book, On the Rez. In telling about the Oglala Lakota chief’s visit to the White House in 1870, Frazier examined our age and came to some troubling conclusions, all of which seem even truer 15 years on. Real freedom in our corporatocracy is more expensive than ever, but it’s cheap and easy to be discarded. The excerpt:

    In 1608, the newly arrived Englishmen at Jamestown colony in Virginia proposed to give the most powerful Indian in the vicinity, Chief Powhatan, a crown. Their idea was to coronate him a sub-emperor of Indians, and vassal to the English King. Powhatan found the offer insulting. “I also am a King,” he said, “and this is my land.” Joseph Brant, a Mohawk of the Iroquois Confederacy between eastern New York and the Great Lakes, was received as a celebrity when he went to England with a delegation from his tribe in 1785. Taken to St. James’s Palace for a royal audience, he refused to kneel and kiss the hand of George III; he told the King that he would, however, gladly kiss the hand of the Queen. Almost a century later, the U.S. government gave Red Cloud, victorious war leader of the Oglala, the fanciest reception it knew how, with a dinner party at the White House featuring lighted chandeliers and wine and a dessert of strawberries and ice cream. The next day Red Cloud parleyed with the government officials just as he was accustomed to on the prairie—sitting on the floor. To a member of a Senate select committee who had delivered a tirade against Sitting Bull, the Hunkpapa Sioux leader carelessly replied, “I have grown to be a very independent man, and consider myself a very great man.”

     That self-possessed sense of freedom is closer to what I want; I want to be an uncaught Indian like them.

Another remark which non-Indians often make on the subject of Indians is “Why can’t they get with the program?” Anyone who talks about Indians in public will be asked that question, or variations on it, over and over: Why don’t Indians forget all this tribal nonsense and become ordinary Americans like the rest of us? Why do they insist on living in the past? Why don’t they accept the fact that we won and they lost? Why won’t they stop, finally, being Indians and join the modern world? I have a variety of answers handy. Sometimes I say that in former days “the program” called for the eradication of Indian languages, and children in Indian boarding schools were beaten for speaking them and forced to speak English, so they would fit in; time passed, cultural fashions changed, and Hollywood made a feature film about Indians in which for the sake of authenticity the Sioux characters spoke Sioux (with English subtitles), and the movie became a hit, and lots of people decided they wanted to learn Sioux, and those who still knew the language, those who had somehow managed to avoid “the program” in the first place, were suddenly the ones in demand. Now, I think it’s better not to answer the question but to ask a question in return: What program, exactly, do you have in mind?

    We live in a craven time. I am not the first to point out that capitalism, having defeated Communism, now seems to be about to do the same to democracy. The market is doing splendidly, yet we are not, somehow. Americans today no longer work mostly in manufacturing or agriculture but in the newly risen service economy. That means that most of us make our living by being nice. And if we can’t be nice, we’d better at least be neutral. In the service economy, anyone who sat where he pleased in the presence of power or who expatiated on his own greatness would soon be out the door. “Who does he think he is?” is how the dismissal is usually framed. The dream of many of us is that someday we might miraculously have enough money that we could quit being nice, and everybody would then have to be nice to us, and niceness would surround us like a warm dome. Certain speeches we would love to make accompany this dream, glorious, blistering tellings-off of those to whom we usually hold our tongue. The eleven people who actually have enough money to do that are icons to us. What we read in newsprint and see on television always reminds us how great they are, and we can’t disagree. Unlike the rest of us, they can deliver those speeches with no fear. The freedom that inhered in Powhatan, that Red Cloud carried with him from the plains to Washington as easily as air—freedom to be and to say, whenever, regardless of disapproval—has become a luxury most of us can’t afford.•



There are many negatives about having an American military class so discrete from the rest of the nation, and one of them, as outlined in “Welfare’s Last Stand,” Jennifer Mittelstadt’s Aeon article, is that the armed forces no longer serve as an incubator for benefits that can later be expanded to the rest of society. The G.I. Bill after WWII, for example, educated a generation of vets and made grants and loans for higher education a goal for all citizens.

While the draft ended nearly a decade before Ronald Reagan took office, the political historian details how his Administration was the one where benefits for the military and the rest of the country–the huge majority of people–came to a fork in the road. It’s not that our troops don’t deserve exceptional benefits, but separating us into heroes and “welfare queens” is a most unfortunate division. An excerpt:

This post-1973 military welfare state played a different role in US life than most earlier types of military welfare. For one, military welfare no longer served as a reward for the services of citizen soldiers. Instead, it sustained the volunteer force: it lured new recruits, supported them while on duty, and convinced them to re‑enlist.

More importantly, earlier versions of military welfare catalysed broader social welfare programmes for the US populace. Civil War pensions pioneered federal retirement and disability payments, and paved the way for civilian retirement pensions. Veterans’ healthcare after the First World War created the first model of government health provision. And the Second World War-era GI Bill vaulted millions of former civilian draftees and their families into the middle class, legitimising government support for education and housing for all Americans.

The modern military welfare state of the post-1973 era never stimulated social welfare for the populace. Quite the opposite. As a smaller number and narrower cross-section of Americans volunteered for military service in the late 20th century, the divide between the military and civilians grew. So, too, did the divide between the new military welfare state and the existing civilian one. From the 1970s to the early ’90s, while many civilian welfare programmes contracted, public and private unions declined, and employers cut private employment benefits, the military expanded its welfare functions.

How did this happen?•


John McAfee is as paranoid and prescribed as Philip K. Dick, but that doesn’t mean he’s writing fiction when he imagines that planes are vulnerable to cyberterrorists. The anti-virus VIP and former fugitive from Belizean justice thinks America needs a serious course correction or hackers at home, not on-board hijackers, will perpetrate 9/11 2.0. 

From McAfee’s latest International Business Times column:

A person does not have to physically board a plane in order to take control of it. Even though Chris boarded a flight to Philadelphia and used the entertainment system to demonstrate the weaknesses inherent in airline control systems, he has spoken out stating the obvious: anyone with moderate hacking abilities can go online from anywhere in the world, and take control of our commercial airliners. …

This may sound far-fetched, but it is obvious to anyone following the hacking community. In July, two hackers demonstrated to Wired magazine that they could, from anywhere on the internet, hack into a Jeep automobile manufactured within the past 5 years, take control away from the driver, and run the car into a ditch. The demo was done at 5mph. You can imagine what results would manifest at 50mph.

The architecture of automobile control and flight control systems share one commonality: they were designed in an age where the nuances of cybersecurity were unknown or ignored. They were not designed, first and foremost, with preventing a hack in mind. I could write forever about the impossibilities of providing any security whatsoever given the current approach to security that is being pursued by the TSA, but that would be counterproductive.•


Peter Diamandis is privy to much more cutting-edge technological information than I am, but he’s also more prone to irrational exuberance. I have little doubt driverless cars will be perfected for all climates and conditions at some point in the future, but will there really be more than 50 million autonomous cars on the road by 2035? Well, it is the kind of technology likely to spread rapidly when completed. From a Diamandis Singularity Hub post about the future of transportation, agriculture, and healthcare/elder care:

By 2035 there will be more than 54 million autonomous cars on the road, and this will change everything:

  • Saved Lives: Autonomous cars don’t drive drunk, don’t text and don’t fall asleep at the wheel.
  • Reclaiming Land: You can fit eight times more autonomous cars on our roads, plus you no longer need parking spaces. Today, in the U.S. we devote 10% of the urban land to ~600 million parking spaces, and countless more to our paved highways and roads. In Los Angeles, it’s estimated that more than half of the land in the city belongs to cars in the form of garages, driveways, roads, and parking lots.
  • Saved Energy: Today we give close to 25 percent of all of our energy to personal transportation, and 25 percent of our greenhouse gases are going to the car. If cars don’t crash, you don’t need a 5,000-lb SUV driving around a 100-lb passenger (where 2% of the energy is moving the person, and 98% is to move the metal womb wrapped around them).
  • Saved Money/Higher Productivity: Get rid of needing to own a car, paying for insurance and parking, trade out 4,000-lb. cars for lighter electric cars that don’t crash, and you can expect to save 90% on your local automotive transportation bill. Plus regain 1 to 2 hours of productivity in your life (work as you are driven around), reclaiming hundreds of billions of dollars in the US economy.

Best of all, you can call any kind of car you need. Need a nap? Order a car with a bed. Want to party? Order one with a fully-stocked bar. Need a business meeting? Up drives a conference room on wheels.•
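Diamandis’s 2%/98% split is, to a first approximation, just the passenger’s share of the total moving mass. A quick back-of-the-envelope check, using the 5,000-lb SUV and 100-lb passenger figures from the excerpt above:

```python
# Sanity check of the quoted energy split: roughly, the energy spent
# moving the passenger scales with the passenger's share of the total
# moving mass (passenger + vehicle).
vehicle_lb = 5000    # Diamandis's SUV figure
passenger_lb = 100   # Diamandis's passenger figure

total_lb = vehicle_lb + passenger_lb
passenger_share = passenger_lb / total_lb
vehicle_share = vehicle_lb / total_lb

print(f"passenger: {passenger_share:.1%}, vehicle: {vehicle_share:.1%}")
# passenger: 2.0%, vehicle: 98.0%
```

Rounded, that reproduces the quoted figures: about 2% of the energy moves the person, 98% the metal womb around them.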


Google is a great company, but that doesn’t mean it’s a good one.

When CEO Larry Page urges us to trust the “good corporations” like his, no one should obey, for two reasons: 1) If the search giant is going to remain a powerhouse, it will need to ride information-rich moonshots into all areas of the world, turning every last object and body into a data-producing system. That will be a ferocious war among Google and all its competitors, and ethics may become collateral damage. 2) Even if Page & co. were spotlessly noble, they won’t be here forever (not unless Calico is really successful), and those replacing them and inheriting our information may not be so benign. 

In a Scientific American podcast hosted by Seth Fletcher about privacy in the Digital Age, Jaron Lanier speaks to the corporate-succession issue and many others, including users being paid for their info. Listen here.


One would certainly think that Dr. Ben Carson knows a great deal about neurosurgery, but he understands precious little about American history and our Constitution, and it’s made him espouse a deeply bigoted view of who we are. From a Yahoo! News report about his just-aired Meet the Press appearance: 

Carson, a devout Christian, says a president’s faith should matter to voters if it runs counter to the values and principles of America.

Responding to a question during an interview broadcast Sunday on NBC’s Meet the Press, he described the Islamic faith as inconsistent with the Constitution.

“I would not advocate that we put a Muslim in charge of this nation,” Carson said. “I absolutely would not agree with that.”•

Despite what Carson thinks, our forefathers did not base America on Christianity. From The Stammering Century, Gilbert Seldes’ book about our nation during an earlier extreme age:

When the time came to frame a constitution, God was considered an alien influence and, in the deliberation of the Assembly, his name was not invoked. “Inexorably,” says Charles and Mary Beard in their story of The Rise of American Civilization, “the national government was secular from top to bottom. Religious qualifications …found no place whatever in the Federal Constitution. Its preamble did not invoke the blessings of Almighty God…and the First Amendment…declared that Congress shall make no law respecting an establishment of religion…” In dealing with Tripoli, President Washington allowed it to be squarely stated that “the government of the United States is not in any sense founded upon the Christian religion.”•


Uber should, of course, not be prevented from becoming a company of driverless taxis when innovation makes that possible. But CEO Travis Kalanick’s part-time pose as a champion of Labor is an infuriatingly dishonest stance. Autonomous vehicles and Uber’s business model may both be great in many ways, but they’re not good for workers. Not in the short and medium term, at least, and likely never.

Kalanick, who recently discussed his company’s robotic tomorrow with Marc Benioff, sees the transition to AI coming in 10 or 15 years or so. In commentary on Bloomberg, Forrester analyst James McQuivey thinks the future is just around the bend and Kalanick too conservative in his estimation of the driverless ETA. He also believes Kalanick’s job itself will likely be a casualty of the autonomous revolution (and other factors).


Should millions of jobs, entire industries, be taken over by AI in the near future, without other ones emerging to replace them, political and economic systems would need to quickly adapt and adjust to manage the new reality. One way to prepare would be to experiment with universal basic income, which may or may not prove a panacea.

From Federico Pistono in New Scientist:

How would the millions of telemarketers and taxi drivers, for example – whose jobs are at high risk of being automated – survive in this new landscape? One of the most interesting proposals, and one that does not live in the fanciful world of “the market will figure it out,” is the creation of an unconditional basic income (UBI).

It’s a simple idea with far-reaching consequences. The state would give a monthly stipend to every citizen, regardless of income or employment status. This would simplify bureaucracy, get rid of outdated and inefficient means-based benefits, and provide support for people to live with dignity and find new meaning.

No incentive-killer

The biggest UBI experiments, involving a whole town in Canada and 20 villages in India, have confounded a key criticism – that it would kill the incentive to work. Not only did people not stop working, but they were more likely to start new businesses or perform socially beneficial activities compared with controls. In addition, there was an increase in general well-being, and no increase in public bads such as alcohol and drug use, and gambling.

These early results are promising but not conclusive. We don’t know what would happen in other countries, and whether the same results would apply if millions of people were involved. Forthcoming experiments may give us a clearer picture.•


The spirit of any age must be addressed, even when inconvenient. I doubt Bill Clinton entered politics to be the tough-on-crime President whose policies helped turn the nation into a penal colony within a colony, but there he was in the 1990s, not realizing that crime was about to mysteriously and precipitously decline, waving a badge and a billy club. Clinton likely never dreamed of a scenario in which he would be chastening “welfare queens,” yet there he was doing a better job of it than Ronald Reagan, who coined that odious term. It was no different than when Richard Nixon, having at long last won the White House, argued in favor of universal healthcare and a basic-guaranteed-income tax plan, something he certainly wasn’t considering before the Sixties happened.

Chief among the prevailing winds of our time is wealth inequality, the enduring gift of an Occupy movement that framed a single election and otherwise sputtered out, at least for now. So GOP candidates must, at minimum, pay lip service to the concern. Donald Trump is suddenly a reformer taking aim at hedge-fund managers. Jeb Bush has spoken about how income disparity has threatened the American Dream (without mentioning, of course, that his proposed tax cuts would only exacerbate the situation). Rick Santorum and the sweater-vest wing of the party want to raise the minimum wage. 

What’s true of politicians is also so of economists, and academics have descended on the problem, which makes this moment ideal for Joseph Stiglitz, who’s spent much of his career, from thesis forward, on the topic. In a NYRB piece, James Surowiecki analyzes the economist’s most recent slate of books, finding fault with Stiglitz’s identification of the twin devils of the contemporary financial arrangement: rent-seeking and a lack of corporate oversight. Surowiecki doesn’t believe these issues explain our 99-and-1 predicament. He doesn’t dismiss Stiglitz’s suggestions and likewise sees no reason why CEOs should be earning so much, but he believes a remedy is more complicated.

An excerpt: 

It’s possible, of course, that further reform of corporate governance (like giving shareholders the ability to cast a binding vote on CEO pay packages) will change this dynamic, but it seems unlikely. After all, companies with private owners—who have total control over how much to pay their executives—pay their CEOs absurd salaries, too. And CEOs who come into a company from outside—meaning that they have no sway at all over the board—actually get paid more than inside candidates, not less. Since 2010, shareholders have been able to show their approval or disapproval of CEO pay packages by casting nonbinding “say on pay” votes. Almost all of those packages have been approved by large margins. (This year, for instance, these packages were supported, on average, by 95 percent of the votes cast.)

Similarly, while money managers do reap the benefits of opaque and overpriced fees for their advice and management of portfolios, particularly when dealing with ordinary investors (who sometimes don’t understand what they’re paying for), it’s hard to make the case that this is why they’re so much richer than they used to be. In the first place, opaque as they are, fees are actually easier to understand than they once were, and money managers face considerably more competition than before, particularly from low-cost index funds. And when it comes to hedge fund managers, their fee structure hasn’t changed much over the years, and their clients are typically reasonably sophisticated investors. It seems improbable that hedge fund managers have somehow gotten better at fooling their clients with “uncompetitive and often undisclosed fees.”

So what’s really going on? Something much simpler: asset managers are just managing much more money than they used to, because there’s much more capital in the markets than there once was.•



Think how strange it seems now: Until recently, one or several musicians would disappear for many months into an expensive recording studio and try to conjure something that would fill our ears, and, perhaps, blow our minds. They threw away the vast majority of the results and delivered a few dozen minutes of entertainment. There was a distribution system which not only supported this process but was even marvelously lucrative.

It was all a dream. The decentralization of the media not only usurped the record store but also the records, squeezing the financial value from these commodities, rendering them a mere promotional tool for the few touring acts that can fill arenas. Good luck with that.

During the halcyon music-business decade of the 1970s, one economist and theorist knew the precariousness of the arrangement, how this fraught ecosystem, still spectacularly profitable, was actually endangered. He was Jacques Attali, a Mitterrand adviser who 39 years ago coined the term “crisis of proliferation” in his book Noise: The Political Economy of Music, which foretold the coming perfect storm that would soak the industry. 

In “The Pop Star and the Prophet,” a new BBC Magazine article, singer-songwriter Sam York sought out Attali, wanting to know where the philosopher thought the future was heading. The quick answer is that while he maintains some hope for musicians, Attali thinks what happened to the recording biz is merely prelude. An excerpt:

Attali also had another big idea. He said that music – and the music industry – forged a path which the rest of the economy would follow. What’s happening in music can actually predict the future.

When musicians in the 18th Century – like the composer Handel – started selling tickets for concerts, rather than seeking royal patronage, they were breaking new economic ground, Attali wrote. They were signalling the end of feudalism and the beginning of a new order of capitalism.

In every period of history, Attali said, musicians have been at the cutting edge of economic developments. Because music is very important to us but also highly adaptable it’s one of the first places we can see new trends appearing.

He was right about the “crisis of proliferation”… but if music really does predict the future for the rest of the economy, what does he think it is telling us will happen next?

Attali says manufacturing will be hit by an identical crisis to the music industry, and this time it will be caused by 3D printing.

“With 3D printing, people will print their own cups, furniture,” he says. “Everyone will make their own objects, in the same way they are making their own music.”•



Peter W. Singer, who wrote Wired for War and coauthored (with August Cole) Ghost Fleet: A Novel of the Next World War, just did a really interesting Ask Me Anything at Reddit about the fire next time. In addition to the exchanges below, Singer’s long answer to a question about the U.S. response to Chinese cyberattacks is definitely worth reading.



Do you think that having high-tech weapons and capabilities lowers the threshold for taking military actions? Thinking of drones as an immediate example and cyberwarfare as a coming one.

Peter W. Singer:

Yes, I think that is a risk. The new generation of tech lowers the bar to entry in 2 ways: 1) Unlike atomic bombs or aircraft carriers, they are much easier for nations and even non-state actors and individuals to gain and use. Indeed, over 100 nations already have cyber military units and 80 have drones, while less than 10 have nukes and only one has supercarriers, but 2) The new generation of technology also moves the human role sometimes geographically off the battlefield and sometimes chronologically from the point in time of its use. So it creates distance and among many a perception of less risk. Leaders and their public don’t look at the decision to use force in the same way. So that also lowers the barrier of entry to conflict. Think how we’ve carried out more than 400 drone strikes into Pakistan, but no one thinks of it as a “war.” To be clear, this perception of less risk doesn’t mean the actual costs are removed. The costs hit in everything from casualties in that target area to blowback over the long term. I did a piece a few years back on this.



As I understand it, a large part of U.S. defense and security policy has been built around having a technological superiority over any potential adversary and so I would like to ask you if you believe that this strategy will continue to be viable in the foreseeable future and what potential trends or developments are/might impact it?

Peter W. Singer:

That is exactly one of the key questions the book plays with.

In every fight since 1945 (when Germany had jets and we had prop planes), US forces have been a generation ahead in technology. It has not always translated to decisive victories, but it has been an edge every other nation wants and it’s baked into our assumptions. Yet US forces can’t count on that “overmatch” in the future. Some of our trusted major platforms are now vulnerable to new classes of weapons, but also we face a new tech race. China, for example, just overtook the EU in R&D spending and is on pace to match the US in five years, with new projects ranging from the world’s fastest supercomputers to three different long range drone strike programs. And now off-the-shelf technologies can be bought to rival even the most advanced tools in the US arsenal. The winner of a recent robotics test, for instance, was not a US defense contractor but a group of South Korean student engineers.

There is another interesting aspect of this that comes from cyber attacks. We’re planning to spend over a Trillion (no typo, a T) dollars on a new stealth fighter jet, the F35, that was planned to be a generation ahead of any potential foe. And yet that program has been hacked on at least 3 occasions. The result is that the F35 has a near twin, the J31, before it’s even operational for us. It is hard to win an arms race when you are paying the R&D for the other side.



I’m not sure if this is outside your field, but a few weeks ago I asked Defense Secretary Carter what technological developments interested him the most, and although he didn’t say it directly interested him the most, he indicated that biotechnology has the potential to be very transformative for modern warfare. Do you have any thoughts about this? Any important systems, companies, or personalities to keep an eye on in this regard?


Peter W. Singer:

Yes, the pace of breakthrough in that field is actually moving much faster than even Moore’s Law for IT. Some amazing things happening in genomics etc. We played a bit with human performance modification tech in Ghost Fleet and another scene on Brain-Machine interface, but genomics is one we didn’t tap, which could be transformative. And to be clear, “transformative” means you get amazing new capabilities, but also amazing new questions and dilemmas and problems.



Do you think U.S. political leaders have a good handle on emerging high technology (and the benefits/risks thereof) or should the general population be concerned?

For example: drones, cybersecurity/cyber warfare, internet policy in general, etc.

Peter W. Singer:

No, sadly not. Just way behind in their understanding of not merely where we are headed, but where we already are. The result is that they often make simple errors with major consequences and are taken advantage of by “hucksters” who are invested in some particular tech or role, and spin a little amount of knowledge to their own advantage. We see it in everything from defense issues to the “cyber walls” discussion in presidential debates, where all the candidates nodded in seeming agreement, as someone used a term that is literally made up and makes no sense.•



I thought recently of this 2014 Gary Indiana quote about contemporary NYC and the U.S.: “This city, America, loves the successful sociopath and thinks it’s normal to dream of becoming like him.”

In the London Review of Books, Indiana writes of Masha Gessen’s The Brothers, her examination of the motivations of the Tsarnaev siblings, who perpetrated the horrific Boston Marathon bombing. The title was largely taken to task in the New York Times by former DHS Secretary Janet Napolitano, not a disinterested party. Indiana, conversely, lavishes praise upon it. In doing so, he argues that destabilized economies, America’s included, are helping to breed violent radicals. Perhaps, though even in times of relative financial stability, such acts of political terrorism occur. The extremist child can be born of many different types of parents, not just poor ones.

An excerpt that quotes another treatment of terrorist tragedy, Yasmina Khadra’s novel The Attack:

What passed between the brothers in the ten months after Zubeidat’s departure to Dagestan is terra incognita. The chances are no specific event or Svengali-like radicalisation inspired the Tsarnaev brothers to blow up the Boston Marathon. As a policeman in Yasmina Khadra’s 2006 novel The Attack puts it: ‘I think even the most seasoned terrorists really have no idea what has happened to them. And it can happen to anyone. Something clicks somewhere in their subconscious, and they’re off … Either it falls on your head like a roof tile or it attaches itself to your insides like a tapeworm. Afterwards, you no longer see the world in the same way.’ The media fantasy that Tamerlan was schizophrenic and ‘heard voices’ is highly improbable. The consensus among terrorism experts is that terrorists are normal people. ‘He was a perfectly nice guy.’ ‘The last person I’d imagine doing something like this.’ After the fact, neighbours, friends and co-workers invariably say the same things about terrorists as they say about serial killers. It’s worth noting that there isn’t a single provable instance of the legendary FBI profiling unit in Quantico, Virginia actually instigating the capture of a serial killer: it tends to be when someone is stopped for driving with a broken tail light that the dead body in the trunk is discovered. It’s only afterwards that we’re told they ‘fit the FBI profile’.


Why did they do it? How could they? In the world we live in now, the better questions are: why not? Why wouldn’t they? To quote Khadra’s novel again, on suicide bombers: ‘The only way to get back what you’ve lost or to fix what you’ve screwed up – in other words, the only way to make something of your life – is to end it with a flourish: turn yourself into a giant firecracker in the middle of a school bus or launch yourself like a torpedo against an enemy tank.’ Everything the US has done to prevent terrorism has been the best advertising terrorism could possibly have. The ‘war on terror’ has degenerated since its ugly inception in Afghanistan and Iraq into a two-pronged war against the US domestic population’s civil rights and the infrastructures of Muslim nations; every cynical episode of this endless war has inched America closer to a police state, and turned people minding their own business in other countries into jihadists and suicide bombers. If the United States were at all interested in preventing terrorism, it would first have to acknowledge that the country belongs to the citizens its economic policies have impoverished, and get rid of emergency laws that violate their rights on the pretext of ensuring their safety. This would involve dismantling the surveillance state apparatus that inflates its criminally gigantic budgets with phony terrorism warnings and a veritable industry of theatrical FBI sting operations. And then the country would have to address the systemic social problems that have been allowed to metastasise ever since the presidency of Ronald Reagan. As everyday existence becomes more punitive for all but the monied few, more and more frustrated, volatile individuals will seek each other out online, aggravate whatever lethal fairy tale suits their pathology, and, ultimately, transfer their rage from the screen world to the real one.•


Dr. Anders Sandberg of the Future of Humanity Institute at Oxford just did one of the best Reddit AMAs I’ve ever read, a brilliant back-and-forth with readers on existential risks, Transhumanism, economics, space travel, future technologies, etc. He speaks wisely of trying to predict the next global crisis: “It will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.”

The whole piece is recommended, and some exchanges are embedded below.



Will we start creating new species of animals (and plants, fungi, and microbes) any time soon?

What about fertilizing the oceans? Will we turn vast areas of ocean into monoculture like a corn field or a wood-pulp plantation?

When will substantial numbers of people live anywhere other than Earth? Where will it be?

What will we do about climate change?

Dr. Anders Sandberg:

I think we are already making new species, although releasing them into nature is frowned upon.

Ocean fertilization might be a way of binding carbon and getting good “ocean agriculture”, but the ecological price might be pretty big. Just consider how land monocultures squeeze out biodiversity. But if we needed to (say to feed a trillion population), we could.

I think we need to really lower the cost to orbit (beanstalks, anyone?) for mass emigration. Otherwise I expect the first real space colonists to be more uploads and robots than biological humans.

I think we will muddle through climate: technological innovations make us more green, but not before a lot of change will happen – which people will also get used to.



What augmentations, if any, do you plan on getting?

Dr. Anders Sandberg:

I have long wanted to get a magnetic implant to sense magnetic fields, but since I want to be able to get close to MRI machines I have held off.

I think the first augmentations will be health related or sensory enhancement gene therapy – I would love to see ultraviolet and infrared. But life extension is likely the key area, which might involve gene therapy and implanting modified stem cells.

Further down the line I want to have implants in my hypothalamus so I can access my body’s “preferences menu” and change things like weight setpoint or manage pain. I am a bit scared of implants in the motivation system to help me manage my behavior, but it might be useful. And of course, a good neural link to my exoself of computers and gadgets would be useful – especially if it could allow me to run software supported simulations in my mental workspace.

In the long run I hope to just make my body as flexible and modifiable as possible, although no doubt it would tend to normally be set to something like “idealized standard self”.

It is hard to tell which augmentations will arrive when. But I think going for general purpose goods – health, intelligence, the ability to control oneself – is a good heuristic for what to aim for.



What major crisis can we expect in next few years? What the world is going to be like by 2025?

Dr. Anders Sandberg:

I am more of a long term guy, so it might be better to ask the people at the World Economic Forum risk report (where I am on the advisory board): http://www.weforum.org/reports/global-risks-report-2015

One group of things are economic troubles – they are safe bets before 2025 since they happen every few years, but most are not major crises. Expect some asset bubbles or deflation in a major economy, energy price shocks, failure of a major financial mechanism or institution, fiscal crises, and/or some critical infrastructure failures.

Similarly there will be at least some extreme weather or natural disaster events that cause a nasty surprise (think Katrina or the Tohoku earthquake) – such things happen all the time, but the amount of valuable or critical stuff in the world is going up, and we are affected more and more systemically (think hard drive prices after the Thai floods – all the companies were located on the same flood plain). I would be more surprised by any major biodiversity loss or ecosystem collapse, but the oceans are certainly not looking good. Even with the scariest climate scenarios things in 2025 are not that different from now.

What to look out for is interstate conflicts that get global consequences. We have never seen a “real” cyber war: maybe it is overhyped, maybe we underestimate the consequences (think something like the DARPA cyber challenge as persistent, adapting malware everywhere). Big conflicts are unfortunately not impossible, and we still have lots of nukes in the world. WMD proliferation looks worryingly doable.

If I were to make a scenario for a major crisis it would be something like a systemic global issue like the oil price causing widespread trouble in some unstable regions (think of past oil-food interactions triggering unrest leading to the Arab Spring, or Russia being under pressure now due to cheap oil), which spills over into some actual conflict that has long-range effects getting out of hand (say the release of nasty bio- or cyberweapons). But it will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.

And then we will dust ourselves off, swear to never let that happen again, and half forget it.



As I understand it, regarding existential risk and our survival as a species, most if not all discussion has to happen under the umbrella of ‘if we don’t kill ourselves off first.’ Surely, as a man who thinks so far ahead, you must have some hope that catastrophic self-inflicted harm won’t spell the end of our race, or at least that it won’t set us back irrevocably far technologically. In your estimation, what are the immediate self-inflicted harms we face, and will we have the capacity to face them when their destructive effects manifest? Will the climate change to the point of poisoning our planet, will uncontrolled pollution destroy our global ecology in some other way, will nuclear blasts destroy all but the cockroaches and bacteria on the planet? It seems to me that we needn’t think too far to see one of these scenarios come to pass if we don’t present a globally concerted effort to intervene.

Dr. Anders Sandberg:

I think climate change, like ecological depletion or poisons, are unlikely to spell radical disaster (still, there is enough of a tail to the climate change distribution to care about the extreme cases). But they can make the world much worse to live in, and cause strains in the global social fabric that make other risks more likely.

Nuclear war is still a risk with us. And nuclear winters are potential giga-killers; we just don’t know whether they are very likely or not, because of model uncertainty. I think the probability is way higher than most people think (because of both Bayesian estimation and observer selection effects).

I think bioengineered pandemics are also a potential stumbling block. There may not be many omnicidal maniacs, but the gain-of-function experiments show that well-meaning researchers can make potentially lethal pathogens, and the recent distribution of anthrax by the US military shows that amazingly stupid mistakes do happen with alarming regularity.

See also: https://theconversation.com/the-five-biggest-threats-to-human-existence-27053



I have trouble imagining how our current economic structure could cope with all the tens of millions of driver/taxi/delivery jobs going.

The economic domino effect of inability to pay debts/mortgages, loss of secondary jobs they were supporting, fall in demand for goods, etc, etc

It seems like the world has never really got back to “normal” (whatever that is anymore in the 21st century) after the 2008 financial crisis & never will.

I’m an optimist by nature, I’m sure we will segue & transition into something we probably haven’t even imagined yet.

But it’s very hard to imagine our current hands-off, laissez-faire style of economy functioning in the 2020s in the face of so much unemployment.

Dr. Anders Sandberg:

Back in the 19th century it would have seemed absurd that the economy could absorb all those farmers. But historical examples may be misleading: the structure of the economy changes.

In many ways laissez-faire economics works perfectly fine in the super-unemployed scenario: we just form an internal economy, less effective than the official one sailing off into the stratosphere, and repeat the process (the problem might be if property rights make it impossible to freely set up a side economy). But clearly there is a lot of human capital wasted in this scenario.

Some people almost reflexively suggest a basic income guarantee as the remedy to an increasingly automated economy. I think we need to think much more creatively about other solutions, the BIG is just one possibility (and might not even be feasible in many nations).



What is the most defining characteristic of transhumanism as an idea in the 10s compared with the 00s?

Dr. Anders Sandberg:

Back when I started in the 90s we were all early-Wired style tech enthusiasts. The future was coming, and it was all full of cyber! Very optimistic, very much based on the idea that if we could just organise better and convince society that transhumanism was a good idea, then we would win.

By the 00s we had learned that just having organisations does not mean your ideas get taken seriously. Although they were actually taken seriously to a far greater extent: the criticism from Fukuyama and others actually forced a very healthy debate about the ethics and feasibility of transhumanism. Also, the optimism had become tempered post-dotcom, post-9/11: progress is happening, but much more unevenly and slowly than we may have hoped. It was by this point that the existential risk and AI safety strands came into their own.

Transhumanism in the 10s? Right now I think the cool thing is the posttranshumanist movements like the rationalists and the effective altruists: in many ways full of transhumanist ideas, yet not beholden to always proclaiming their transhumanism. We have also become part of institutions, and there are people that grew up with transhumanism who are now senior enough to fund things, make startups or become philanthropists.



Which do you think is more important for the future of humanity, the exploration of outer space (planets, stars, galaxies, etc.)? Or the exploration of inner space (consciousness, intelligence, self, etc.)?

Dr. Anders Sandberg:

Both, but in different ways. Exploration of outer space is necessary for long term survival. Exploration of inner space is what may improve us.


What step would you take first? Would you first discover “everything” or as much as possible about inner space, or outer space?

Dr. Anders Sandberg:

I suspect safety first: getting off-planet is a good start. But one approach does not preclude working on the other at the same time.•


The MIT economist David Autor doesn’t believe it’s different this time; he doesn’t think automation will lead to widespread technological unemployment any more than it did during the Industrial Revolution or the AI scares of the 1960s and 1970s. Autor feels that robots may come for some of our jobs, but there will still be enough old and new ones to busy human hands, because our machine brethren will probably never be our equal in common sense, adaptability and creativity. Technology’s new tools may be fueling wealth inequality, he acknowledges, but the fear of AI soon eliminating labor is unfounded.

Well, perhaps. But if you’re a truck or bus or taxi or limo or delivery driver, a hotel clerk or bellhop, a lawyer or paralegal, a waiter or fast-casual food preparer, or one of the many other workers whose gigs will probably disappear, you may be in for some serious economic pain before abundance emerges at the other side of the new arrangement.

Autor certainly is right in arguing that the main economic problem caused by mass automation would be “one of distribution, not of scarcity.” But that’s an issue requiring some political consensus to solve, and reaching a majority isn’t easy these days in our polarized society.

From Autor’s smart article in the Journal of Economic Perspectives, “Why Are There Still So Many Jobs?”:

Polanyi’s Paradox: Will It Be Overcome?

Automation, complemented in recent decades by the exponentially increasing power of information technology, has driven changes in productivity that have disrupted labor markets. This essay has emphasized that jobs are made up of many tasks and that while automation and computerization can substitute for some of them, understanding the interaction between technology and employment requires thinking about more than just substitution. It requires thinking about the range of tasks involved in jobs, and how human labor can often complement new technology. It also requires thinking about price and income elasticities for different kinds of output, and about labor supply responses.

The tasks that have proved most vexing to automate are those demanding flexibility, judgment, and common sense—skills that we understand only tacitly. I referred to this constraint above as Polanyi’s paradox. In the past decade, computerization and robotics have progressed into spheres of human activity that were considered off limits only a few years earlier—driving vehicles, parsing legal documents, even performing agricultural field labor. Is Polanyi’s paradox soon to be at least mostly overcome, in the sense that the vast majority of tasks will soon be automated?

My reading of the evidence suggests otherwise. Indeed, Polanyi’s paradox helps to explain what has not yet been accomplished, and further illuminates the paths by which more will ultimately be accomplished. Specifically, I see two distinct paths that engineering and computer science can seek to traverse to automate tasks for which we “do not know the rules”: environmental control and machine learning. The first path circumvents Polanyi’s paradox by regularizing the environment, so that comparatively inflexible machines can function semi-autonomously. The second approach inverts Polanyi’s paradox: rather than teach machines rules that we do not understand, engineers develop machines that attempt to infer tacit rules from context, abundant data, and applied statistics.•


Why do I feel that the chance of seemingly kindly Ben Carson being President is even slimmer than that of Donald Trump, a dump truck of a human being who’s always ready to pour dirt all over anything he deems in his way?

Perhaps that’s because Carson’s politics are truly much more retrograde than Trump’s, considering the neurosurgeon holds devout beliefs and the cartoonish real-estate mogul doesn’t really possess any, just hasty positions informed by a surfeit of ego and other personality defects. They’re as meaningful as when he asserts that he’s the “biggest builder in New York City,” even though he long ago stopped building here.

Trump’s a xenophobe and racist, the Birther-in-Chief, and his plan to drive millions of undocumented immigrants and their families from America is scary. But perhaps like most of his contentions, this scheme is more bullshit he’s made up as he’s gone along. And any of his few attempts at policy beyond blaming “foreigners” seem stabs in the dark. He might change his feelings about them tomorrow, or even about running for President at all. His only intention, at long last, is attention.

Carson, conversely, genuinely wants to wage a war on women–or, at least, in his own Atwoodian terms, “what’s inside of women“–whereas Trump wants to have a look inside for other reasons. Carson really believes “being gay is a choice.” He’s said with conviction that Obamacare is the “worst thing since slavery.” He means it.

Beware the quiet ones.

From Edward Luce’s FT column about the contrasting styles of the two early GOP leaders:

The personality difference between the two is acute. Mr Carson’s demeanour is as gentle — and apparently devoid of anger — as Mr Trump’s is harsh. As the only African-American in the race, Mr Carson, 63, sparks comparisons with Herman Cain, the pizza parlour king, who briefly held sway in the 2012 race with his tirelessly recited “999” tax plan. Mr Carson’s moment in the sun may end just as quickly. Yet his numbers have been building steadily for the past three months. At the second Republican debate on Wednesday night he will stand on the centre of stage next to Mr Trump. It will be a study in contrasts.

What unites them is their outsider status — at this point, experience is the biggest handicap in the Republican battle. Any hint of a past in politics is toxic. Almost everything else differs. Mr Trump comes from a wealthy background as the son of a New York developer. Mr Carson is genuinely self-made. Raised by an illiterate single mother in Detroit, he worked his way to become head of the prestigious Johns Hopkins Hospital and became the first surgeon, in 1987, to separate Siamese twins at birth. His autobiography, Gifted Hands, remains a best-seller. Among some Christian conservatives, Mr Carson is viewed as a handmaiden of the Lord.

Yet his rise is baffling to electoral veterans. If anything, Mr Carson’s string of gaffes are more shocking than Mr Trump’s.•



Inhabitants on Wrangel Island, before Semenchuk’s mad reign.


Konstantin Semenchuk, the scientist who ruled for two years in the 1930s over the Soviet station on the remote Wrangel Island, is so forgotten today he doesn’t even merit his own dedicated Wikipedia page, but it’s unlikely those he governed ever forgot what Time magazine described as the madman’s “shifty-eyed” visage.

Perhaps there’s a scholar somewhere who can explain what exactly provoked Semenchuk’s seemingly insane criminality and the tragedies it brought about, but there’s no easily accessible record that spells out anything beyond the charges and result of his trial. The facts as we know them: He was appointed as Governor of Wrangel Island in 1934 by Stalin’s Soviet Union and was accused of starving, extorting, poisoning, raping and murdering the native people and his own rival coworkers. At the conclusion of a short and sensational Moscow trial, Semenchuk was sentenced to death along with his accomplice and dogsled driver, S.P. Startsev, for, among other crimes, having killed N.A. Wulfson, a doctor whom he sent out on a fake mission through a snowstorm. What follows are a succession of 1936 articles from the Brooklyn Daily Eagle which paint pieces of a ghastly portrait.


From May 19:


From May 20:


From May 24:


Ilya Somin of the Volokh Conspiracy blog at the Washington Post has capsules of two new books with a Libertarian bent, one of which is Markets Without Limits: Moral Virtues and Commercial Interests by Jason Brennan and Peter Jaworski. The main premise seems to be that activities deemed legal if done for no financial gain should also be permitted if there is a charge. Selling kidneys and sex are two chief examples. On the face of it, that makes a lot of sense, except…

What if placing a financial value on a kidney reduces the number of organs donated for free, making them unaffordable except to people who could bid the highest? Would we want the market regulating such a thing?

Prostitution would seem like an easier problem: It’s always existed, so let’s stop being silly and just legalize it. One argument against: If it was okay to have group-sex clubs (like the infamous Plato’s Retreat), wouldn’t that create a ground zero for STDs that could go beyond the participants? Couldn’t it be a public-health threat? Sure, people can arrange for such risky group behaviors for free now, but legalization would commodify and encourage them.

It always seems exciting to strip away regulations, but there are hardly ever simple solutions. At any rate, I look forward to reading the book. 

From Somin:

In Markets Without Limits, [Jason] Brennan and [Peter] Jaworski argue that anything you should be allowed to do for free, you should also be allowed to do for money. They do not claim that markets should be completely unconstrained, merely that we should not ban any otherwise permissible transaction solely because money has been exchanged. Thus, for example, they agree that murder for hire should be illegal. But only because it should also be illegal to commit murder for free. Their thesis is also potentially compatible with a wide range of regulations of various markets to prevent fraud, deception, and the like. Nonetheless, their thesis is both radical and important. The world is filled with policies that ban selling of goods and services that can nonetheless be given away for free. Consider such cases as bans on organ markets, prostitution, and ticket-scalping.•


If you’re a homogenous culture with a lack of fervor for immigration and a graying population, as Japan is, robots are a necessity, an elegant solution even, as Yoshiaki Nohara writes in a Financial Review article. For a country like America that embraces immigration (well, some of us still do) and has thrived on youthful demographics, it’s more complicated.

From Nohara:

The rise of the machines in the workplace has US and European experts predicting massive unemployment and tumbling wages.

Not in Japan, where robots are welcomed by Prime Minister Shinzo Abe’s government as an elegant way to handle the country’s aging populace, shrinking workforce and public aversion to immigration.

Japan is already a robotics powerhouse. Abe wants more and has called for a “robotics revolution.” His government launched a five-year push to deepen the use of intelligent machines in manufacturing, supply chains, construction and health care, while expanding the robotics markets from 660 billion yen ($US5.5 billion) to 2.4 trillion yen by 2020.

“The labour shortage is such an acute issue that companies have no choice but to boost efficiency,” says Hajime Shoji, the head of the Asia-Pacific technology practice at Boston Consulting Group. “Growth potential is huge.” By 2025, robots could shave 25 percent off of factory labour costs in Japan, according to the consulting firm.•


We like to think we have empathy, but how can we truly see things through someone else’s eyes, especially if their reality has known extremes ours hasn’t? To help us, some of them paint a picture.

Ai Weiwei does that, in many different ways. The Chinese artist has been imprisoned and beaten and detained, continually detained. There seems to be some sort of detente in his current state of relations with government authorities, but his experiences don’t go away–they shape him. So he may view consumerism and technology differently than I do, but his feelings from within his context are just as valuable. Maybe more so.

For a NYRB article, Ian Johnson facilitated a pizzeria lunch in Berlin between Ai Weiwei, his passport in hand once again, and exiled writer Liao Yiwu. The artist believes China, viewed broadly, is better for its modernization, and that the world is mostly improved by social media. An excerpt:


What do you think of the modernization theory—that when people get to a certain standard of living, when they are no longer just concerned with food or shelter, they start to demand things. We could see that historically in South Korea, or Taiwan, say thirty years ago. Does that have any relevance to China today?

Ai Weiwei:

It does, very obviously. If you see those young kids, they’re better off than their parents. They’ve been sent to study abroad. They can travel more freely. They get on the internet. They get iPhones and iPads and video games.


Are the Chinese authorities aware of it?

Ai Weiwei:

They are aware of it, but I don’t know to what degree, and I don’t know if they have the right measures. To understand the crisis you need a philosophical mind and the system never really had that kind of discussion—like the one we’re having now, and to openly discuss it. To openly discuss it means first you have a balanced view and you get every mind involved, so the solution will be more democratic rather than some authoritarian solution, which will just create more problems. All they care about are results, but life is about more than results. It’s about our involvement, our passive involvement in each individual’s mind, and that’s why we can say we love it or we hate it.


One way people engage is through social media. Obviously that’s changed things a lot but it also seems to encourage a bit of a, not civil society, but uncivil society—people cursing each other and so on.

Ai Weiwei:

It does much more good than evil. Of course, if you have a society that never had a public platform or public property, it’s something new. It becomes an outlet for huge pressure. It’s like an explosion, but only because the building is not well-designed. If you had ten outlets [of expression], people would be much more friendly and courteous.

That’s why a modern structure is so important to deal with contemporary problems. It’s not about ideology. All those concepts of democracy or freedom of speech. It’s really about efficient tactics to solve modern problems. That problem is to recognize and protect each individual’s rights and to contribute them to society. Of course China is far from that. First it needs a philosophical understanding and then it needs laws to protect those rights and legislation designed for separate powers. All of that is not established in China now and that’s why I say China is not a modern society.•



Georgia’s Governor Jimmy Carter was so completely unknown on the national stage in 1974 that the panel didn’t need to don blindfolds when he appeared on What’s My Line? A mere two years later he was President of the free world’s most powerful country. That was one helluva Trilateral Commission.


In 1973, Russell Harty spent a weekend at Salvador Dali’s Catalonian home to create an appropriately insane portrait of the 69-year-old artist and his “cybernetic mind.” On display: Al Capone’s Cadillac, General Franco’s granddaughter and an “instantaneous plastic web.” Dali reveals that his two favorite animals are the rhinoceros and a filet of sole. Amazing stuff.


Peter Sellers is interviewed by talk show host/speed reader Steve Allen in 1964 about Dr. Strangelove, revealing he lifted the voice for the titular character from the famed tabloid photographer Weegee. Mixed in are a couple of clips of the protean actor’s former employees recalling how he faked an injury to get out of doing the Major King Kong role.

From the December 30, 1934 Brooklyn Daily Eagle:

