Excerpts


I loved the Rem Koolhaas book Delirious New York, but I happened to be in Seattle in 2004 the week the Central Library he designed opened, and I wasn’t really enamored of it the way I am of many of his other works. It has an impressive exterior, but the interior felt like it was meant more to be looked at than used, though I guess that is the epitome of the modern library in a portable world, maybe even the best-case scenario: perhaps people will at least take a glance.

As his Fondazione Prada is set to open in Milan this month in a repurposed, century-old industrial space, the architect has become more focused on revitalization and preservation than on outré original visions. From a Spiegel Q&A conducted by Marianne Wellershoff:

Kultur Spiegel:

Does a building need to have a certain age or degree of prominence for us to recognize it as important?

Rem Koolhaas:

The idea of preservation dates back to the beginning of the modern age. During the 19th century, people essentially felt that something had to be at least 2,000 years old to be worthy of preservation. Today, we already decide during the planning stages how long a building should exist. At first, historical monuments were deemed worthy of preservation, then their surroundings, then city districts and finally large expanses of space. In Switzerland the entire Rhaetian Railway has been added to the list of UNESCO World Heritage Sites. The dimensions and repertoire of what is worthy of preserving have expanded dramatically.

Kultur Spiegel:

Were there structures in recent years that you think should have been better preserved?

Rem Koolhaas:

The Berlin Wall, for example. Only a few sections remain, because no one knew at the time how to deal with this monument. I find that regrettable.

Kultur Spiegel:

And what do you think of the concrete architecture of the 1960s, a style known as brutalism? Should it be protected or torn down?

Rem Koolhaas:

We should preserve some of it. It would be madness for an entire period of architectural history — that had a major influence on cities around the world — to disappear simply because we suddenly find the style ugly. This brings up a fundamental question: Are we preserving architecture or history?

Kultur Spiegel:

What is your answer?

Rem Koolhaas:

We have to preserve history.

 


In “Ancient DNA Tells A New Human Story,” the “Saturday Essay” at the Wall Street Journal, Matt Ridley explains how “low-cost, high-throughput DNA sequencing” has allowed prehistory to come into sharper focus. The facts don’t speak well of humans (we were not nurturers in the big picture), though they do prove what a polyglot race we actually are. There’s also a lot revealed about the unusual course diseases may have traveled from the earliest societies to the modern ones. An excerpt:

It turns out that, in the prehistory of our species, almost all of us were invaders and usurpers and miscegenators. This scientific revelation is interesting in its own right, but it may have the added benefit of encouraging people today to worry a bit less about cultural change, racial mixing and immigration.

Consider two startling examples of how ancient DNA has solved long-standing scientific enigmas. Tuberculosis in the Americas today is derived from a genetic strain of the disease brought by European settlers. That is no great surprise. But there’s a twist: 1,000-year-old mummies found in Peru show symptoms of TB as well. How can this be—500 years before any Europeans set foot in the Americas?

In a study published late last year in the journal Nature, Johannes Krause of the Max Planck Institute for the Science of Human History in Jena, Germany, and his colleagues found that all human strains of tuberculosis share a common ancestor in Africa about 6,000 years ago. The implication is that this is when and where human beings first picked up TB. It is much later than other scientists had thought, but Dr. Krause’s finding only deepened the mystery of the Peruvian mummies, since by then, their ancestors had long since left Africa.

Modern DNA cannot help with this problem, but reading the DNA of the tuberculosis bacteria in the mummies allowed Dr. Krause to suggest an extraordinary explanation. The TB DNA in the mummies most resembles the DNA of TB in seals, which resembles that of TB in goats in Africa, which resembles that of the earliest strains in African people. So perhaps Africans gave tuberculosis to their goats, which gave it to seals, which crossed the Atlantic and gave it to native Americans.•


Fran Lebowitz action figure.

As a lifelong New Yorker, I probably should feel guilty for saying in recent years that I think Los Angeles has become more interesting than NYC, but how can I be when even Fran Lebowitz, who was born on the jumpseat of a Checker cab in Greenwich Village, has shifted her feelings about the rival metropolises?

A lot of the more creative, interesting people were driven out of New York by cost-of-living increases (particularly rents), and a lot of those who remain sit around and binge-watch TV on their iPads like everyone in every other place in the country. Sure, NYC is still more interesting than Cleveland, but was that really the goal?

From Alex Williams at T Magazine:

No less a New York mascot than Fran Lebowitz, whose jaded, cigarette-sucking visage may as well be inscribed on the city seal, also confessed to a change of heart about Los Angeles.

“L.A. is better than it used to be, New York is worse than it used to be,” Ms. Lebowitz said at a recent Vanity Fair party for the Tribeca Film Festival. The quality-of-life campaigns under Mayors Giuliani and Bloomberg swept away so much that was gritty, quirky or exceptional about the city, she said, and as a result, “New York has become vastly more suburban,” while “L.A. has become slightly less suburban.”

This is not a trivial point. Los Angeles is widely acknowledged to have become strikingly more cosmopolitan in recent years.•


I’ll guess that the New York Times’ wonderful obituarist Margalit Fox does not spend most of her waking hours focused on mid-20th-century professional wrestling, yet she’s written a brilliant postmortem about the recently deceased Verne Gagne, a star DuMont TV wrestler in the 1950s who ultimately ran his own Midwest promotion. That’s what an excellent reporter can do: They come to an unfamiliar topic, gather information and process it, and then quickly turn out something that seems to have been written by a longtime expert on the subject. Much easier said than done.

Here’s the only thing I know about Gagne: He happened upon the young Andre the Giant (not yet so nicknamed) in Japan 45 years ago and wanted to turn him into a “Great White Hope” boxer to take on the likes of Ali and Frazier. Not quite how it turned out.

From Fox:

A saloonkeeper’s son, LaVerne Clarence Gagne was born on Feb. 26, 1926, in Corcoran, Minn., near Minneapolis, and reared on a farm there. His mother died when he was 11; three years later, determined to wrestle despite his father’s insistence that he work in the saloon instead, he left home. Verne finished high school, where he wrestled and played football and baseball while living with an aunt and uncle.

At the University of Minnesota, he became a four-time heavyweight champion of the Big Nine, as the Big Ten Conference was then known, and an N.C.A.A. national champion. He also played football. Near the end of World War II he served stateside with the Marines, tapped by virtue of his wrestling skills to teach the men hand-to-hand combat.

In 1947 Gagne was a 16th-round draft pick by the Chicago Bears; he was later courted by the Green Bay Packers and the San Francisco 49ers. But there was little money in pro football then, and he chose to earn his keep on the canvas.

In his first professional match, in 1949 in Minneapolis, Gagne defeated Abe Kashey, known as King Kong, and in the decades that followed Gagne traversed the country. Crowds waited eagerly for him to dispatch his foes with his trademark sleeper hold, which entailed grabbing an opponent’s head and pressing on his carotid artery so that he passed out — or at least gave a convincing impression of passing out.

In 1960, Gagne helped found the American Wrestling Association. Based in Minneapolis, the association promoted matches throughout the Midwest, Far West and Canada. Gagne, who later became the association’s sole owner, held the A.W.A. championship belt 10 times.

But in the 1980s, with the ascent of cable TV and its lucre, many of the nation’s star wrestlers, including Hogan and Ventura, were lured from their regional stables to the World Wrestling Federation, now a national behemoth known as World Wrestling Entertainment. The A.W.A. ceased operations in 1991; Gagne filed for personal bankruptcy in 1993.•

 


It would cost less to offer guaranteed paid work to unemployed Americans than to finance a social safety net, but there’s really no movement on either side of the aisle in Washington to aid the long-term unemployed, those left behind by the 2008 financial collapse and the growth of robotics. The problem has just been permitted to percolate.

In a Financial Times piece, Martin Wolf looks at two new titles about the haves and have-nots, Inequality: What Can Be Done? by Anthony Atkinson and The Globalization of Inequality by François Bourguignon. It’s interesting that the acceleration of inequality is most marked in the U.S. and U.K. and has not been shared by all other industrialized nations; France, in fact, has seen disparity decrease during the same timeframe. An excerpt:

Both authors agree that something should be done about inequality. Atkinson provides a number of arguments for concern over rising inequality within rich countries. Some argue, for example, that only equality of opportunity matters. To this he responds that successful personal outcomes are often merely a matter of luck, that the structure of rewards is often grossly unfair and that, with sufficient inequality of outcome, equality of opportunity must be a mirage.

Beyond this, argues Atkinson, unequal societies do not function well. The need to protect personal security or to incarcerate ever more people is likely to become a drag on economic performance and inimical to civilised life. If inequality becomes extreme, many will be unable to participate fully in their society. In any case, argues Atkinson, a pound in the hands of someone living on £10,000 a year must be worth more than it is to someone living on £1m. This does not justify complete equality, since the attempt to achieve it will impose costs. But it does mean that high inequality needs to be justified.

Atkinson goes far further, offering a programme of radical reform for the UK. It is not merely radical, but precise and (to the extent such a programme can be) costed. It starts from the argument that rising inequality “is not solely the product of forces outside our control. There are steps that can be taken by governments, acting individually or collectively, by firms, by trade union and consumer organisations, and by us as individuals to reduce the present levels of inequality.”

What about policy? At the global level, both authors recommend improved and more generous aid. Bourguignon adds that properly managed trade has much to offer developing countries. Within countries, both authors call for higher taxes on wealth and incomes, and for better regulation, particularly of finance. Also important, they agree, will be policies directly addressed at improving educational outcomes for the disadvantaged.

Thus policy makers should develop a national pay policy, including a statutory minimum wage set at the “living wage,” and should also offer guaranteed public employment at that rate.•


Goods and food made, served and delivered by humans will someday (and soon) be an artisanal, specialized business, the same way some still buy handmade shoes at great expense while most of us hop around in the machine-manufactured kind. That’s right, the wealthy will say, an actual lady’s hands touched my carrots! How smart!

Seriously, almost all of us will eventually be replaced at work by robots, with nearly every task that can be automated being automated, and there’s no economic plan in place to deal with that onrushing reality. How do we reconcile a free-market economy with a highly automated one? Of course, I’m just talking about Weak AI. What happens if something stronger comes along, which will likely occur if we go on long enough? As the song says, we’ll make great pets. From recent Steve Wozniak comments reported by Brian Steele at MassLive:

“I love technology, to try it out myself,” said Wozniak. “I’ve got at least 5 iPhones. … I have some Android phones.”

He imagined a world in which these kinds of devices would be able to teach our children for us.

“A lot of our schools slow students down,” he said. “We put computers in schools and the kids don’t come out thinking any better.”

Rather than just putting more gadgets and gizmos in the classroom, he said, each classroom needs to have fewer students, and kids who are further ahead than their peers should be nurtured, not forced to fall in line.

Dismissing the concern over giving artificial intelligence too much intelligence, he said that’s already happened.

“The machines won 200 years ago. We made them too important,” said Wozniak. “That makes us the family pet.”•


Terrorists dress the part now, aided by Hollywood editing techniques which help them satisfy expectations. And the rest of us also try to project a virtual image of who we want to be, if one not so horrifying. It’s neither quite real nor fake, just a sort of purgatory. It’s a variation of who we actually are, a vulgarization.

Here’s a transcription of a scene from 1981’s My Dinner with Andre, in which Wallace Shawn and Andre Gregory discuss how performance had been introduced in a significant way into quotidian life, long before Facebook gave the word “friends” scare quotes and prior to reality TV, online identities and selfies:

Andre Gregory:

That was one of the reasons why Grotowski gave up the theater. He just felt that people in their lives now were performing so well, that performing in the theater was sort of superfluous, and in a way, obscene. Isn’t it amazing how often a doctor will live up to our expectation of how a doctor should look? You see a terrorist on television and he looks just like a terrorist. I mean, we live in a world in which fathers, single people or artists kind of live up to someone’s fantasy of how a father or single person or an artist should look and behave. They all act like they know exactly how they ought to conduct themselves at every single moment, and they all seem totally self-confident. But privately people are very mixed up about themselves. They don’t know what they should be doing with their lives. They’re reading all these self-help books.

Wallace Shawn:

God, I mean those books are so touching because they show how desperately curious we all are to know how all the others of us are really getting on in life, even though by performing all these roles in life we’re just hiding the reality of ourselves from everybody else. I mean, we live in such ludicrous ignorance of each other. I mean, we usually don’t know the things we’d like to know even about our supposedly closest friends. I mean, I mean, suppose you’re going through some kind of hell in your own life, well, you would love to know if your friends have experienced similar things, but we just don’t dare to ask each other. 

Andre Gregory:

No, it would be like asking your friend to drop his role.

Wallace Shawn:

I mean, we just put no value at all on perceiving reality. On the contrary, this incredible emphasis we now put on our careers automatically makes perceiving reality a very low priority, because if your life is organized around trying to be successful in a career, well, it just doesn’t matter what you perceive or what you experience. You can really sort of shut your mind off for years ahead in a way. You can turn on the automatic pilot.•


It doesn’t seem plausible to me that we’re on the cusp of a-mortality, no matter how many Transhumanists say they believe it to be so. My main disagreement with futurists is that they seem to always think the future is now, that any dream theoretically possible will soon be realized. Usually you have to work awhile to get there. 

But I’d be so happy my head would explode if Transhumanist Party presidential candidate Zoltan Istvan were included in the major debates with Hillary and Marco and Jeb, so that he could discuss robot hearts and designer babies. He has as much chance of winning the election as Ted Cruz but would be far more interesting to listen to.

Two questions follow from Roby Guerra’s new h+ interview with Istvan.

___________________________

Roby Guerra:

Zoltan, is knowledge the new food? Food for a new type of man of the year 2000 and beyond?

Zoltan Istvan:

The new way for human beings to move forward is via cyborgism, where we merge machine parts with the human body. This might include things like robotic hearts, artificial limbs, and mind reading headsets. These are the sorts of new technologies that will make up the modern human being moving forward.

___________________________

Roby Guerra:

If you were to get elected, what would your practical policies be, in addition to supporting transhumanist projects?

Zoltan Istvan:  

The Transhumanist Party supports American values, prosperity, and security.

So the three primary things I would do if I became president are:

1) Attempt to do everything possible to make it so America’s amazing scientists and technologists have resources to overcome human death and aging within 15-20 years–a goal an increasing number of leading scientists think is reachable.

2) Create a cultural mind-set in America that embracing and producing radical technology and science is in the best interest of our nation and species.

3) Create national and global safeguards and programs that protect people against abusive technology and other possible planetary perils we might face as we transition into the transhumanist era.•


The quantified self certainly has its benefits, allowing us to detect illnesses early, perhaps eventually even anticipate them. We’ll have the ability to monitor our vitals and behavior whenever we like, but corporations may also have their telescope inside our bodies and minds. From Jacob Silverman at the Baffler:

This month, John Hancock Insurance—whose patriotic namesake might be disappointed that the company is now a wholly owned subsidiary of Canadian giant Manulife Financial—announced that it would distribute rebates to life insurance customers in exchange for access to their fitness monitor and location information.

IBM and Microsoft are marketing their cloud computing services to insurers, offering to crunch their data for them.

Car insurers like Progressive are discovering the value of real-time telematics data, culled from GPS units or special devices that can track whether you brake too hard. (Want to gag a little? Check out this British insurer using information from car computers to encourage motorists to “drive like a girl.”)

This is the first wave of insurance companies capitalizing on the explosion in personal data, and it looks to get worse. Trade publications are awash with rosy stories about the profits to be extracted from modifying premiums not just once or twice a year, but every day. Soon, rates will be adjusted in real time. As one insurance consultant told Forbes, “the healthier you get the lower your premiums go.” The corollary is that if you get sick or injured, or if you do anything that the insurer’s algorithms deem unhealthy, your premiums will increase.•


I’ve always traced the War on Drugs in the U.S. to the Nixon Administration, but British journalist Johann Hari, author of the new book Chasing the Scream, dates it to the end of Prohibition, particularly to bureaucrat Harry Anslinger, who later mentored Sheriff Joe Arpaio of Tent City infamy. He also reveals how intertwined the crackdown was (and is) with racism. No shocker there.

The so-called War has been a huge failure, tactically and financially, and has criminalized citizens for no good reason. All the while, there’s been a tacit understanding that millions of Americans are hooked on Oxy and the like, dousing their pain with a perfectly legal script. These folks are far worse off than pot smokers, who still run afoul of the law in most states. I’m personally completely opposed to recreational drug use, but I feel even more contempt for the War on Drugs. It’s done far more harm than good.

Matthew Harwood of the ACLU interviews Hari at Medium. The opening:

Matthew Harwood:

So Chasing the Scream, what’s with the title?

Johann Hari:

The most influential person who no one has ever heard of is Harry Anslinger, the man who invented the modern War on Drugs — way before Nixon, way before Reagan. He’s the guy who takes over the Federal Bureau of Prohibition just as alcohol prohibition is ending. So, he inherits this big government department with nothing to do, and he basically invents the modern drug war to give his bureaucracy a purpose. For example, he had previously said marijuana was not a problem — he wasn’t worried about it, it wasn’t addictive — but he suddenly announces that marijuana is the most dangerous drug in the world, literally — worse than heroin — and creates this huge hysteria around it. He’s the first person to use the phrase “warfare against drugs.”

But he was driven by more than just trying to keep his large bureaucracy in work. When he was a little boy, he grew up in a place called Altoona in Pennsylvania, and he had this experience that really drove him all his life. He lived near a farmer and his wife, and one day, he goes to the farmhouse, and the farmer’s wife was screaming and asking for something. The farmer sent little Harry Anslinger to the local pharmacy to buy opiates — because of course opiates were legal. Harry Anslinger hurries back and gives the opiates to the farmer’s wife, and the farmer’s wife stops screaming. But he remembered this as this foundational moment where he realized the evils of drugs, and he becomes obsessed with eradicating drugs from the face of the earth. So I think of him as chasing this scream across the world. The tragedy is he created a lot of screams in turn.

It leads him to construct this global drug war infrastructure that we are all living with now. We are all living at the end of the barrel of Harry Anslinger’s gun. He didn’t do it alone — I’m not a believer in the “Great Man Theory of History.” He could only do that because he was manipulating the fears of his time. But he played a crucial role.

Matthew Harwood:

We here at the ACLU look at the drug war and see that it has a disproportionate impact on communities of color. You find, however, that this war was pretty racist from the beginning.

Johann Hari:

If you had said to me four years ago, “Why were drugs banned?” I would have assumed it for the reasons people would give today — because you don’t want kids to use them or you don’t want people to become addicted. What’s striking when you look at the archives from the time is that almost never comes up. Overwhelmingly the reason why drugs are banned is race hysteria.•


Tesla is officially no longer solely an EV company but a home-battery outfit as well, which could make for a smoother grid and be a boon for alternative energies. Elon Musk should be pleased, as should those early tinkerers who began repurposing his electric-car batteries for makeshift home conversions. Perhaps the biggest benefit, as Chris Mooney of the Washington Post astutely points out, is the ability to store wind and solar power. An excerpt:

“Storage is a game changer,” said Tom Kimbis, vice president of executive affairs at the Solar Energy Industries Association, in a statement. That’s for many reasons, according to Kimbis, but one of them is that “grid-tied storage helps system operators manage shifting peak loads, renewable integration, and grid operations.” (In fairness, the wind industry questions how much storage will be needed to add more wind onto the grid.)

Consider how this might work using the example of California, a state that currently ramps up natural gas plants when power demand increases at peak times, explains Gavin Purchas, head of the Environmental Defense Fund’s California clean energy program.

In California, “renewable energy creates a load of energy in the day, then it drops off in the evening, and that leaves you with a big gap that you need to fill,” says Purchas. “If you had a plenitude of storage devices, way down the road, then you essentially would be able to charge up those storage devices during the day, and then dispatch them during the night, when the sun goes down. Essentially it allows you to defer when the solar power is used.”•


Even someone as lacking in religion as I am can be perplexed by Richard Dawkins’ midlife anti-theology mission to irk people of faith on chat shows and the like. In his proselytizing, and that’s what it is, he has the fervor of a particularly devout and curmudgeonly priest. It’s true that many a horrid act has been committed in the name of the father, but so have many others been committed by those who believe (as Dawkins and I do) that we’re orphans. I don’t want to stop someone on an operating table (or the one doing the operating) from believing in a little magic at that delicate moment, even if it is rot. Trust in science, and say a prayer if you like.

But I wouldn’t let his noisily running a chariot over the gods make me deny his wonderful intellect and contributions to knowledge, from genes to memes. At Edge, the site’s founder and longtime NYC avant-gardist, John Brockman, has an engrossing talk with the evolutionary biologist about his “vision of life.” The transcript makes for wonderful reading.

Dawkins believes if life exists elsewhere in the universe (and his educated guess is that it does), it’s of the Darwinian, evolutionary kind, that no other biological system besides the one we know would work under the laws of physics. He also notes that we contribute in our own way to the amazing progress of life, even if our time on the playing field can be brutal and brief. As Dawkins puts it, “we are temporary survival machines” coded to be hellbent on seeing our genes persevere, even though life will eventually evolve in ways presently unimaginable to us. It will still be life, and that’s our gift to it. No matter what we personally feel is the main purpose of our existence, it’s actually that.

The opening:

Natural selection is about the differential survival of coded information which has power to influence its probability of being replicated, which pretty much means genes. Coded information, which has the power to make copies of itself—“replicator”—whenever that comes into existence in the universe, it potentially could be the basis for some kind of Darwinian selection. And when that happens, you then have the opportunity for this extraordinary phenomenon which we call “life.”

My conjecture is that if there is life elsewhere in the universe, it will be Darwinian life. I think there’s only one way for this hyper complex phenomenon which we call “life” to arise from the laws of physics. The laws of physics—if you throw a stone up in the air, it describes a parabola, and that’s it. But biology, without ever violating the laws of physics, does the most extraordinary things; it produces machines which can run, and walk, and fly, and dig, and swing through the trees, and think, and produce the whole of human technology, human art, human music. This all comes about because at some point in history, about 4 billion years ago, a replicating entity arose, not a gene as we would now see it, but something functionally equivalent to a gene, which because it had the power to replicate and the power to influence its own probability of replicating, and replicated with slight errors, gave rise to the whole of life. 

If you ask me what my ambition would be, it would be that everybody would understand what an extraordinary, remarkable thing it is that they exist, in a world which would otherwise just be plain physics. The key to the process is self-replication. The key to the process is that … let’s call them “genes” because nowadays they pretty much all are genes. Genes have different probabilities of surviving. The ones that survive, because they have such high fidelity replication, are the ones which we see in the world, the ones which dominate gene pools in the world. So for me, the replicator, the gene, DNA, is absolutely key to the whole process of Darwinian natural selection. So when you ask the question, what about group selection, what about higher levels of selection, what about different levels of selection, everything comes down to gene selection. Gene selection is fundamentally what is really going on. 

Originally these replicating entities would have been floating free and just replicating in the primeval soup, whatever that was. But they “discovered” a technique of ganging together into huge robot vehicles, which we call individual organisms.•

 


At the Gawker site Phase Zero, William M. Arkin conducted a very interesting Q&A with Harper’s Washington editor Andrew Cockburn, who has just published the sadly timely Kill Chain, a book which focuses on the U.S. drone program. Although the author doesn’t believe military drones will become autonomous, he feels the bigger-picture machinery of the system already is. Remote war has been a dream pursued since Tesla, and now it’s a global reality. One exchange:

William M. Arkin:

The CIA’s drone program, the President’s drone program, Congressionally approved, not approved, tacitly accepted: almost every description of the drone program makes it sound like it isn’t the United States and its foreign policy. Is that the consequence of something unique to drones?

Andrew Cockburn:

It’s interesting, drones and covert foreign policy seem to go together. In Operation Menu, Nixon’s secret bombing of Cambodia, the B-52 flight paths were directed from the ground, as was the moment of bomb release. In other words, the B-52s were essentially drones. Maybe the drone campaign isn’t described as the foreign policy of the United States because there’s a tinge of embarrassment that we’re murdering people in foreign countries as a matter of routine.

Beyond that, maybe we should call it the drone program’s drone program, because it’s taken on a life of its own, a huge machine that exists to perpetuate itself. Just take a look at the jobs listed almost every day for just one of the Distributed Common Ground System stations at Langley AFB in the Virginia Southern Neck. On April 25, for example, various contractors (some of which you’ve never heard of) were asking for a “USAF Intelligence Resource Management Analyst,” a “Systems Integrator,” a “USAF Senior Intelligence Programs and Systems Support Analyst,” a “USAF ISR Weapons Systems Integration Support Analyst,” a “DPOC Network Engineer,” whatever that is, and a few others. All high paying, all of course requiring Top Secret or higher clearances. Every so often we hear that the CIA drone program is going to be turned over to the military. I say, ‘good luck with that’ – is the CIA really going to obligingly hand over a huge chunk of its raison d’etre, and its budget, its enormous targeting apparatus? There’s a lot of talk about “autonomous drones,” which aren’t going to happen, but I think the whole system is autonomous, one giant robot that has become unstoppable as it grinds along, sucking up money and killing people along the way.•


The International Commission on Stratigraphy awards a golden spike, which is promptly driven into the ground, when scientists provide geological proof that a new epoch has begun. By that definition, are we in the Anthropocene, a human-driven age of climate turmoil? Perhaps more importantly: Does it matter that we establish the Anthropocene by measuring rock when signs of the deleterious human effect on the planet are manifest in many other ways?

In “Written in Stone,” an Aeon article, James Westcott wonders about the rush by some geologists to make the Anthropocene “official,” especially by measures that perhaps aren’t the most vital ones. An excerpt:

For a potentially epoch-making event, the press conference after the AWG’s first face-to-face meeting in Berlin in October was muted, and sparsely attended. And yet Jan Zalasiewicz, a paleobiologist at the University of Leicester and chairman of the AWG, had important news to impart. He reported a growing feeling within the group that a strong case for formalising the Anthropocene can be made when the AWG submits its report to the International Commission on Stratigraphy in 2016.

The AWG will recommend a start date for the Anthropocene in the early 1950s (relegating many of our parents and grandparents to an entirely different epoch). Why then? Well, the flurry of post-war thermonuclear test explosions left a radionuclide signature that has spread across the entire planet in the form of carbon-14 and plutonium-239, which has a half-life of 24,110 years. The early 1950s also coincides with the beginning of the Great Acceleration in the second half of the 20th century, a period of unprecedented economic and population growth with matching surges – charted by Will Steffen and colleagues in Global Change and the Earth System (2004) – in every aspect of planetary dominance, from the damming of rivers to fertiliser production, to ozone depletion.

The Anthropocene’s advocates have a huge buffet of evidence that human activity amounts to an almost total domination of the planet – one of the latest being new maps that show the extent to which the United States has been paved over. But their problem in terms of formalisation on the Geological Time Scale is that the Earth has only just begun to digest this deadly feast through the pedosphere (the outermost layer) and into the lithosphere (the crust beneath it). The challenge is to convince geologists accustomed to digging much further back in time that the evidence accumulating now will be significant, stratigraphically speaking, deep into the future. Geologists are being asked to become prophets. …

Whatever happens after the AWG submits its recommendation in 2016, anthropocenists are, ironically, selling their theory short by seeking a place on something as esoteric as the Geological Time Scale. The Anthropocene, in all its multi-faceted, Earth system-altering horror, is more serious than that. The hope of course is that if we can name a new epoch after us then it will finally be a truth universally acknowledged that humans have more power than they know how to handle, and we will be able to start picking up the pieces.•


Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s great The Second Machine Age, recently argued in a Financial Times blog post that the economy’s behavior is puzzling these days. It’s difficult to find fault with that statement.

Inflation was supposed to be soaring by now, but it’s not. Technology was going to make production grow feverishly, but traditional measures don’t suggest that. Job growth and wages were supposed to return to normal once the financial clouds cleared, though that’s been largely a dream deferred. What gives?

In a sequel of sorts to that earlier post, McAfee returns to try to suss out part of the answer, which he feels might be that the new technologies have created an abundance that has suppressed inflation. That seems a certain feature of the future as 3D printers move to the fore, but has it already happened? And has this plenty made jobs scarcer and suppressed wages? An excerpt:

In a Tweetstorm late last year, venture capitalist Marc Andreessen argued that technological progress might be another important factor driving prices down. He wrote: “While I am a bull on technological progress, it also seems that much of that progress is price deflationary in nature, so even extremely rapid tech progress may not show up in GDP or productivity stats, even as it = higher real standards of living.”

Prof [Larry] Summers shot back quickly, noting: “It is… not clear how one would distinguish deflationary and inflationary progress. The price level reflects the value of goods in terms of money, so it is hard to analyze without thinking about monetary and financial conditions.” This is surely correct, but is Prof Summers being too dismissive of Mr Andreessen’s larger point? Can tech progress be contributing to price declines?

Moore’s law — that computer processing power doubles roughly every two years — has made computers themselves far cheaper. It has also pretty directly led to the shrinkage of industries as diverse as encyclopedias, recorded music, film photography and standalone GPS devices. An intriguing analysis by writer Chris Goodall found that the “UK began to reduce its consumption of physical resources in the early years of the last decade.” Technological progress, which by its nature allows us to do more with less, is a big part of this move past “peak stuff.”

It’s also probably a big part of the reason that corporate profits remain so high, even while overall economic growth stagnates.


Oliver Sacks on a motorcycle in NYC, 1961. (Photo by Douglas White.)

I’ve read most of Lawrence Weschler’s books and gotten so much from them, particularly Seeing Is Forgetting the Name of the Thing One Sees and Vermeer in Bosnia. In a new Vanity Fair article, he uses passages from a long-shelved biography of his friend Oliver Sacks, the terminally ill neurologist, to profile the doctor in a way only a confidant and great writer can, revealing the many lives Sacks has lived, in addition to the public-intellectual one we’re all familiar with. Weschler is convinced that Sacks’ period of excessive experimentation with drugs when young led to his later scientific breakthroughs. An excerpt:

I had originally written him a letter, sometime in the late 70s, from my California home. Somehow back in college I had come upon Awakenings, published in 1973, an account of his work with a group of patients who had been warehoused for decades in a home for the incurable—they were “human statues,” locked in trance-like states of near-infinite remove following bouts of a now rare form of encephalitis. Some had been in this condition since the mid-1920s. These people were suddenly brought back to life by Sacks, in 1969, following his administration of the then new “wonder drug” L-dopa, and Sacks described their spring-like awakenings and the harrowing siege of tribulations that followed. In the book, Sacks gave the facility where all this happened the pseudonym “Mount Carmel,” an apparent reference to Saint John of the Cross and his Dark Night of the Soul. But, as I wrote to Sacks in that first letter, his book seemed to me much more Jewish and Kabbalistic than Christian mystical. Was I wrong?

He responded with a hand-pecked typed letter of a good dozen pages, to the effect that, indeed, the old people’s home in question, in the Bronx, was actually named Beth Abraham; that he himself came from a large and teeming London-based Jewish family; that one of his cousins was in fact the eminent Israeli foreign minister Abba Eban (another, as I would later learn, was Al Capp, of Li’l Abner fame); and that his principal intellectual hero and mentor-at-a-distance, whose influence could be sensed on every page of Awakenings, had been the great Soviet neuropsychologist A.R. Luria, who was likely descended from Isaac Luria, the 16th-century Jewish mystic.

Our correspondence proceeded from there, and when, a few years later, I moved from Los Angeles to New York, I began venturing out to Oliver’s haunts on City Island. Or he would join me for far-flung walkabouts in Manhattan. The successive revelations about his life that made up the better part of our conversations grew ever more intriguing: how both his parents had been doctors and his mother one of the first female surgeons in England; how, during the Second World War, with both his parents consumed by medical duties that began with the Battle of Britain, he, at age eight, had been sent with an older brother, Michael, to a hellhole of a boarding school in the countryside, run by “a headmaster who was an obsessive flagellist, with an unholy bitch for a wife and a 16-year-old daughter who was a pathological snitch”; and how—though his brother emerged shattered by the experience, and to that day lived with his father—he, Oliver, had managed to put himself back together through an ardent love of the periodic table, a version of which he had come upon at the Natural History Museum at South Kensington, and by way of marine-biology classes at St. Paul’s School, which he attended alongside such close lifetime friends as the neurologist and director Jonathan Miller and the exuberant polymath Eric Korn. Oliver described how he gradually became aware of his homosexuality, a fact that, to put it mildly, he did not accept with ease; and how, following college and medical school, he had fled censorious England, first to Canada and then to residencies in San Francisco and Los Angeles, where in his spare hours he made a series of sexual breakthroughs, indulged in staggering bouts of pharmacological experimentation, underwent a fierce regimen of bodybuilding at Muscle Beach (for a time he held a California record, after he performed a full squat with 600 pounds across his shoulders), and racked up more than 100,000 leather-clad miles on his motorcycle. And then one day he gave it all up—the drugs, the sex, the motorcycles, the bodybuilding. By the time we started talking, he had been pretty much celibate for almost two decades.•


At Esquire, John H. Richardson profiles the brains behind Siri, Adam Cheyer and Chris Brigham, as they attempt (with other AI geniuses) to create a voice-based interface named Viv, which would “think” for itself and seamlessly band together all of the disparate elements of modern computing, a move which, if successful, could fundamentally change information gathering and the entire media landscape. It might unleash entrepreneurial energy and, you know, enable mass technological unemployment. There’s plenty of hyperbole surrounding the project (and in the article), though Siri’s success lends credence to the possibility of the outsize ambition being realized. An excerpt:

BRIGHAM CAME UP WITH the beautiful idea, which makes its own perfect sense. Cheyer was always the visionary. When they met at SRI International twelve years ago, Cheyer was already a chief scientist distilling the work of four hundred researchers from the Defense Department’s legendary CALO project, trying to teach computers to talk—really talk, not just answer a bunch of preprogrammed questions. Kittlaus came along a few years later, a former cell-phone executive looking for the next big idea at a time when the traditional phone companies were saying the iPhone would be a disaster—only phone companies can make phones. An adventurer given to jumping out of planes and grueling five-hour sessions of martial arts, he saw the possibilities instantly—cell phones were getting smarter every day, mobile computing was the future, and nobody wanted to thumb-type on a tiny little keyboard. Why not teach a phone to talk?

Brigham, at the time just an undergrad student randomly assigned to Cheyer’s staff, looked like a surfer, but he had a Matrix-like ability to see the green numbers scroll, offhandedly solving in a single day a problem that had stumped one of Cheyer’s senior scientists for months. Soon he took responsibility for the computer architecture that made their ideas possible. But he also had a rule-breaking streak—maybe it was all those weekends he spent picking rocks out of his family’s horse pasture, or the time his father shot him in the ass with a BB gun to illustrate the dangers of carrying a weapon in such a careless fashion. He admits, with some embarrassment, now thirty-one and the father of a young daughter, that he got kicked out of summer school for hacking the high school computer system to send topless shots to all the printers. After the SRI team and its brilliant idea were bought by Steve Jobs and he made it famous—Siri, the first talking phone, a commercial and pop-culture phenomenon that now appears in five hundred million different devices—Brigham sparked international news for teaching Siri to answer a notorious question: “Where do I dump a body?” (Swamps, reservoirs, metal foundries, dumps, mines.)

He couldn’t resist the Terminator jokes, either. When the Siri team was coming up with an ad campaign, joking about a series of taglines that went from “Periodically Human” to “Practically Human” to “Positively Human,” he said the last one should be “Kill All Humans.”

In the fall of 2012, after they all quit Apple, the three men gathered at Kittlaus’s house in Chicago to brainstorm, throwing out their wildest ideas. What about nanotechnology? Could they develop an operating system to run at the atomic level? Or maybe just a silly wireless thing that plugged into your ear and told you everything you needed to know in a meeting like this, including the names and loved ones of everyone you met?

Then Brigham took them back to Cheyer’s original vision. There was a compromise in the ontology, he said. Siri talked only to a few limited functions, like the map, the datebook, and Google. All the imitators, from the outright copies like Google Now and Microsoft’s Cortana to a host of more-focused applications with names like Amazon Echo, Samsung S Voice, Evi, and Maluuba, followed the same principle. The problem was you had to code everything. You had to tell the computer what to think. Linking a single function to Siri took months of expensive computer science. You had to anticipate all the possibilities and account for nearly infinite outcomes. If you tried to open that up to the world, other people would just come along and write new rules and everything would get snarled in the inevitable conflicts of competing agendas—just like life. Even the famous supercomputers that beat Kasparov and won Jeopardy! follow those principles. That was the “pain point,” the place where everything stops: There were too many rules.

So what if they just wrote rules on how to solve rules?

The idea was audacious. They would be creating a DNA, not a biology, forcing the program to think for itself.


The instability of the Argentine banking system (and the expense of dealing with it) has led a growing number of citizens to embark on a bold experiment using Bitcoin to sidestep institutions, a gambit which would probably not be attempted with the same zest in countries with relative financial stability. But if the service proves to be a large-scale success in Argentina, will it influence practices in nations heretofore resistant to cryptocurrency? And will a massive failure doom the decentralized system?

In a New York Times Magazine article adapted from Nathaniel Popper’s forthcoming Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money, the author writes of this new dynamic in the South American republic, which is enabled by itinerant digital money-changers like Dante Castiglione. An excerpt:

That afternoon, a plump 48-year-old musician was one of several customers to drop by the rented room. A German customer had paid the musician in Bitcoin for some freelance compositions, and the musician needed to turn them into dollars. Castiglione joked about the corruption of Argentine politics as he peeled off five $100 bills, which he was trading for a little more than 1.5 Bitcoins, and gave them to his client. The musician did not hand over anything in return; before showing up, he had transferred the Bitcoins — in essence, digital tokens that exist only as entries in a digital ledger — from his Bitcoin address to Castiglione’s. Had the German client instead sent euros to a bank in Argentina, the musician would have been required to fill out a form to receive payment and, as a result of the country’s currency controls, sacrificed roughly 30 percent of his earnings to change his euros into pesos. Bitcoin makes it easier to move money the other way too. The day before, the owner of a small manufacturing company bought $20,000 worth of Bitcoin from Castiglione in order to get his money to the United States, where he needed to pay a vendor, a transaction far easier and less expensive than moving funds through Argentine banks.

The last client to visit the office that Friday was Alberto Vega, a stout 37-year-old in a neatly cut suit who heads the Argentine offices of the American Bitcoin company BitPay, whose technology enables merchants to accept Bitcoin payments. Like other BitPay employees — there is a staff of six in Buenos Aires — Vega receives his entire salary in Bitcoin and lives outside the traditional financial system. He orders what he can from websites that accept Bitcoin and goes to Castiglione when he needs cash. On this occasion, he needed 10,000 pesos to pay a roofer who was working on his house.

Commerce of this sort has proved useful enough to Argentines that Castiglione has made a living buying and selling Bitcoin for the last year and a half. “We are trying to give a service,” he said.

That mundane service — harnessing Bitcoin’s workaday utility — is what so excites some investors and entrepreneurs about Argentina. Banks everywhere hold money and move it around; they help make it possible for money to function as both a store of value and a medium of exchange. But thanks in large part to their country’s history of financial instability, a small yet growing number of Argentines are now using Bitcoin instead to fill those roles. They keep the currency in their Bitcoin “wallets,” digital accounts they access with a password, and use its network when they need to send or spend money, because even with Castiglione or one of his competitors serving as middlemen between the traditional economy and the Bitcoin marketplace, Bitcoin can be cheaper and more convenient than Argentina’s financial establishment. In effect, Argentines are conducting an ambitious experiment, one that threatens ultimately to spread to the United States and disrupt some of the most basic services its banks have to offer.


For more than a century, scientists have tried to coax solar power into cheap energy. In 1955, University of California “solar scientists” envisioned an abundance of healthy food and clean energy for Earthlings and space colonists alike. It would cost next to nothing. Never quite happened.

But the sun’s power is there for the taking, and it seems we’re much closer to stealing fire from the gods. From David Roberts at Vox:

Obviously, predicting the far future is a mug’s game if you take it too seriously. This post is more about storytelling, a way of seeing the present through a different lens, than pure prognostication. But storytelling is important. And insofar as one can feel confident about far-future predictions, I feel pretty good about this one.

Here it is: solar photovoltaic (PV) power is eventually going to dominate global energy. The question is not if, but when. Maybe it will happen radically faster than anyone expects — say, by 2050. Or maybe it won’t be until the year 3000, or later. But it’ll happen. …

One often hears energy experts talk about “distributed energy,” but insofar as that refers to electricity, it usually just means smaller gas or wind turbines scattered about — except in the case of solar PV. Only solar PV has the potential to eventually diffuse into infrastructure, to become a pervasive and unremarkable feature of the built environment.

That will make for a far, far more resilient energy system than today’s grid, which can be brought down by cascading failures emanating from a single point of vulnerability, a single line or substation. An intelligent grid in which everyone is always producing, consuming, and sharing energy at once cannot be crippled by the failure of one or a small group of nodes or lines. It simply routes around them.

Will solar PV provide enough energy? Right now, you couldn’t power a city like New York fully on solar PV even if you covered every square inch of it with panels. The question is whether that will still be true in 30 or 50 years. What efficiencies and innovations might be unlocked when solar cells and energy storage become more efficient and ubiquitous? When the entire city is harvesting and sharing energy? When today’s centralized, hub-and-spoke electricity grid has evolved into a self-healing, many-to-many energy web? When energy works like a real market, built on millions of real-time microtransactions among energy peers, rather than the crude statist model of today’s utilities?


In 2010, Mitch Moxley wrote “Rent a White Guy,” an amusing and insightful first-person Atlantic report about being hired by a Chinese firm to be a make-believe American businessperson. “Having foreigners in nice suits gives the company face,” he was told. In a New York Times documentary short, David Borenstein provides an excellent visual tour of the practice five years on, as it’s become fashionable for desperate real-estate developers of remotely located properties to temporarily stock their buildings with Western workers or performers to make the provincial neighborhoods appear like “international cities of the future.” It reveals a sense of inferiority still felt by the Chinese even as they’ve moved to the center of the global stage. He describes the assignment thusly:

In provincial West China, I filmed specialty firms that collect groups of foreigners whom they rent out to attend events. Clients can select from a menu of skin colors and nationalities; whites are the most desirable and expensive. The most frequent customers are real estate companies. They believe that filling their remote buildings with foreign faces, even for a day, suggests that the area is “international,” a buzzword in provincial areas that often translates to “buy.”•


A moonshot launched from an outhouse is a pretty apt description of the cratered Hewlett-Packard’s unlikely attempt to reimagine the computer. A semi-secret project called “the Machine” may be the company’s best shot, albeit a long shot, at recreating itself and our most-used tools all at once, expanding memory manyfold with the aid of a fundamentally new operating system. From Tom Simonite at MIT Technology Review:

In the midst of this potentially existential crisis, HP Enterprise is working on a risky research project in hopes of driving a remarkable comeback. Nearly three-quarters of the people in HP’s research division are now dedicated to a single project: a powerful new kind of computer known as “the Machine.” It would fundamentally redesign the way computers function, making them simpler and more powerful. If it works, the project could dramatically upgrade everything from servers to smartphones—and save HP itself.

“People are going to be able to solve problems they can’t solve today,” says Martin Fink, HP’s chief technology officer and the instigator of the project. The Machine would give companies the power to tackle data sets many times larger and more complex than those they can handle today, he says, and perform existing analyses perhaps hundreds of times faster. That could lead to leaps forward in all kinds of areas where analyzing information is important, such as genomic medicine, where faster gene-sequencing machines are producing a glut of new data. The Machine will require far less electricity than existing computers, says Fink, making it possible to slash the large energy bills run up by the warehouses of computers behind Internet services. HP’s new model for computing is also intended to apply to smaller gadgets, letting laptops and phones last much longer on a single charge.

It would be surprising for any company to reinvent the basic design of computers, but especially for HP to do it. It cut research jobs as part of downsizing efforts a decade ago and spends much less on research and development than its competitors: $3.4 billion in 2014, 3 percent of revenue. In comparison, IBM spent $5.4 billion—6 percent of revenue—and has a much longer tradition of the kind of basic research in physics and computer science that creating the new type of computer will require. For Fink’s Machine dream to be fully realized, HP’s engineers need to create systems of lasers that fit inside fingertip-size computer chips, invent a new kind of operating system, and perfect an electronic device for storing data that has never before been used in computers.

Pulling it off would be a virtuoso feat of both computer and corporate engineering.•


We may have already reached peak car in America (and in many other countries). Some of the next automobiles will likely aim for disruption the way ridesharing has, whether they’re EV community cars or driverless. Certainly urban planners want our cities to be less clogged and choked by them.

In “End of the Car Age,” a Guardian article, Stephen Moss smartly analyzes the likely shrinking role of automobiles in major urban centers. The writer sees a challenging if necessary transportation revolution approaching, with mobile information playing a large part in what comes next. The future may not be Masdar City, but it won’t be the Manhattan we’ve long known, either. The opening:

Gilles Vesco calls it the “new mobility”. It’s a vision of cities in which residents no longer rely on their cars but on public transport, shared cars and bikes and, above all, on real-time data on their smartphones. He anticipates a revolution which will transform not just transport but the cities themselves. “The goal is to rebalance the public space and create a city for people,” he says. “There will be less pollution, less noise, less stress; it will be a more walkable city.”

Vesco, the politician responsible for sustainable transport in Lyon, played a leading role in introducing the city’s Vélo’v bike-sharing scheme a decade ago. It has since been replicated in cities all over the world. Now, though, he is convinced that digital technology has changed the rules of the game, and will make possible the move away from cars that was unimaginable when Vélo’v launched in May 2005. “Digital information is the fuel of mobility,” he says. “Some transport sociologists say that information about mobility is 50% of mobility. The car will become an accessory to the smartphone.”

Vesco is nothing if not an evangelist. “Sharing is the new paradigm of urban mobility. Tomorrow, you will judge a city according to what it is adding to sharing. The more that we have people sharing transportation modes, public space, information and new services, the more attractive the city will be.”•


The bottom fell out of the commercial-TV economic model, so the new sets aren’t just content to sell you soap but also want to eavesdrop to know precisely what brand you prefer and the exact moment you feel dirty. You will be scrubbed.

It’s sort of like how Google has its helpful algorithms scanning your Gmail for keywords to match to advertising. And not only are your media preferences recorded (anonymously, supposedly) by the new televisions, but even your conversations may be monitored. From Dennis Romero at LA Weekly:

Your television could be recording your most intimate moments. 

Some people might actually be into that. This is L.A., after all.

But local state Assemblyman Mike Gatto says it just isn’t right. He and the Assembly Committee on Privacy and Consumer Protection have introduced a bill that would require “manufacturers to ensure their television’s voice-recognition feature cannot be enabled without the consumer’s knowledge or consent,” according to his office.

Last week the committee voted 11-0 in favor of the proposal. But is this really a problem, you ask? We asked Gatto the same question.

Not necessarily yet is the answer. But the lawmaker argues that we need to get ahead of this Big Brother initiative before it gets all up in our bedrooms.

“Nobody would doubt the bedroom is a place where we have a tradition of privacy,” he said. “You can imagine a future where they know your sex life.”

Samsung’s newest smart TVs take voice commands. Cool. But the sets’ privacy policy spelled out some Orwellian shenanigans:

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.•


I’m always stunned by how some Germans who supported the rise of Hitler maintain a naivete about their role in the horrors, even in retrospect. The denial of culpability runs so deep.

An AMA at Reddit with an unnamed 92-year-old woman (aided by her grandson) who’s spent her entire life in Stuttgart falls into this category, and it’s a fascinating discussion. “We were normal people who did not want to harm anyone,” she says of the Germans who pledged allegiance to Nazism, yet so many were hurt, so many killed. The collective delusion that allowed for such atrocities has never fully lifted. A few exchanges below.

_________________________

Question:

During Hitler’s rise to power before the war in 1933, did all Germans, including yourself, love him? I mean, he did bring back Germany’s lost honour and economy from World War I. When did the general mood toward Hitler change? What did he do? Did you support him during the war?

Answer:

Everybody was enthusiastic about Adolf, nobody hated him or anything like that. That did not change until the end. I don’t know, I was a young girl. We were young and naive.

_________________________

Question:

Did you ever see Hitler himself? At a rally or such?

Answer:

Yes, at Obersalzberg I stood right beside him, and when he visited the “Hitlerjugend” in Berchtesgaden (Upper Bavaria) I saw him again.

_________________________

Question:

What did you think of the SS?

Answer:

Of course: the elite. They were attractive men and every girl adored them. They were tall and sportive and good-mannered and their uniforms were great. (Grandson: I asked her if they heard bad things about the SS. She shook her head.)

_________________________

Question:

Looking back, what was the most shocking thing you experienced in the time of Nazi Germany?

Answer:

In the “Hindenburg-Bau” (a building in Stuttgart), there was a dance-cafe every Sunday. I was there many times with my friends. There was a musician who played his guitar singing: “Es geht alles vorüber, es geht alles vorbei, auch Adolf Hitler und seine scheiß Partei.” (Translation: “Everything, everything will come to an end. So will Adolf Hitler and his crappy party NSDAP.”) There was a table with officers in the cafe, one of them stood up, pulled out his gun and shot the musician in his breast. He was instantly dead and they pulled his dead body out of the cafe. That was very horrible.

_________________________

Question:

Are there any popular misconceptions about what average people were like in Nazi Germany? Was there some kind of normalcy and optimism around daily life, or was there an encroaching sense of dread regarding the horrendous things happening in Germany and abroad?

Answer:

We were normal people who did not want to harm anyone. Everybody was optimistic in terms of the war. There was a daily dose of fear, because so many men we knew were at war and we never knew if a bomb would hit the house. We stayed in the cellar and begged that we would survive.

_________________________

Question:

What was, if any, the punishment for knowing you lived around Jewish families but not declaring that to the government?

Answer:

We didn’t know of anyone who hid Jews. But if anyone did, they would have had bad times. There was a Jew named Arnold, who owned a nearby [factory], he was a good man. He gave anyone asking him work and paid his workers well. He never let anyone down. (Grandson: I asked her if they took him away. “Yes they came and arrested him.”)

_________________________

Question:

Did you see the trains filled with Jews, or the shops owned by Jews, after Kristallnacht?

Answer:

No, I don’t know. They were arrested, but we didn’t know where they brought them or what happened to them at that time.

_________________________

Question:

Propaganda was undoubtedly used throughout the war. Were she and her colleagues, friends or family aware of what was propaganda/true/untrue? Was it even discussed at all at the time? I’m guessing it would have been dangerous to do so, but that probably didn’t stop it from being discussed entirely.

Answer:

Yes we discussed a lot, but I don’t remember if anyone said that all the propaganda is false. They would not have been allowed to. Hitlerjugend and BDM were strong organisations, there was no chance for going against common beliefs.

_________________________

Question:

When did you realize Germany hadn’t a chance to win the war?

Answer:

We always believed in the victory. Even after Stalingrad. Absolutely.

_________________________

Question:

How do you feel about the trial of Oskar Groening? He is 93 and on trial for working at Auschwitz as a bookkeeper/accountant.

Answer:

Insane. You cannot judge anyone 70 years later. Life is hard enough at 93. They cannot imagine how hard. I bet he regrets the things he has done or had to do without a trial.

_________________________

Question:

Thinking back, does anything about the tense climate in today’s political scene remind you of anything that happened before the war broke out?

Answer:

We only get to know what happens in the world after it happened. We have no influence on the things happening and we have to rely on those men and women who do.•


If you want to stop bubonic plague, killing as many cats and dogs as possible is probably not the most effective gambit. But that’s what the Lord Mayor of London opted to do in 1665, putting down nearly a quarter-million predators of the rats that carried the lethal fleas.

While we have a far greater understanding of epidemiology than our counterparts in the 17th century, we still probably accept some asinine ideas as gospel. In a Medium essay, Weldon Kennedy questions our faith in ourselves, naming three contemporary beliefs he feels are incorrect.

I’ll propose one: It’s wrong that children, who are wisely banned from frequenting bars and purchasing cigarettes, are allowed to eat at fast-food restaurants, which set them up for a lifetime of unhealthiness. Ronald McDonald and Joe Camel aren’t so different.

Kennedy’s opening:

In 19th century London, everyone was certain that bad air caused disease. From cholera to the plague: if you were sick, everyone thought it was because of bad air. It was called the Miasma Theory.

As chronicled in The Ghost Map, it took the physician John Snow years, and cost thousands of lives, to finally disprove the Miasma Theory. He mapped every cholera death in London and linked it back to the deceased’s source of water, and still it took years for people to believe him. Now miasma stands as a byword for pseudo-scientific beliefs widely held throughout society.

The problem for Snow was that no one could see cholera germs. As a result, he, and everyone else of the time, was forced to measure other observable phenomena. Poor air quality was aggressively apparent, so it’s easy to see how it might take the blame.

Thankfully, our means of scientific measurement have improved vastly since then. We should hope that any such scientific theory lacking a grounding in observable data would now be quickly discarded.

Where our ability to measure still lags, however, it seems probable that we might still have miasmatic theories.•

