Driverless cars would be the end of human-operated taxis, but even ridesharing seemed to spell the end of the road for the Bickles among us. Ground Zero was San Francisco, the heart of America’s disruption machine, where Uber and Lyft and the like were a jolt to business as usual. In an interesting Wall Street Journal piece, Georgia Wells reports there’s a renaissance in the city’s traditional cab industry, with old-fashioned car services experiencing an uptick. The piece suggests the Gig Economy may have interested a new pool of drivers in meters and medallions.
I wonder, though, if the waters have only temporarily receded. Newspapers were remarkably profitable during the 1990s even as the Internet began its ascension. That was when the New York Times purchased the Boston Globe for $1.1 billion, believing print an easy stream of revenue for the foreseeable future. It was a terrible acquisition, and two decades later the Globe was dumped for $70 million. Are taxis similarly perched in a perilous position, ready to be run off the road by the sweep of history?
“There is a stigma attached to taxi cab driving. But Uber and Lyft have created a lot more people who would now consider driving as a way to make money,” says Mr. Kim.
Uber didn’t respond to requests for comment.
Taxi drivers’ incomes are still down about 25% since Uber launched, but they have started to stabilize, according to Mr. Kim. An experienced taxi driver in San Francisco makes between $150 and $300 in take-home pay a day, he says. Uber said earlier this year that its drivers earn an average of $19.04 an hour – but that excludes expenses that come out of drivers’ own pockets, including gas, maintenance and insurance.
Some neuroscientists disagree, but there doesn’t seem to be anything theoretically impossible about creating intelligent AI, especially if we’re talking about humans being here to tinker 1,000 or 10,000 or 100,000 or 1,000,000 years from now. Most things will be possible given enough time, provided it passes with us still here.
In a lively Conversation piece, a raft of experts answers questions about AI, from intelligent machines to technological unemployment. The opening:
Question:
How plausible is human-like artificial intelligence?
Toby Walsh, Professor of AI:
It is 100% plausible that we’ll have human-like artificial intelligence.
I say this even though the human brain is the most complex system in the universe that we know of. There’s nothing approaching the complexity of the brain’s billions of neurons and trillions of connections. But there are also no physical laws we know of that would prevent us reproducing or exceeding its capabilities.
Kevin Korb, Reader in Computer Science:
Popular AI from Isaac Asimov to Steven Spielberg is plausible. What the question doesn’t address is: when will it be plausible?
Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.
What I find less plausible than the AI in fiction is the emotional and moral lives of robots. They seem to be either unrealistically empty, such as the emotion-less Data in Star Trek, or unrealistically human-identical or superior, such as the AI in Spike Jonze’s Her.
All three – emotion, ethics and intelligence – travel together, and are not genuinely possible in some form without the others, but fiction writers tend to treat them as separate. Plato’s Socrates made a similar mistake.•
I wondered last week what type of “PEDs” terrorists were hopped up on when conducting suicide missions. Despite the oft-dark nature of humans, that’s just not a natural state, and fanaticism or brainwashing or pharmaceuticals (or all three) are necessary to induce it.
If CNN is to be believed–and it almost never is–Syrian jihadists are dosing themselves with Captagon.
If you constantly play Bach to the fetus in your womb, your child may turn out to be a virtuoso like Glenn Gould, and he might also be a pill popper who dies at 50 like Glenn Gould.
The line between nurturing a young mind and treating it as a scientific experiment isn’t so fine that it isn’t willfully crossed. The early 20th-century American prodigy William James Sidis had a father who ran him as fast as he could from birth until he ran aground. Such a steep fall from grace probably isn’t the average, but it is a risk.
While such extreme upbringings are often bad for children, in the macro they may be instructive in the area of nature vs. nurture. In the Chronicle of Higher Education, Paul Voosen’s “Bringing Up Genius” examines the story of the chess-playing Polgár sisters (who suffered no such ill effects) to ask if all children are potential prodigies. (Thanks to the Browser for pointing it out to me.) An excerpt:
Before Laszlo Polgár conceived his children, before he even met his wife, he knew he was going to raise geniuses. He’d started to write a book about it. He saw it moves ahead.
By their first meeting, a dinner and walk around Budapest in 1965, Laszlo told Klara, his future bride, how his kids’ education would go. He had studied the lives of geniuses and divined a pattern: an adult singularly focused on the child’s success. He’d raise the kids outside school, with intense devotion to a subject, though he wasn’t sure what. “Every healthy child,” as he liked to say, “is a potential genius.” Genetics and talent would be no obstacle. And he’d do it with great love.
…
There are three Polgár sisters, Zsuzsa (Susan), Zsofia (Sofia), and Judit: all chess prodigies, raised by Laszlo and Klara in Budapest during the Cold War. Rearing them in modest conditions, where a walk to the stationery store was a great event, the Polgárs homeschooled their girls, defying a skeptical and chauvinist Communist system. They lived chess, often practicing for eight hours a day. By the end of the 1980s, the family had become a phenomenon: wealthy, stars in Hungary and, when they visited the United States, headline news.
The girls were not an experiment in any proper form. Laszlo knew that. There was no control. But soon enough, their story outgrew their lives. They became prime examples in a psychological debate that has existed for a century: Does success depend more on the accidents of genetics or the decisions of upbringing? Nature or nurture? In its most recent form, that debate has revolved around the position, advanced by K. Anders Ericsson, a psychologist at Florida State University, that intense practice is the most dominant variable in success. The Polgárs would seem to suggest: Yes.•
Military historian Gwynne Dyer deems terrorism “the weapon of the weak,” and he’s right. Compared to the threat from traffic accidents–not to mention climate change!–such small-scale evil is a mere drop in the dead pool. We mostly give these acts outsize importance because horrible deaths bother us more than mundane ones, and because their sheer nastiness and needlessness excite our sense of fairness.
When it comes to the Islamic State, I think there’s more at play. Ignoring the organization probably won’t work. It’s carved out territories as bases of operation, and a lack of response will probably lead to attempts to acquire more “attention-getting” weapons. That doesn’t mean a ground war is a good idea (wow, it’s not), but attempts to degrade its military might and financial holdings are probably needed.
The most hopeful note from Dyer is that he doesn’t believe ISIS has enough in the way of resources to spread much further, with the surprise factor it initially exploited now a thing of the past.
Don’t panic. Terrorism is a very small problem. And any western president or prime minister who thinks they’ll severely damage ISIS by dropping bombs on its fighters is terribly mistaken.
That was the message author and historian Gwynne Dyer brought to SFU Woodward’s in a March 25 sold-out lecture at the Goldcorp Centre for the Arts.
“Well, we lost two people in the last year to terrorism and we lost about 250 a month on the roads,” Dyer said. “You know, the Americans lost 3,000 people on 9/11, but they also lost 3,000 people on the roads and another 3,000 to gunshot wounds, mostly delivered by their nearest and dearest.
“The scale of the terrorism is tiny compared to its presence in the media,” Dyer continued. “Really, we should, as much as possible, ignore it. We certainly don’t need to overreact by sending troops to the Middle East or aircraft to do God knows what in terms of useful activity. It’s just dumb.”
In fact, according to Dyer, if western countries expand their bombing campaigns against ISIS into Syria, it will only make the Islamic State stronger.•
Donald Trump, a Fraggle Rock puppet inspired by Mussolini, has won favor in polls for his loose usage of dangerous words and both vague and specific threats, directed not only at America’s enemies but also at Americans.
He’s spoken recklessly of oil wars and such, despite draft deferments that kept him personally far from any bunker that wasn’t situated on a golf course. An Economist essay looks at the many uses of the word “war,” including its troubling application in regard to the Islamic State. An excerpt:
War in its canonical form has state armies on a battlefield trying to control territory. Most of today’s shooting wars are not even that clean cut—America has not declared one officially since the second world war. But worse, politicians have been unable to resist the temptation to declare war on things like poverty (Lyndon Johnson), drugs (Ronald Reagan) or terrorism (George Bush).
Declaring such “wars” is a problem because such a war on a concept is unwinnable: poverty and drugs will never show up and sign a surrender document on the battleship Missouri, as Imperial Japan did in 1945 to end the second world war. Did Johnson defeat poverty? Did Reagan defeat drugs? We certainly know that Mr Bush did not defeat terrorism.
What about declaring war on Islamic State (IS), presumed by all to be behind the Paris massacres? “War” is exactly the term IS would most eagerly choose. Wars are fought by armies belonging to states—just what the Islamic State fancies itself. In reality, the territory controlled by ISIS—the “caliphate”—has some elements of a state, with everything from fighting forces to rudimentary social services, but it is unrecognised, claiming territory other states have a legitimate claim to. IS’s claim to state status is dubious. Mr Hollande runs the risk of raising that status when he calls eight men with guns and small bombs capable of “an act of war” against a nuclear power.•
Citizens United v. FEC is still an odious decision, but billionaires have done little more than offer stimulus to the struggling media sector when shoveling elephantine sums of money into national elections, with Jeb Bush being the latest evidence. It’s just very difficult to paper over an undesirable candidate or message. Another reason: Small donors feel a connection to their candidate that will get them to the polls. Well, at least the Tea Party-powering Kochs don’t yet know the winner’s curse on the biggest stage, though state and local elections seem more prone to stupid wealth.
The political world is discovering an unsettling truth: Money isn’t everything. The latest evidence comes from the just-expired presidential campaign of Louisiana Gov. Bobby Jindal, a Republican who dropped out on Tuesday, saying, “This is not my time.” Jindal had wallowed in the low single digits in polls and was relegated to the undercard debates even though groups allied with his campaign consistently ranked among the top sponsors of TV ads in Iowa.
Or consider the staggering confession made by conservative billionaire Charles Koch last month. The man who along with his brother David has spent or steered hundreds of millions of dollars into reshaping U.S. politics in recent years said, in effect, that he believes he has been wasting his money. “We looked like we won. [But] as you can see by the performance, we didn’t win much of anything,” Charles Koch told MSNBC’s Joe Scarborough and Mika Brzezinski in October. “So far we’re largely failures.”
Koch was only conceding what has become, more and more, an obvious fact. With the advent of Republican Paul Ryan as speaker of the House and a budget deal that will prevent more government shutdowns, the Koch-funded Freedom Caucus has been, for the moment, declawed.•
Before Bernie Madoff, there was Ivan Boesky, the stock trader who used insider information to amass more money than he could ever hope to spend but still enjoyed counting. In May of 1986, Boesky gave a speech in which he said, “I think greed is healthy,” and perhaps Oliver Stone should have given him a screenwriting credit for Wall Street. Only months after that address, Boesky was ruined, the cover boy of outraged articles about Wall Street’s brazen malfeasance. If there was a lesson learned, it was soon forgotten. From People in 1986:
The unmasking of Ivan Boesky—a man who has come to symbolize unbounded avarice—has unsettled the financial community because no one knows who else may be under investigation. It has also led to some belated soul-searching about the ethics of Wall Street. In a commencement speech last year at the School of Business Administration at Berkeley, this is what Boesky had to say about greed: “I think greed is healthy. You can be greedy and still feel good about yourself.”
Even at the end, when the Securities and Exchange Commission scuttled Boesky’s operation, he still managed to cut himself a deal. It is widely believed that he agreed to record his phone conversations and thus implicate an unknown number of unscrupulous traders. He was allowed to unload an estimated $1.6 billion worth of stocks before the announcement of the government’s charges against him could drive prices down.
Until November, Ivan F. Boesky was a glittering success. He had an ideal family—a handsome wife and four children—and lived in a $3.3 million mansion in New York’s affluent Westchester County. He gave lavishly to charity; he supported both the Republican and Democratic establishments—in short, he appeared the perfect gentleman from sole to crown.
If there was an unresolved mystery about him, it was the quirky drive of someone who had wealth like water, yet who still lived as though he worked in a sweatshop. He slept a mere two hours a night. “The machine doesn’t like to stop,” he explained to an interviewer two years ago.
The son of a Russian immigrant delicatessen owner in Detroit, Boesky had a restless, floundering youth, dropping in and out of college, unable to land a satisfying job even after he graduated from the Detroit College of Law at 27. But he married well. Seema Silberstein was the daughter of real estate tycoon Ben Silberstein. Muriel Slatkin, Seema’s sister, has said her father had a low opinion of Boesky, who he said had “the hide of a rhinoceros and the nerve of a burglar.”•
“Will Life Be Worth Living in 2000AD?” is the title of a byline-less, futuristic 1961 article from Australia’s Weekend Magazine. It makes plenty of wrong predictions about what the world would be like but is correct in anticipating email, the Internet and primitive study aids. An excerpt:
Mail and newspapers will be reproduced instantly anywhere in the world by facsimile.
There will be machines doing the work of clerks, shorthand writers and translators. Machines will “talk” to each other.
It will be the age of press-button transportation. Rocket belts will increase a man’s stride to 30 feet, and bus-type helicopters will travel along crowded air skyways. There will be moving plastic-covered pavements, individual hoppicopters, and 200 m.p.h. monorail trains operating in all large cities.
The family car will be soundless, vibrationless and self-propelled thermostatically. The engine will be smaller than a typewriter. Cars will travel overland on an 18 inch air cushion.
Railways will have one central dispatcher, who will control a whole nation’s traffic. Jet trains will be guided by electronic brains.
In commercial transportation, there will be travel at 1000 m.p.h. at a penny a mile. Hypersonic passenger planes, using solid fuels, will reach any part of the world in an hour.
By the year 2020, five per cent of the world’s population will have emigrated into space. Many will have visited the moon and beyond.
Our children will learn from TV, recorders and teaching machines. They will get pills to make them learn faster. We shall be healthier, too. There will be no common colds, cancer, tooth decay or mental illness.
Medically induced growth of amputated limbs will be possible. Rejuvenation will be in the middle stages of research, and people will live, healthily, to 85 or 100.•
It’s probably not going to be so neat, the future resistant to being any one thing, but it seems likely the foundations of education and labor will be radically remade. How do we reimagine economies that have been largely free-market ones if a full-employment society is no longer a reality?
What’s important to Srnicek and Williams isn’t just basic income but also the end of the fetishization of the work ethic. The opening:
Vice:
Can you explain what you mean by a “high-tech future free from work”?
Alex Williams:
The idea of the book is to argue for a different kind of left-wing politics to the kind we may be used to in America and in the UK, where traditionally, the role of the Democratic Party or, in the UK, the Labour Party, is one where we’re going to help poorer people by giving them jobs. For a variety of reasons, which we go into in the book, we view that as no longer possible, and possibly no longer desirable in the same way. This is all related, in part, to the increasing role of automation—this new wave of automation that a quite wide variety of economists, technologists, and sociologists have begun thinking about.
Vice:
Right—the idea that “robots are stealing our jobs.”
Alex Williams:
Right. Our kind of perspective on this is, well, is it possible that robots stealing jobs might be a good thing? What would it require to make it a good thing?
Nick Srnicek:
We have all this amazing technology around us. It seems like we’re in a rapidly changing world and we’ve got new potential sprouting out everywhere. But at the same time, our everyday lives are crushed by debt and work and all of these obsolete social relations. It seems that we could be doing much better with the technologies that we have. Our argument has to do with capitalism. This isn’t fundamentally different from what Marx was saying 150 years ago, but it is a matter of capitalism constraining the potentials available within technology and within humanity.•
Fascist Twinkie hoarder Donald Trump, who boasts with the desperation of a man in possession of the world’s smallest micropenis, often says that he’s going to “have to look at a lot of things” when asked for specifics about his policies should he become President. That’s his way of trying to make it sound like he’s not an impetuous, adult baby who has no idea what he’s talking about and is too lazy to conduct basic research.
When it comes to doing “regretfully” unpleasant things to people who are not very white, he is definitely going to have to look at a lot of things. Nipple clamps? Cattle prods? Internment camps? Well, they are the linchpins of democracy.
In Hunter Walker’s Yahoo! Politics piece, Trump not only serves up Torquemada-esque talking points, but the skank even has the lack of self-awareness to mention Monica Lewinsky. An excerpt:
Yahoo News asked Trump whether his push for increased surveillance of American Muslims could include warrantless searches. He suggested he would consider a series of drastic measures.
“We’re going to have to do things that we never did before. And some people are going to be upset about it, but I think that now everybody is feeling that security is going to rule,” Trump said. “And certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy. And so we’re going to have to do certain things that were frankly unthinkable a year ago.”
Yahoo News asked Trump whether this level of tracking might require registering Muslims in a database or giving them a form of special identification that noted their religion. He wouldn’t rule it out.
“We’re going to have to — we’re going to have to look at a lot of things very closely,” Trump said when presented with the idea. “We’re going to have to look at the mosques. We’re going to have to look very, very carefully.”•
Almost all of wealth inequality’s consequences, intended or not, are bad, but I doubt so much money would be pouring into the Immortality Industrial Complex were it not for haves having quite so much. No one wants to die when they have stock options. In London as in Silicon Valley, those with ridiculous disposable wealth pursue radical life extension. From Rebecca Newman at the Evening Standard:
Across the road from Harrods sits Omniya clinic, a calm, contemporary white space amid the hustle of Knightsbridge. At street level it is a luxuriously reimagined pharmacy, whose curated selection includes recent launches from Hollywood’s favourite ‘cosmeceutical’ brands Zo Skin Health and Dr Levy. ‘I wanted to create a place that brings the newest advancements in medical and regenerative health to London,’ says co-founder Danyal Kader, a former lawyer, radiant with bien-être. He was so depressed by the difficulty of finding the best medical treatment for his father, who suffers from a heart condition, that he decided to create his own one-stop conduit to wellness. ‘We optimise the lives our clients can lead, body, mind and soul.’ To this end, he has brought together a team of leading specialists who analyse the health of their clients in the most minute and sophisticated detail — a kind of space-age human MOT.
One of these is cellular ageing specialist Dr Mark Bonar. As his title suggests, Bonar is passionate about the very specific degradations that happen in the cells of the body as we age — and still more excited about the new ways he can use to slow such deterioration. Consider, for example, telomeres. ‘Telomeres are the caps on the ends of our DNA,’ Bonar explains. ‘A bit like the plastic on the end of a shoe lace, they prevent the ends from fraying. By measuring their length in the lab we can determine how well the body is ageing’ — for instance, if at 30, you show the wear and tear you’d expect in a 40-year-old. ‘The length can also inform you about your risk of various kinds of disease such as breast or bowel cancer.’
More dramatically, Bonar continues, a product has been patented — it has been around in the States since 2011 — called TA-65, which can rebuild your telomeres, pausing this process central to ageing. In fact, by making the telomere length longer, you can actually make cells ‘younger’, he argues. In one study, fruit flies given TA-65 doubled their life expectancy, while another study on rats discovered that the risk of them developing certain cancers fell by some 30 per cent. And yes, Bonar can prescribe it for you, in a capsule or a cream.•
Nikola Tesla was dreaming his seemingly impossible dreams of drones–bolts of Thor!–a century ago, but some military analysts in the U.S. began to seriously consider pilotless planes dropping bombs and aiding in surveillance in the relatively pacific period between the two World Wars. In an article from the September 8, 1934 Brooklyn Daily Eagle, which also looked at experimentation in germ warfare, robotic fliers were proposed, though the U.S. military wanted no part of such outlandish speculation.
Writer Carl Zimmer did a predictably smart Ask Me Anything at Reddit, fielding all manner of queries on his forte, science. One exchange is about ancient humans, who were a decidedly more heterogeneous mix than we are, something that could again be true of our species in the future. An excerpt:
Question:
What ancient human fact do you find to be the most fascinating?
Carl Zimmer:
Can I give two answers? A tie?
First off, all the species of ancient humans! One scientist I interviewed recently said he likes to say that the Middle Pleistocene was like Middle Earth, with orcs and elves and such. I guess that might be a bit strained if you consider that the different hominin species probably couldn’t talk to each other. (Imagine the Lord of the Rings movies with no dialogue…) Still, tiny Homo floresiensis, Denisovans and Neanderthals having sex (and babies too) with modern humans, plus Homo erectus and probably a bunch of other species/lineages we have yet to find. Our current loneliness is a fluke of human evolution.
The other fact is that in one respect Darwin got human evolution very wrong. He saw bipedalism and human behavior as intimately tied together. But the earliest hominins, 6-7 million years ago, were fairly upright (even if they could scale trees to get away from the occasional leopard). Despite being upright, their brains were puny till less than 2 million years ago. So for most of hominin evolution, they were essentially bipedal apes, rather than what we’d call human. Which, of course, leaves us with the question of why human brains got big so fast when smaller ones did just fine, thank you very much.•
Roughly five years after DARPA’s 2004 not-ready-for-prime-time driverless-car contest, mercilessly dubbed the “Debacle in the Desert,” Google was testing autonomous vehicles on American streets and highways. The latest DARPA Robotics Challenge earlier this year, the one aimed at pushing humanoid rescue machines to new limits, was likewise far less than a complete success. I doubt we’ll be bossing around tin butlers by 2020, but progress will be made.
Even though opposable thumbs were a lovely development in the course of life on Earth, there’s some question as to whether we should be so fixated on humanoid-type robots. We didn’t create a “robot person” to handle the wheel of a driverless car, and drones don’t flap wings like birds (nor do airplanes). There is something satisfyingly narcissistic about inanimate metal and plastic replicating familiar life forms–especially our own–but many of the needed maneuvers in Fukushima and flood zones could probably be accomplished by unfamiliar figures.
In a GQ article, Bucky McMahon looks back at DARPA’s June contest and looks ahead. An excerpt:
Southern California, June 2015
Here, now, on day one of the DARPA Robotics Challenge, the little humans in the crowd are the opposite of patient. Overstimulated, brainy children—future roboticists, perhaps—they half-pack the stands at the Los Angeles County fairgrounds, chanting “Go, robots, go!” as the first four competitors enter the open-air arena. In just a few short minutes, against a backdrop of rusty corrugated iron and dusty windows reminiscent of a 1950s sci-fi-movie set, the robots will navigate the course like giant praying mantises cast in community theater. Five Jumbotrons will offer fans close-ups and instant replays as the robots wrestle with various Mr. Fix-It tasks, clear debris, or walk on a jumble of concrete blocks. It’s a little like the Roman Colosseum, but with bizarre humanoid machines instead of gladiators, masterpieces of engineering that are still battling their own limitations as much as one another.
Twenty-three robotics teams from seven nations are here in Pomona, California, competing for $3.5 million in federally funded prize money dangled by DARPA. (Full name: the Defense Advanced Research Projects Agency.) Despite its buzz-cut reputation, the agency is putting on a pretty good show here at the DRC Finals. There’s a festival atmosphere, with a tech expo outside the stadium, under dozens of white tents. And though the purpose of the event is grim enough—to test robotic technology for emergency disaster relief at the Three Mile Islands, Chernobyls, and Fukushimas of the future—the competition format is kind of hilarious.
For each heat of the two-day comp, up to four robots at a time walk or drive vehicles from the racetrack oval toward the grandstand—a Futurama Shriners parade—arriving at four identical obstacle courses. You are invited to imagine that this is the scene of an industrial accident—a chemical spill, say, or toxic-gas leak—some terrible thing man has wrought that man ought not, something so bad we just can’t be there, not in person. But our metal friends? If they’re up to snuff, yes indeed. The final task is to climb a stairway, at the top of which is catastrophic doom’s off button, figuratively speaking.
For the robotics world, this is the greatest show on earth, and everybody’s here—all the best U.S. university robotics programs, industry reps, toy companies, even a would-be pirate with a robotic parrot named Polymer. You couldn’t swing a robotic cat—there are a few of those, too—without hitting a genius.•
The future usually arrives gradually and only occasionally with an asteroid strike. For the taxi industry in New York, Chicago and many other major cities, it’s been the latter. Ridesharing (a misnomer) via companies like Uber and Lyft is an improvement in numerous ways over business as usual, but, wow, lots of people are being hurt in the transition. Many more will be in a similarly tight spot in time, since Uber has vowed to replace all its piecework drivers with autonomous cars as soon as technologically possible.
Taxi owners and lenders on Tuesday sued New York City and its Taxi and Limousine Commission, saying the proliferation of the popular ride-sharing business Uber was destroying their businesses and threatening their livelihoods.
The lawsuit filed in Manhattan federal court accused the defendants of violating yellow cab drivers’ exclusive right to pick up passengers on the street by letting Uber drivers who face fewer regulatory burdens pick up millions of passengers who use smartphones to hail rides.
According to the complaint, the number of Uber rides in the “core” of Manhattan increased by 3.82 million from April to June 2015 compared with a year earlier, while medallion cab pickups fell by 3.83 million.
They said this had driven down the value of medallions, which yellow cab drivers need to operate, by 40 percent from a peak exceeding $1 million and caused more defaults.
Uber’s rise also contributed to the July 22 bankruptcy of 22 companies run by taxi magnate Evgeny Freidman, and the state’s Sept. 18 seizure of Montauk Credit Union, which specialized in medallion loans, the complaint said.•
A couple of months before its historic eruption on May 18, 1980, Mount St. Helens began to slowly awaken. Tourists toting binoculars went to the mountain to get a better look, but some experts warned them not to expect too much, predicting that a major geological event was very unlikely. The experts were wrong. From the April 21, 1980 People magazine:
It was hardly Vesuvius or Krakatoa, but when Mount St. Helens—near Washington’s border with Oregon—began to gurgle seriously last month, geologists and thrill-seekers gathered from all over the world. They hoped to see one of the rarest and most spectacular of nature’s performances: a volcanic eruption. Not since Mount Lassen in California began seven years of activity in 1914 has a volcano in the lower 48 states put on such a show. Still, some watchers may be disappointed by Mount St. Helens. “People have this idea about lava from old South Sea movies,” says Donal Mullineaux, a volcanologist in the U.S. Geological Survey, “with everybody in sarongs hotfooting it away from this smoky, glowing stuff that comes oozing out of the crater and down the mountain like cake batter. Lava can be dangerous, sure, but that’s only a part of it.”
The rest of it—clouds of poisonous gas, searing hot winds and cascades of mud and rock—now seems unlikely at Mount St. Helens. Mullineaux, who had predicted an eruption in a scholarly 1975 article, is maintaining a vigilant calm. “The probability of a big, big eruption is very low,” he says. Asked if the gases already escaped pose a pollution threat, he smiles and says, “Any comment I could make would be facetious. I grew up in a paper-mill town.”•
The CBS News report three days after the volcano blew, with Dan Rather and his folksy whatthefuck subbing for Walter Cronkite.
_________________________
Most men (and women) lead lives of quiet desperation, but from Brooklyn to Big Sur Henry Miller hollered. That resulted in some genius writing and some considerably lesser material. In 1961, the author explained in a Paris Review interview how he believed his tools shaped his writing:
Paris Review:
Do you edit or change much?
Henry Miller:
That too varies a great deal. I never do any correcting or revising while in the process of writing. Let’s say I write a thing out any old way, and then, after it’s cooled off—I let it rest for a while, a month or two maybe—I see it with a fresh eye. Then I have a wonderful time of it. I just go to work on it with the ax. But not always. Sometimes it comes out almost like I wanted it.
Paris Review:
How do you go about revising?
Henry Miller:
When I’m revising, I use a pen and ink to make changes, cross out, insert. The manuscript looks wonderful afterwards, like a Balzac. Then I retype, and in the process of retyping I make more changes. I prefer to retype everything myself, because even when I think I’ve made all the changes I want, the mere mechanical business of touching the keys sharpens my thoughts, and I find myself revising while doing the finished thing.
Paris Review:
You mean there is something going on between you and the machine?
Henry Miller:
Yes, in a way the machine acts as a stimulus; it’s a cooperative thing.•
Robert Snyder’s deeply enjoyable 1969 documentary captures Miller in his middle years, when he had befriended, among many others, astrologer Sydney Omarr, a relationship which helped the author indulge his curiosity about the occult.
_________________________
Wilt Chamberlain was a hybrid of topdog and underdog, fully aware that all his greatness could never make the public quite love a Goliath. To merely be himself was to be unfair. In Allen Barra’s 2012 Atlantic appreciation of the late NBA, volleyball and track & field star, the writer compares the legendary basketball player favorably with Babe Ruth, and recalls the humble environs in which he recorded the NBA’s only triple-digit scoring performance. An excerpt:
The celebration of Wilt Chamberlain’s career that accompanied the 50th anniversary of his 100-point game last weekend was too short and passed too quickly.
Wilt Chamberlain was the Babe Ruth of pro basketball. Like Ruth, he was by far the most dominant force in his time, and quite possibly of all time. Like the Babe, Wilt was the lightning rod for interest in the sport in a time when it was badly needed. In Chamberlain’s case, he was more important to basketball than Ruth was to baseball.
Contrary to popular opinion, baseball was doing quite well at the turnstiles when Ruth came along and would have survived the stink of the Black Sox gambling scandal with or without him (though the recovery certainly would have taken longer). But without Wilt, who knows if the NBA would have made it from the 1960s—when it was scarcely one of the big three pro sports behind baseball and football—to the Magic Johnson-Larry Bird boom of the late 1970s and the Michael Jordan tidal wave a few years later?
If you doubt this, consider one extraordinary fact: Wilt played his 100-point game not in New York or even in the Warriors’ home city of Philadelphia but in an odd-looking, plain concrete barn-like structure with an arched roof in Hershey, Pennsylvania, where the Warriors played several games a year in order to increase a fan base that wasn’t showing them overwhelming support in Philly.
Try and imagine the equivalent in baseball: Babe Ruth hitting his 60th home run in, say, Newark, New Jersey, at a Yankees “secondary” park in front of a handful of fans. If not for an unknown student listening to a late night rebroadcast of the game who thought to tape the fourth quarter on a reel-to-reel, we’d have no live coverage of the game at all.
Chamberlain’s triumph came at the Hershey Sports Arena. Today the HersheyPark Arena looks virtually the same, a practice facility for the AHL’s Hershey Bears and home ice for a local college that is also open for public skating. It’s easy to miss the notices that here Chamberlain played his landmark game: a small sign on a pole outside the main gates and a copy of the photo of Wilt holding up the handmade “100” in the back side of the lobby.
There is one primary difference between the careers of Babe Ruth and Wilt Chamberlain: Ruth was—and is—regarded by most baseball analysts as the greatest player in his game. But basketball people have never quite been able to make up their minds about Wilt.•
Ed Sullivan interviews Chamberlain soon after his heroics in Hershey.
I think I went from devoted moviegoer (250-300 films every year) to completely uninterested in the medium for reasons deeper than the seismic shifts of the business wrought by globalization (i.e., the march of comic-book blockbusters not dependent on nuance or language), though those changes did help usher me out of the theater.
The new Star Wars is coming out and I don’t care, but at least Adam Rogers’ Wired article “The Force Will Be With Us. Always.” makes me interested in the dynamics behind the endless stream of familiar films that are more than mere sequels. The various productions are designed as an ever-expanding supply of “connective tissue” that can stretch and cover us with comforting entertainment until we’re mummies. Unless, of course, some unforeseen disruption takes the industry in another direction. I mean, nothing is truly forever. For now, though, the template is fixed.
An excerpt:
The company intends to put out a new Star Wars movie every year for as long as people will buy tickets. Let me put it another way: If everything works out for Disney, and if you are (like me) old enough to have been conscious for the first Star Wars film, you will probably not live to see the last one. It’s the forever franchise.
These new movies won’t just be sequels. That’s not the way the transnational entertainment business works anymore. Forget finite sequences; now it’s about infinite series. Disney also owns Marvel Comics, and over the next decade you can expect 17 more interrelated movies about Iron Man and his amazing friends, including Captain America: Civil War, two more Avengers movies, another Ant-Man, and a Black Panther (not to mention five new TV shows). Thanks to licensing agreements, Disney doesn’t own the rights to every Marvel property—Fox makes movies about the X-Men and related mutants like Gambit and Deadpool. So you’ll get interrelated comic-book movies there too. Warner Bros. Entertainment, which owns DC Comics, is prepping a dozen or so movies based on DC characters, with Batman v. Superman: Dawn of Justice and Suicide Squad in 2016, Wonder Woman, and eventually the two-part team-up Justice League. Warner is also trying to introduce Godzilla to King Kong (again). Paramount is working on a shared universe for its alien robot Transformers. Universal continues, with limited success, to try to knit together its famous bestiary (Frankenstein’s monster, Dracula, the Wolf Man, the Mummy, etc.).
Everywhere, studio suits are recruiting creatives who can weave characters and story lines into decades-spanning tapestries of prequels, side-quels, TV shows, games, toys, and so on. Brand awareness goes through the roof; audiences get a steady, soothing mainline drip of familiar characters. Forget the business implications for a moment, though. The shared universe represents something rare in Hollywood: a new idea.•
Maybe we should tell the GOP Presidential candidates that climate change is Muslim?
Like most people, I don’t give a crap what the Islamic State wants, except insofar as comprehending that mindset will aid in the destruction of the terrorist organization. Empathy is a useful tool, even in war. In addition to understanding the ideology of the monstrous group, I’m also curious how much drug use is employed to “program” the modern suicidal soldiers. We know that a mélange of medieval brutality, Hollywood technique and social media are the heart of its methods, but what is driving the execution?
When it comes to the murderous principles, Graeme Wood raises many intelligent points about the terrorists’ apocalyptic nature in his new Atlantic essay. The opening:
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
The group seized Mosul, Iraq, last June, and already rules an area larger than the United Kingdom. Abu Bakr al-Baghdadi has been its leader since May 2010, but until last summer, his most recent known appearance on film was a grainy mug shot from a stay in U.S. captivity at Camp Bucca during the occupation of Iraq. Then, on July 5 of last year, he stepped into the pulpit of the Great Mosque of al-Nuri in Mosul, to deliver a Ramadan sermon as the first caliph in generations—upgrading his resolution from grainy to high-definition, and his position from hunted guerrilla to commander of all Muslims. The inflow of jihadists that followed, from around the world, was unprecedented in its pace and volume, and is continuing.
Our ignorance of the Islamic State is in some ways understandable: It is a hermit kingdom; few have gone there and returned. Baghdadi has spoken on camera only once. But his address, and the Islamic State’s countless other propaganda videos and encyclicals, are online, and the caliphate’s supporters have toiled mightily to make their project knowable. We can gather that their state rejects peace as a matter of principle; that it hungers for genocide; that its religious views make it constitutionally incapable of certain types of change, even if that change might ensure its survival; and that it considers itself a harbinger of—and headline player in—the imminent end of the world.•
Rodney Brooks still thinks the future will be fast and cheap, but perhaps not out of control.
In Errol Morris’ 1997 documentary, the roboticist famously says that in tomorrow’s world of intelligent machines, humans might not be necessary. In a BBC piece written by Regan Morris, Brooks backs off that statement (“You can’t expect me to stand by something I said during a long day of filming 20 years ago!”), and also explains why he doesn’t fear technological unemployment or appreciate “cute” robots.
Brooks is certainly right that AI handling rote, backbreaking tasks is a good thing in the big picture, though the distribution of wealth is a sticking point. And the work that disappears won’t only be of the blue-collar variety.
An excerpt:
Some people remain nervous about the growing role of robots.
Martin Ford, the author of Rise of the Robots, says robots will change the global economy in drastic ways beyond manufacturing. White-collar jobs are equally susceptible and likely more at risk, he says.
“I think it’s inevitable that robots will displace a lot of jobs. If you have a PhD in science and engineering, you’re probably safe. But that’s not many people,” Ford says.
“We can’t stop it. We can’t educate ourselves out of it. Top level, highly creative, highly skilled jobs will survive. But most people do average stuff. Even if we tried we couldn’t educate every person to be a rocket scientist or brain surgeon.”
Baby boomers need bots
Mr Brooks is less concerned. He thinks fears of robots taking over jobs are overblown and that robots will improve people’s lives.
“I think there’s a misconception amongst the wealthy people in the bubble that there are endless rows of people wanting dull, boring jobs in factories. It’s not true,” Mr Brooks says, adding that robots will become more pervasive in society as baby boomers age and require more self-driving cars and home healthcare.•
Despite his intelligence–or perhaps because of it–philosopher Nick Bostrom could have just as readily fallen through the cracks as risen to prominence, making an unlikely space for himself with the headiest of endeavors: calculating the likelihood of humans escaping extinction. He’s a risk manager on the grandest scale.
Far from a crank screaming of catastrophes, the Oxford academic is a rigorous researcher and intellectual screaming of catastrophes, especially the one he sees as most likely to eradicate us: superintelligent machines. In fact, he thinks self-teaching AI of a soaring IQ is even scarier than climate change. In a New Yorker piece on Bostrom, the best profile yet of the philosopher, Raffi Khatchadourian writes that the Superintelligence author sees himself as a “cartographer rather than a polemicist,” though he’s clearly both.
In addition to attempting to name the threats that may be hurtling our way, Bostrom takes on the biggest of the other big questions. For example: What will life be like a million years from now? He argues that long-term forecasting is easier than the short- and mid-term types, because the assumption of continued existence means most visions will be realized. He refers to this idea as the “Technological Completion Conjecture,” saying that “if scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.”
My own thoughts on these matters remain the same: In the long run, we either become what those of us alive right now would consider a Posthuman species, the next evolution, or we’ll cease to be altogether. A museum city can linger for a long spell, beautiful in its languor, but humans doubling as statues from the past will eventually be toppled.
An excerpt:
Bostrom has a reinvented man’s sense of lost time. An only child, he grew up—as Niklas Boström—in Helsingborg, on the southern coast of Sweden. Like many exceptionally bright children, he hated school, and as a teen-ager he developed a listless, romantic persona. In 1989, he wandered into a library and stumbled onto an anthology of nineteenth-century German philosophy, containing works by Nietzsche and Schopenhauer. He read it in a nearby forest, in a clearing that he often visited to think and to write poetry, and experienced a euphoric insight into the possibilities of learning and achievement. “It’s hard to convey in words what that was like,” Bostrom told me; instead he sent me a photograph of an oil painting that he had made shortly afterward. It was a semi-representational landscape, with strange figures crammed into dense undergrowth; beyond, a hawk soared below a radiant sun. He titled it “The First Day.”
Deciding that he had squandered his early life, he threw himself into a campaign of self-education. He ran down the citations in the anthology, branching out into art, literature, science. He says that he was motivated not only by curiosity but also by a desire for actionable knowledge about how to live. To his parents’ dismay, Bostrom insisted on finishing his final year of high school from home by taking special exams, which he completed in ten weeks. He grew distant from old friends: “I became quite fanatical and felt quite isolated for a period of time.”
When Bostrom was a graduate student in Stockholm, he studied the work of the analytic philosopher W. V. Quine, who had explored the difficult relationship between language and reality. His adviser drilled precision into him by scribbling “not clear” throughout the margins of his papers. “It was basically his only feedback,” Bostrom told me. “The effect was still, I think, beneficial.” His previous academic interests had ranged from psychology to mathematics; now he took up theoretical physics. He was fascinated by technology. The World Wide Web was just emerging, and he began to sense that the heroic philosophy which had inspired him might be outmoded. In 1995, Bostrom wrote a poem, “Requiem,” which he told me was “a signing-off letter to an earlier self.” It was in Swedish, so he offered me a synopsis: “I describe a brave general who has overslept and finds his troops have left the encampment. He rides off to catch up with them, pushing his horse to the limit. Then he hears the thunder of a modern jet plane streaking past him across the sky, and he realizes that he is obsolete, and that courage and spiritual nobility are no match for machines.”•
Whenever seemingly intelligent people get taken in by what’s so obviously a con, there’s an urge to dismiss them, to laugh at their utter foolishness. But it would be far better to understand what went awry, because in some ways we all get taken, by ideologies or belief systems or lies we want to be true.
Niall Rice, an Internet consultant troubled by anxiety and addiction, got “sucked in” and cleaned out by psychics who promised him something they couldn’t possibly deliver. How could a smart, successful person so completely lack a filter to prevent grifters from exploiting him? Is there something he failed to learn despite learning so many other things? Why did his emotional problems supersede intellect at key moments? Are some people just wired to be more prone to such scams?
The opening of Michael Wilson’s fascinating New York Times article about a man who wanted, too much, to believe:
He sat in a Denny’s restaurant, drinking coffee between cigarette breaks after a long and sleepless night, answering question after question.
He knew none of it made sense: He was a successful and well-traveled professional, with close to seven figures in the bank, and plans for much more. And then he gave it all away, more than $718,000, in chunks at a time, to two Manhattan psychics.
They vowed to reunite him with the woman he loved. Even after it was discovered that she was dead. There was the 80-mile bridge made of gold, the reincarnation portal.
“I just got sucked in,” the man, Niall Rice, said in a telephone interview last week from Los Angeles. “That’s what people don’t understand. ‘How can you fall for it?’”
There was even, between payments to one of the psychics for a time machine to cleanse the past, a brief romance.