Driverless cars seem more a matter of when than if, though the ETA differs wildly depending on who’s doing the talking. For the technology to be transformational, the wheel must be torn from the dash, the vehicle in control of itself 100% of the time. Otherwise, it’s a useful tool and one that would still likely reduce deaths, but it won’t be world-changing. If full autonomy truly comes to pass, not only will individual vehicle ownership become unnecessary; even fleets of taxis could essentially own themselves.
For a different vision of the driverless future, visit Heathrow airport outside London, and head to a “pod parking” area. Transfers between the car park and terminal are provided by driverless electric pods moving on dedicated elevated roadways. Using a touchscreen kiosk, you summon a pod and specify your destination. A pod, which can seat four people, pulls up, parks itself and opens its doors. Jump in, sit down and press the start button—the only control—and it drives you to your destination, avoiding other pods and neatly parking itself when you arrive, before heading off to pick up its next passengers.
Like riding in the autonomous Audi, travelling by pod is thrilling for the first 30 seconds—but quickly becomes mundane. The difference is that self-driving vehicles that can be summoned and dismissed at will could do more than make driving easier: they promise to overturn many industries and redefine urban life. The spread of driver-assistance technology will be gradual over the next few years, but then the emergence of fully autonomous vehicles could suddenly make existing cars look as outmoded as steam engines and landline telephones. What will the world look like if they become commonplace?
The switch from horse-drawn carriages to motor cars provides an instructive analogy. Cars were originally known as “horseless carriages”—defined, like driverless cars today, by the removal of a characteristic. But having done away with horses, cars proved to be entirely different beasts, facilitating suburbanisation and becoming symbols of self-definition. Driverless vehicles, too, will have unexpected impacts. They will look different. Early cars resembled the carriages from which they were derived, and car design took some years to escape its horse-drawn past. By the same token, autonomous vehicles need look nothing like existing cars. Already, Google’s futuristic pods are on the public roads of California, and some concept designs, liberated from the need to have steering wheels and pedals, have seats facing each other around a table.
Autonomous vehicles will also challenge the very notion of car ownership.•
When Russian oligarch Dmitry Itskov vows that by 2045 we’ll be able to upload our consciousness into a computer and achieve a sort of immortality, I’m perplexed. Think about the unlikelihood: It’s not a promise to just create a general, computational brain–difficult enough–but to precisely simulate particular human minds. That ups the ante by a whole lot. While it seems theoretically possible, this process may take a while.
In his latest excellent Atlantic article, Princeton neuroscientist Michael Graziano plots the steps required to encase human consciousness, to create a second life that sounds a bit like Second Life. He acknowledges opinions will differ over whether we’ve generated “another you” or some unsatisfactory simulacrum, a mere copy of an original. Graziano’s clearly excited, though, by the possibility that “biological life [may become] more like a larval stage.”
Let’s presume that at some future time we have all the technological pieces in place. When you’re close to death we scan your details and fire up your simulation. Something wakes up with the same memories and personality as you. It finds itself in a familiar world. The rendering is not perfect, but it’s pretty good. Odors probably don’t work quite the same. The fine-grained details are missing. You live in a simulated New York City with crowds of fellow dead people but no rats or dirt. Or maybe you live in a rural setting where the grass feels like Astroturf. Or you live on the beach in the sun, and every year an upgrade makes the ocean spray seem a little less fake. There’s no disease. No aging. No injury. No death unless the operating system crashes. You can interact with the world of the living the same way you do now, on a smart phone or by email. You stay in touch with living friends and family, follow the latest elections, watch the summer blockbusters. Maybe you still have a job in the real world as a lecturer or a board director or a comedy writer. It’s like you’ve gone to another universe but still have contact with the old one.
But is it you? Did you cheat death, or merely replace yourself with a creepy copy?•
I’m not exactly happy that doping and organized crime are mounting problems for eSports, but it is sort of amusing, speaking as it does to the human ability to develop, nurture and ruin almost anything.
Some gambling books accept wagers on professional wrestling, for chrissakes, in which predetermined finishes are known to a certain number of employees and their friends and families, so why shouldn’t an actual contest like video games attract “legitimate businessmen” looking for someone to take a dive? And if classical musicians down beta blockers to still nerves, of course eSports competitors use PEDs to fight stronger and longer.
ESIC was announced at Lord’s Cricket Ground in London, and introduced by Ian Smith, the body’s first integrity commissioner. A UK lawyer, Smith has a background largely in traditional sports such as football and cricket, and he sat on the Athletes Committee of UK Anti-Doping for five years.
Smith says there are four key areas that ESIC wants to tackle. Three – cheating using software hacks such as aimbots; DDoS attacks to slow down opponents’ ability to react in matches; and doping – he describes as “easy” to deal with in the longer term. The fourth, match fixing, presents a much bigger – and growing – problem.
“We’ve had very prominent arrests in Korea in Starcraft II, and there have been a number of other cases and allegations […] around fixing,” Smith says. “We’ve found that that’s actually pretty low-level fixing, but the main issue is the growth of the esports betting market. Looking at 2015, the legitimate esports betting market was at around the $250m mark. That probably means the illegitimate market […] was running at around two to three billion dollars.”
While acknowledging that those figures are currently “peanuts” in betting terms, Smith adds that projections put the legitimate market at $23bn by 2020 – and the illegitimate market, if current trends continue, at $200-300bn.
“That’s the point at which organised crime knows that there’s a decent return on any corrupt investment they make in the sport,” Smith says.•
Zoltan Istvan’s Presidential campaign has failed, if you grade it on votes and other such mundane things.
The Transhumanist Presidential candidate, however, never was running to win but to raise consciousness about immortality and genetic engineering and other outré matters. Some of his far-reaching ideas are covered in “What to Eat for Breakfast if You Want to Live Forever,” Carey Dunne’s Extra Crispy article. I winced a little when I first read some of his predictions but was happy to discover the phrase “in the next few centuries.” Usually, Transhumanists are so aggressive in their prognostications that it really damages their arguments. Even several hundred years is probably too bold for what Istvan proposes, though in its essence, it isn’t really any different than what Sir Martin Rees sees eventually happening.
As president, Istvan might push for a doughnut tax. “We need guidelines saying doughnuts and things like that are bad,” Istvan says, echoing some current public health advocates. “Humans can’t control their appetites. We need legislation that would discourage people from [unhealthy] eating. I wouldn’t mind creating new taxes for fast foods. They’re just as much of a killer as cigarettes.”
Anti-doughnut laws would be a provisional measure, though, until we all “become machines.” In Istvan’s transhumanist dream world, breakfast wouldn’t exist at all. “I advocate for getting rid of food entirely,” he says. “I love eating and drinking—that’s why I own a vineyard, Zolisa, in Argentina—but from a transhumanist perspective, it’s a terrible system. Same thing with pooping: Total waste of time, totally nonfunctional. There’s no question we’re gonna get rid of our organs within the next [few centuries]. These things are going the way of the dinos.” For a more efficient system, Istvan predicts, “Biohackers will learn to splice DNA into cells to photosynthesize our energy—that’s the future of the human being, if we remain biological.”•
The one time I interviewed Werner Herzog, in 2005, I asked him how he survived the threatening situations he encountered while making his sometimes death-defying films and in his life. He replied: “I’ve been fortified by enough philosophy.” Ever since then, I’ve always asked myself if I’ve been similarly fortified, if I’ve read and thought enough so that even when I’m deeply shaken, there’s something essential within me that remains solid.
Herzog just did a Reddit AMA, which includes an exchange that speaks to this idea. The excerpt:
You’ve covered everything from the prehistoric Chauvet Cave to the impending overthrow of not-so-far-off futuristic artificial intelligence. What about humankind’s history/capability terrifies you the most?
It’s a difficult question, because it encompasses almost all of human history so far. What is interesting about this paleolithic cave is that we see with our own eyes the origins, the beginning of the modern human soul. These people were like us, and what their concept of art was, we do not really comprehend fully. We can only guess.
And of course now today, we are into almost futuristic moments where we create artificial intelligence and we may not even need other human beings anymore as companions. We can have fluffy robots, and we can have assistants who brew the coffee for us and serve us to the bed, and all these things. So we have to be very careful and should understand what basic things, what makes us human, what essentially makes us into what we are. And once we understand that, we can make our educated choices, and we can use our inner filters, our conceptual filters. How far would we use artificial intelligence? How far would we trust, for example, into the logic of a self-driving car? Will it crash or not if we don’t look after the steering wheel ourselves?
So, we should make a clear choice, what we would like to preserve as human beings, and for that, for these kinds of conceptual answers, I always advise to read books. Read read read read read! And I say that not only to filmmakers, I say that to everyone. People do not read enough, and that’s how you create critical thinking, conceptual thinking. You create a way of how to shape your life. Although, it seems to elude us into a pseudo-life, into a synthetic life out there in cyberspace, out there in social media. So it’s good that we are using Facebook, but use it wisely.•
When I first learned Google was testing driverless cars, I wondered how long it would be before hackers were able to wrest the wheel from robotic hands. Of course, with the amount of computing power increasingly installed in newer models, vehicles needn’t really be fully driverless for such a reality to potentially come to pass.
According to “Motoring with the Sims,” an Economist report on how simulated driving is helping autonomous-car manufacturers test situations that would be too dangerous to try out on public roadways, it may just be five years until our vehicles can be turned into rolling hostage situations. An excerpt:
On top of this testing of accidental interference with a car’s wireless traffic, the team will also try to hack deliberately into vehicles—something that it would be illegal as well as irresponsible to attempt on public roads. Such tests, nevertheless, need to be done. Carsten Maple, a cyber-security expert at Warwick, reckons criminals are only about five years away from being able to disable a car’s ignition remotely, holding it to ransom until the owner has made a payment. Indeed, in 2015 Fiat Chrysler recalled 1.4m vehicles in America after security researchers showed it was possible to take control of a Jeep Cherokee via its internet-connected entertainment system.
Despite the potential problems, though, Dr Jennings and his team are convinced that genuinely driverless vehicles have a big future. At first this future could be in controlled and specially designated areas, such as city centres. One vehicle that will be tested in the simulator has been designed with just such a purpose in mind. It is an electrically powered passenger-carrying pod produced by RDM, a firm in Coventry. The pods are already being tested in pedestrianised areas of Milton Keynes, a modernist British city. RDM says they are also intended for use in places such as airports, shopping centres, university campuses and theme parks.
On the open road, however, it may take longer before steering wheels become obsolete. Even after extensive testing in simulators, the performance of autonomous systems will still need to be verified in the real world. And no self-driving system will ever be completely foolproof. As the Florida crash showed, accidents will still happen—although, mercifully, there may be fewer of them.•
There are tons of futurists now, even if they identify by other names (economists, political scientists, etc.). You could easily make an argument that today is the golden age of tomorrow.
An aversion to myopia is great, though thinking solely about the future also has its costs. In a Fast Company article about the current fixation on futurism, D.J. Pangburn focuses on Hal Niedzviecki’s Trees on Mars, a book that questions our constant obsession with the next big thing and distrust of those who don’t buy into such sci-fi scenarios.
When wealthy technologists talk excitedly about space-mining minting the first trillionaires while offering those left behind the promise of some basic income, it becomes clear they don’t realize they’re encouraging bloody revolution. But a scan of books published in the last few years reveals numerous titles by technologists and futurists wary of where we’re headed, believing investments must be made in the present as well.
Another recurring theme in Trees on Mars is Niedzviecki’s skeptical view of the futurist. He sees the ascension of the futurist to a preeminent place in society—and the idea that all should become futurists for individual and collective progress—as deeply problematic.
Should everyone be a futurist? Niedzviecki doesn’t think so, but he is seeing a massive revolution in how societies are positioning themselves around technological success, a repositioning of education around technology; a reorganization of societal goals around the “latest chimeras of success”—the best futurists who knew what was going to happen before it happened, like Steve Jobs and Mark Zuckerberg. In doing this, Niedzviecki believes we run the risk of condemning those people who really don’t feel it’s necessary or interesting to think as futurists.
“It’s self-satisfying bullshit from a small set of people who were able to take advantage of this and sell this,” Niedzviecki says. “And my line of frustration runs through the whole book and perhaps culminates when I go to SXSW Interactive.”
There Niedzviecki sat in on a panel dealing with disruption, where he listened to “high-priced, famous gurus” tell attendees that if they can’t keep up with the pace of disruption then they are failures that will be left behind. Niedzviecki recalls sitting there thinking: “That’s not the way it is—that’s the way you have made it.”
“I think the vast majority of people who preach disruption do not understand the ramifications of what they’re saying,” Niedzviecki said.•
Only slightly less misbegotten than the former Tribune Publishing’s new five-letter curse of a name, “tronc” is the vision forward of its leading stockholders, who may, if they maintain control of the company, hold sway over the Digital Age reinvention of 160 of what we used to call “publications.”
In “Desperate Times, Desperate Measures,” an excellent LA Weekly piece by Hillel Aron, the writer traces how Michael Ferro, with a poor won-loss record in the business, plans to revitalize the flagging fortunes of not only the Los Angeles Times (as a “global entertainment brand”) but also the wider industry. Aiding him is the second-largest stockholder, Patrick Soon-Shiong, a billionaire surgeon who seems to not realize that while great journalism might seem magical, it is not magic. It’s mostly drudgery and some inspiration. He’s quoted in the piece as saying that “one piece of technology…would use artificial intelligence to take a text story and convert it to video, generating as many as 2,000 videos a day.” Sounds like a plan, though not a particularly good one.
The bigger question may not be whether Ferro and Soon-Shiong fail, but if anyone can succeed. I don’t know that trying to remake the New York Times into The Daily Show is any better of an idea. As Aron relates, it isn’t merely the Internet or destabilizing new tools and shifting cultural attitudes that caused the ink to bleed red.
Ferro’s track record is spotty at best. While he did help stabilize the Chicago Sun-Times after it emerged from bankruptcy, his three years as chairman of the company that owned the paper included laying off every staff photographer and creating a content farm called Aggrego, which produced a flood of non-reported blog posts — and did not prove to be a significant economic or technological success.
“These grandiose proclamations, this bluster, this pretense that he has the answers that no one in the industry has come up with — that’s what you have to buy into in order to accept that all of this is real,” says Robert Feder, a former media columnist for the Chicago Sun-Times, who now writes a daily media blog licensed by the Chicago Tribune.
“I don’t want to shit all over Ferro, because I wish there were a lot more people willing to experiment and take risks,” former L.A. Times deputy publisher Nicco Mele says. “But there is no silver bullet, and to suggest that there is is wildly misleading.” …
One of the myths about the newspaper industry is that it’s getting killed by the internet, by technology and social media. The reality is more complicated — and more troubling for journalism.
“It’s not just about the internet,” former L.A. Times deputy publisher Mele says. “It’s about changing habits and deep cultural changes. People are valuing opinion over news. People are less engaged in the day-to-day of their own communities. People are much more mobile and transient. If there was one trend that is really underappreciated, it’s, since Watergate, the continuing erosion of trust in the institutions that once made America great — the press first and foremost.”•
Pill dinners never quite caught on, and I don’t believe Virtual Reality dining will become a going concern anytime soon. Certainly there’s great potential in the entertainment portion of the hospitality industry. Do a karaoke duet with a hologram of your favorite pop singer or enjoy hearing a virtual Bobby Short at a piano bar. Even the windows and decor could be changed at will.
When it comes to the actual food, however, humans tend to like having their senses pleased rather than tricked. That hasn’t stopped the people at Project Nourished from experimenting with tools that go far beyond your basic utensils. From Erin Carson at CNET:
It might be the best meal you’ve never had.
A group of about 30 people in Los Angeles is experimenting with how we eat food, but not like Uber offering delivery or a bakery concocting a new donut-pastry combo. This time, you can put away the forks, knives, oven mitts and double mezzalunas.
It’s called Project Nourished, and what’s on your dinner table is a virtual reality headset, some devices that look like they came from a modern art museum, and something called “3D printed food.”
The way it works: You put on the headset and you’re transported to an interesting location, which is probably the most normal element of this exercise.
Also on the table are several other devices. One is an “aromatic diffuser,” which has a tube sticking out of it that blasts food aromas at you. Another is a “bone conduction transducer” that wraps around the back of your head to mimic the sounds and vibrations of chewing. There’s a cup for drinking. Finally, there’s a utensil shaped like tweezers. For eating.
Put all those items together, and you could be eating sushi in Japan, or be having a simulated food experience totally foreign to this world.
In reality, you’d be wearing odd-shaped devices that make you look like someone glued pieces of a honeycomb-shaped ball on your head, all while you chew on a piece of algae.•
When we talk of finding life on other planets, we often tend to think, narcissistically, of something that at least vaguely resembles us, or the Earth’s animals, plants and microscopic organisms. The real mindblower, though, would be if we discover something Other, life that reminds us of nothing we’ve known, not contained in our books or brains. So much more will become possible then.
In a smart Atlantic essay, Ross Andersen writes of accompanying astronomer Lisa Kaltenegger to Colorado’s majestic Maroon Bells mountains and coming to understand how small they might actually be when we learn more about our neighbors in space. An excerpt:
As we stood admiring the Maroon Bells, I asked Kaltenegger whether she thought there were scenes like this all across the cosmos. “No,” she said, before explaining that Earth was but a single, limited expression of nature’s raw creative power. Just as our planet contains many habitats with many ecologies, each with its own diverse creatures, other planets may play host to living worlds that look nothing like our own.
“Take mountains,” she said. “Earth’s crust is quite thin, which means its mountains can only reach so high.” This seemed ungrateful, surrounded as we were by jagged, vertical rock faces, studded with dense pine stands. “If Earth’s crust were thicker, the mountains could be much larger,” she said.
On some distant planet, there might be peaks that tower more than 100,000 feet above an alien sea. These extraterrestrial peaks might be forested, or they might be coated in an alien form of vegetation, or something beyond the reach of our current imagination.•
Racism and guns are never far away in America, but the bloodshed of the last few days has been particularly sickening, a reminder that African-Americans are still prone to an instant death penalty for minor or phantom offenses, and that the endless supply of powerful guns has made us all, even the police, sitting ducks.
Adding to the troubling nature of the carnage is the unprecedented domestic use of a “bomb robot” by Dallas officers to kill a suspected sniper, a tactic employed by U.S. soldiers in Iraq that’s the latest “dividend” to return home from that misbegotten war. I’m sure the police were just trying to keep any more innocent people from being murdered, but the precedent is chilling.
“Negotiations broke down. We had an exchange of gunfire with the suspect,” Dallas police chief David Brown explained in a press conference. “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was.”
You read that correctly: “bomb robot.”
Typically, in violent standoffs involving gunfire, police wait out the suspects, or try to deploy snipers of their own to remove the threat. The general rule is that if police are not directly under threat of taking fire, they should try to bring home the suspect alive. Brown, though, said the robot was the only choice the force had.
“Other options would have exposed our officers to grave danger. The suspect is deceased,” he said.
The use of a robot to kill someone has taken police observers aback.•
“We are flummoxed by today’s nationalist, regressively anti-global sentiments only because we are interpreting politics through that now-obsolete television screen,” writes Douglas Rushkoff in an excellent Fast Company essay about the factious nature of the Digital Age. The post-TV landscape is a narrowcasted one littered with an infinite number of granular choices and niches. It’s empowering in a sense, an opportunity to vote “Leave” to everything, even a future that’s arriving regardless of popular consensus. It’s a far cry from not that long ago when an entire world sat transfixed by Neil Armstrong’s giant leap. Now everyone is trying to land on the moon at the same time–and no one can agree where it is. It’s more democratic this way, but maybe to an untenable degree, perhaps to the point where it’s a new form of anarchy.
Two excerpts follow from: 1) Rushkoff’s FC piece, and 2) Scott Timberg’s smart Salon Q&A with the media theorist.
A media environment is really just the kind of culture engendered by a particular medium. The invention of text encouraged written history, contracts, the Bible, and monotheism. The clock tower in medieval Europe led to hourly wages and the time-is-money ethos of the industrial age. Different media environments encourage us to play different roles and to see, think, or act in particular ways.
The television era was about globalism, international cooperation, and the open society. TV let people see for the first time what was happening in other places, often live, as it happened. We watched the Olympics, together, by satellite. Neil Armstrong walked on the moon. Even 9-11 was a simultaneously experienced, global event.
Television connected us all and broke down national boundaries. Whether it was the British Beatles playing on The Ed Sullivan Show in New York or the California beach bodies of Baywatch broadcast in Pakistan, television images penetrated national divisions. I interviewed Nelson Mandela in 1994, and he told me that MTV and CNN had more to do with ending the divisions of apartheid than any other force.
But today’s digital media environment is different. At the height of his media era, a telegenic Ronald Reagan could broadcast a speech in front of the Brandenburg Gate in Berlin and demand that Gorbachev “tear down this wall.” Today’s ultimate digi-genic candidate Donald Trump demands that we build a wall to protect us from Mexicans.
This is because the primary bias of the digital media environment is for distinction.•
Timberg’s opening question:
You argue that the support for Donald Trump and the puzzling Brexit vote both have to do, in important ways, with the dominance of the Internet. Not with anything political, but in the ways we communicate. How do you see these things related?
I don’t know if I’d blame the Internet as much as the idea that we’re in a digital media environment. The idea of being in a media environment, a technological environment, is really old – this guy [Lewis] Mumford is the one who came up with it…. And the beauty of that analysis is not that it says that one thing causes another – that the printing press led to the mechanization of world culture — but it sort of went hand in hand. We developed mechanical abilities, we made machines, then we took on some of the qualities of those machines. Because they’re around us, they’re part of the world we live in.
The thing I’ve been interested in is the shift from the television media environment, which we all grew up in, which was so globalist in spirit, and in funding — it promoted a global view and global markets and global simultaneity.
The digital media environment is so different in the way it’s structured and biased. We know that the algorithms in our social-media feeds tend to isolate us in our highly individuated factions or filter bubble — so we don’t interact with people with different ideas.
What are the biases of these technologies? All of these revolutions have been very discrete — we’re going to restore Egypt, we’re going to restore the caliphate — there’s this sense of nationalism and segmentation and difference.•
Encouraging Jonah Lehrer to return to publishing is not much different than ushering a gambling addict back to a craps table and telling him to have better luck this time. The game will not likely end well.
Something in his makeup triggers in the disgraced neuroscientist unacceptable, compulsive behavior when he’s at a keyboard (and maybe elsewhere?), and it would seem he’d be best served by laboring in a different vocation while dedicating the many years it will take to figure out the source of his waywardness. The plagiarist and fabulist doesn’t need to disappear from the face of the earth or anything, but it’s probably good to keep him a safe distance from the dice.
Lehrer has just written A Book About Love, which, Simon & Schuster would like you to know, is a book about love. Jennifer Senior of the New York Times reviews the title in her usual sharp, lucid manner, finding an author who hasn’t truly reversed course but merely shifted. An excerpt:
In retrospect — and I am hardly the first person to point this out — the vote to excommunicate Mr. Lehrer was as much about the product he was peddling as the professional transgressions he was committing. It was a referendum on a certain genre of canned, cocktail-party social science, one that traffics in bespoke platitudes for the middlebrow and rehearses the same studies without saying something new.
Apparently, he’s learned nothing. This book is a series of duckpin arguments, just waiting to be knocked down. …
As for the question that’s on everyone’s mind — did Mr. Lehrer play by the rules in this book? — I think the answer is complicated, but unpromising.
In an author’s note, Mr. Lehrer says that he sent his quotes to everyone he interviewed and that his book was independently fact-checked. And it’s true that this book contains far more citations than his previous work.
But I fear Mr. Lehrer has simply become more artful about his appropriations.•
I’d be happy to wager there have never in our history been more intelligent people toiling in the area of futurism. They may formally identify as economists or political scientists or technologists rather than futurists, but there’s a steady deluge of books and papers on the promise and perils of tomorrow, which contain advice on how we can maximize the former and minimize the latter.
This crowded field goes unmentioned in Farhad Manjoo’s New York Times column, which paints a dire picture of futurism in a post-Toffler world. The writer is absolutely correct to call out the U.S. government for largely ignoring the chorus of clarion calls, protected by gerrymandering from the consequences of myopia. The present (e.g., infrastructure) is barely acknowledged, let alone the next wave. I do believe, however, that the critical mass of thinkers in this area will ultimately serve us well, even in the recalcitrant public sector, which, as always, is prone to the sweep of history.
All around, technology is altering the world: Social media is subsuming journalism, politics and even terrorist organizations. Inequality, driven in part by techno-abetted globalization, has created economic panic across much of the Western world. National governments are in a slow-moving war for dominance with a handful of the most powerful corporations the world has ever seen — all of which happen to be tech companies.
But even though these and bigger changes are just getting started — here come artificial intelligence, gene editing, drones, better virtual reality and a battery-powered transportation system — futurism has fallen out of favor. Even as the pace of technology keeps increasing, we haven’t developed many good ways, as a society, to think about long-term change.
Look at the news: Politics has become frustratingly small-minded and shortsighted. We aren’t any better at recognizing threats and opportunities that we see emerging beyond the horizon of the next election. While roads, bridges, broadband networks and other vital pieces of infrastructure are breaking down, governments, especially ours, have become derelict at rebuilding things — “a near-total failure of our political institutions to invest for the future,” as the writer Elizabeth Drew put it recently.
In many large ways, it’s almost as if we have collectively stopped planning for the future. Instead, we all just sort of bounce along in the present, caught in the headlights of a tomorrow pushed by a few large corporations and shaped by the inescapable logic of hyper-efficiency — a future heading straight for us. It’s not just future shock; we now have future blindness.•
A closer look at the numbers reveals our prosperity has grown increasingly top-heavy for decades, a failure that’s not an orphan. Among the factors suppressing the earnings of the vast majority are tax codes, the decline of unions, corporate pay structures, globalization and automation.
The future looks bright in the big picture, but only if we find a way to allow working-class people to participate in the wealth created. Otherwise we’ll develop a large underclass distracted intermittently by the few amazing, cheap gadgets in their pockets, by bread and Kardashians.
Investment in workers and infrastructure is key, as always. It’s worth noting that if too many jobs are automated out of existence too quickly, we may have a challenge that even education can’t remedy.
From Edward Alden and Rebecca Strauss’ smart Foreign Affairs article, “Is America Great?”:
In our own research, we have looked in detail at how the United States measures against other advanced economies on many of the attributes that underlie national competitiveness, from innovation to education. The picture is a pretty good one. On innovation, for example, which drives economic growth in wealthy nations, the United States is far ahead of any country in the world. Corporate taxes and regulations, although both in real need of reform and modernization, do not pose the serious competitive disadvantage that many Republicans have suggested. The United States has slipped in global education rankings, but there are encouraging signs of progress, with high school graduation rates recently reaching record levels.
So if the United States is doing so well compared to its economic rivals, what accounts for the political appeal of claims that it has been a loser in global competition? The answer lies in the growing disconnect between the macro-level performance of the U.S. economy, which has been reasonably good, and the economy as it is lived by many Americans, which has been far from good.
The economist Michael Porter and his colleagues at Harvard Business School have called it “an economy doing only half its job.” Porter defines a competitive economy as one in which companies can compete successfully in global markets while also supporting rising wages and living standards for ordinary citizens. U.S. companies such as Amazon, Apple, Facebook, and Google account for more than half of the top 100 companies in the world by market value, and such firms have only gained ground over the past five years. But despite this competitive triumph, wages and living standards for the average American have stagnated for decades. Real wages have been flat since the 1970s, which roughly corresponds with the time when the United States began facing tougher overseas competition, first from Japan and Germany and later from China. Young men today, who have been hit particularly hard by the disappearance of manufacturing jobs, on average earn less than their fathers did.
Porter and his colleagues argue that the biggest cause of this growing divide is the failure of governments, and of companies themselves, to invest in Americans—to give them the education, skills, infrastructure, and access to capital they need to prosper along with U.S. companies.•
Alvin Toffler just died, but Douglas Rushkoff, an intellectual descendant of the Future Shock author and Marshall McLuhan, continues on. The media and cultural theorist is driven more by politics–specifically politics from the Left–than his predecessors, though he’s also examining the same macro questions: What have we created with our cleverness? Is it good for us? How can we best manage the downsides?
In a smart 52 Insights Q&A, Rushkoff speaks to the American corporatocracy and what he sees as the intrusion of new tools in our lives. One comment he makes in regard to our gadgets: “Maybe they will just fade into the background. Maybe you’ll have smart devices that can get data from what you’re doing but they don’t affect you as much.”
On some level devices that gather information from us do have an impact on us, even if the process is stealthy. Much good will come from the Internet of Things, but it’s a system with no OFF switch, no escape hatch. At that point, we’re inside the machine for good.
I get on the bus every morning and I succumb to my technology addiction like everyone else, but sometimes I look up and check out how many people are actually looking out of the window rather than at their phones. It’s usually about 50/50. Do you think this trend will continue in 50 years?
It’s hard to know what will happen. I like the optimism implicit in your question, asking, what will we be like in 50 years rather than whether we will be here in 50 years. The question of how we will have adapted to technology seems to be a much smaller proportion of the impact of technology than all of the externalized impacts of technology that we don’t talk about.
I’m less concerned with how the iPhone is changing my vision than the two refrigerators’ worth of electricity the iPhone is using when it’s operating, or the African kids that are being sent into caves to get rare earth metals to put into my battery, or the electronic waste that’s being buried in South America and China, or the children of Pakistan who are being poisoned by old CRT monitors. These people are going to be impacted way more. In my own crowd and the young people I talk to, I actually don’t see people so enamoured of their technology as older people. It’s the boomer and maybe some Gen-X-ers or Gen-Y who love all of this stuff, their Internet of Things. Younger people either know they can’t afford that stuff or really just don’t care so much. They don’t see it as so central to their experience. Yeah there’s a lot of texting going on but even that. . . I look at my daughter’s class, they’re 10 or 11 years old and they don’t like the stuff. I think we’re going to see people using technology much more appropriately in the future and in a more limited fashion. That could mean a very big disruption for the growth of all these internet service companies that think we will just want to do more and more. Then again maybe they will just fade into the background. Maybe you’ll have smart devices that can get data from what you’re doing but they don’t affect you as much.
What really keeps you up at night? What are you most concerned about in society?
The thing that disturbs me most is when people accept the artifacts that have been left for them as the given circumstances of nature. When people look at corporate capitalism, or Facebook, or the religion they have, as if they were given by god and not invented by people. It’s this automatic acceptance of how things are that leads to a sense of helplessness about changing any of them. I am deeply concerned about the environment and the degree to which temperatures are rising, and how the worst expectations of environmentalists have already been surpassed.•
The first driverless car fatality has apparently occurred, which is a sad situation that’s received a great deal of press. Certainly autonomous risks should be investigated and discussed, but since that crash there’ve been numerous road deaths in America in standard vehicles that have received scant attention. Those are the kind of accidents we’re used to and seem acceptable because human hands were involved. No one should think driverless cars won’t malfunction, especially in their early days–they’ll hopefully greatly reduce deaths, not eliminate them.
When this technology arrives, it will likely be great for traffic safety but a huge blow to American labor, as drivers of trucks, taxis and delivery vehicles will be made redundant. What will happen to them and all the businesses they support while on the road? In a recent Wall Street Journal article, Baidu’s Andrew Ng suggested a new New Deal might be the answer. An excerpt:
IS IT TIME TO RETHINK YOUR CAREER? Andrew Ng, chief scientist at Chinese Internet giant Baidu, on how AI will impact what we do for a living
Truck driving is one of the most common occupations in America today: Millions of men and women make their living moving freight from coast to coast. Very soon, however, all those jobs could disappear. Autonomous vehicles will cover those same routes in a faster, safer and more efficient manner. What company, faced with that choice, would choose expensive, error-prone human drivers?
There’s a historical precedent for this kind of labor upheaval. Before the Industrial Revolution, 90% of Americans worked on farms. The rise of steam power and manufacturing left many out of work, but also created new jobs—and entirely new fields that no one at the time could have imagined. This sea change took place over the course of two centuries; America had time to adjust. Farmers tilled their fields until retirement, while their children went off to school and became electricians, factory foremen, real-estate agents and food chemists.
Truck drivers won’t be so lucky. Their jobs, along with millions of others, could soon be obsolete. The age of intelligent machines will see huge numbers of individuals unable to work, unable to earn, unable to pay taxes. Those workers will need to be retrained—or risk being left out in the cold. We could face labor displacement of a magnitude we haven’t seen since the 1930s.
In 1933, Franklin Roosevelt’s New Deal provided relief for massive unemployment and helped kick-start the economy. More important, it helped us transition from an agrarian society to an industrial one. Programs like the Public Works Administration improved our transportation infrastructure by hiring the unemployed to build bridges and new highways. These improvements paved the way for broad adoption of what was then exciting new technology: the car.
We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.•
Technology could enable abundance by century’s end, but it will be a rough road getting there (in our 3D-printed driverless EVs) if we don’t mitigate the near-term challenges of the transition (e.g., technological unemployment) with wise and nimble policy. In a Forbes article, Bernard Marr examines the possibility of machine-powered “Luxury Communism.” An excerpt:
What if the prognosis weren’t all doom and gloom? What if all this automation were instead to provide so much luxury that we enter a post-work era, when humans are required to do very little labor and machines provide everything we need?
This is the theory of “Fully Automated Luxury Communism,” an idea and ideology that in the (near) future, machines could provide for all our basic needs, and humans would be required to do very minimal work — perhaps as little as 10–12 hours a week — on quality control and similar oversight, to ensure luxury for everyone.
Robots, AI, machine learning, big data, etc. could basically make human labor redundant and instead of creating even further inequalities it could lead to a society where everyone lives in luxury and where machines produce everything.•
A couple of concerns come to mind in regard to allowing algorithms to remove bureaucracy from the legal system, whether we’re talking parking tickets or pre-nups. As prejudices are baked into people, they can also be keyed into algorithms. A modicum of careful oversight should be able to mitigate this problem, however, especially if we’re not talking about criminal cases. A more practical problem is that the public-sector and legal jobs that will be lost have long been among the steadiest, a longtime entry into the middle class. The U.S. has dragged its feet with such automation, but Europe is moving forward apace. It seems a matter of time until there’s near-universal adoption.
Buyers and sellers on EBay use the site’s automated dispute-resolution tool to settle 60 million claims every year. Now, some countries are deploying similar technology to let people negotiate divorces, landlord-tenant disputes, and other legal conflicts, without hiring lawyers or going to court.
Couples in the Netherlands can use an online platform to negotiate divorce, custody, and child-support agreements. Similar tools are being rolled out in England and Canada. British Columbia is setting up an online Civil Resolution Tribunal this summer to handle condominium disputes; it will eventually process almost all small-claims cases in the province. Until now, says Suzanne Anton, the province’s minister of justice, “if you had a complaint about noise or water coming through your ceiling, you might have to go to the Supreme Court,” spending years and thousands of dollars to get a ruling.
These online legal tools are similar to EBay’s system, which uses algorithms to guide users through a series of questions and explanations to help them reach a settlement by themselves. Like EBay, the services can bring in human adjudicators as a last resort. Several of the new platforms were designed with help from Colin Rule, who started EBay’s dispute-resolution unit in 2004 and ran it until 2011. Soon after leaving EBay, Rule started Modria, a San Jose-based company that markets dispute-resolution software for e-commerce.
Employing online tools to settle routine legal disputes can improve access to justice for people who can’t afford to hire a lawyer, while freeing up court dockets for more complex cases, enthusiasts say.•
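The guided-question approach described above — walking users through a series of questions and escalating to a human adjudicator only as a last resort — can be sketched as a simple decision tree. This is a toy illustration, not EBay’s or Modria’s actual logic; every question, answer and outcome below is invented for the example.

```python
# Toy dispute-triage tree. Each node maps to (question, {answer: next node});
# terminal nodes map to an outcome. Real ODR platforms are far more elaborate.
TREE = {
    "start": ("Did the item arrive?", {"no": "refund", "yes": "condition"}),
    "condition": ("Was it as described?", {"yes": "close", "no": "offer"}),
    "offer": ("Accept a partial refund?", {"yes": "partial_refund", "no": "human"}),
}
OUTCOMES = {
    "refund": "full refund issued",
    "close": "case closed, no action",
    "partial_refund": "partial refund issued",
    "human": "escalated to human adjudicator",
}

def resolve(answers, node="start"):
    """Walk the tree using the claimant's answers; any unanswered or
    unrecognized response escalates to a human adjudicator."""
    while node not in OUTCOMES:
        _question, branches = TREE[node]
        node = branches.get(answers.get(node, ""), "human")
    return OUTCOMES[node]
```

The key design point, mirrored from the excerpt, is that the algorithm never renders a final judgment on contested facts — ambiguity falls through to a person.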
Alvin Toffler, the sociological salesman who anticipated and feared tomorrow, just died at 87.
Has there ever been a biography written about the man whose pants were forever being scared off? I’d love to know what it was about his life that positioned him, beginning in the 1960s, to look ahead at our future and be shocked. There was always a strong sci-fi strain to his work, though it’s undeniably important to think about how science and technology could go horribly wrong. By imagining the worst, perhaps we can avoid it. Like anyone else who toiled in speculative markets, Toffler was sometimes way off the mark, though he was also incredibly prescient on other occasions.
Below is an excerpt from his BBC obituary and a few Afflictor posts about Toffler from over the years.
From the BBC:
Although many writers in the 1960s focused on social upheavals related to technological advancement, Toffler wrote in a page-turning style that made difficult concepts easy to understand.
Future Shock (1970) argued that economists who believed the rise in prosperity of the 1960s was just a trend were wrong – and that it would continue indefinitely.
The Third Wave, in 1980, was a hugely influential work that forecast the spread of emails, interactive media, online chat rooms and other digital advancements.
But among the pluses, he also foresaw increased social alienation, rising drug use and the decline of the nuclear family.
Not all of his futurist predictions have come to pass. He thought humanity’s frontier spirit would lead to the creation of “artificial cities beneath the waves” as well as colonies in space.
One of his most famous assertions was: “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn and relearn.”•
“Who Is To Write The Evolutionary Code Of Tomorrow?”
A passage about genetic engineering, a fraught field but one with tremendous promise, from a 1978 Omni interview with Toffler conducted by leathery beaver merchant Bob Guccione:
What’s good about genetic engineering?
Genetic manipulation can yield cheap insulin. It can probably help us solve the cancer riddle. But, more important, over the very long run it could help us crack the world food problem.
You could radically reduce reliance on artificial fertilizers–which means saving energy and helping the poor nations substantially. You could produce new, fast-growing species. You could create species adapted to lands that are now marginal, infertile, arid, or saline. And if you really let your long-range imagination roam, you can foresee a possible convergence of genetic manipulation, weather modification, and computerized agriculture–all coming together with a wholly new energy system. Such developments would simply remake agriculture as we’ve known it for 10,000 years.
What is the downside?
Horrendous. Almost beyond our imagination. When you cut up genes and splice them together in new ways, you risk the accidental escape from the laboratory of new life forms and the swift spread of new diseases for which the human race has no defenses.
As is the case with nuclear energy, we have safety guidelines. But no system, in my view, can ever be totally fail-safe. All our safety calculations are based on certain assumptions. The assumptions are reasonable, even conservative. But none of the calculations tell us what happens if one of the assumptions turns out to be wrong. Or what to do if a terrorist manages to get hold of the crucial test tube.
A lot of good people are working to tighten controls in this field. NATO recently issued a report summarizing the steps taken by dozens of countries from the U.S.S.R. to Britain and the U.S. But what do we do about irresponsible corporations or nations who just want to crash ahead? And completely honest, socially responsible geneticists are found on both sides of an emotional debate as to how–or even whether–to proceed.
Farther down the road, you also get into very deep political, philosophical, and ecological issues. Who is to write the evolutionary code of tomorrow? Which species shall live and which shall die out? Environmentalists today worry about vanishing species and the effect of eliminating the leopard or the snail darter from the planet. These are real worries, because every species has a role to play in the overall ecology. But we have not yet begun to think about the possible emergence of new, predesigned species to take their place.•
“Shut Down The Public Education System”
Toffler called for the dismantling of the U.S. public-education system in a 2007 interview at Edutopia. An excerpt:
You’ve been writing about our educational system for decades. What’s the most pressing need in public education right now?
Shut down the public education system.
That’s pretty radical.
I’m roughly quoting Microsoft chairman Bill Gates, who said, “We don’t need to reform the system; we need to replace the system.”
Why not just readjust what we have in place now? Do we really need to start from the ground up?
We should be thinking from the ground up. That’s different from changing everything. However, we first have to understand how we got the education system that we now have. Teachers are wonderful, and there are hundreds of thousands of them who are creative and terrific, but they are operating in a system that is completely out of time. It is a system designed to produce industrial workers….
The public school system is designed to produce a workforce for an economy that will not be there. And therefore, with all the best intentions in the world, we’re stealing the kids’ future.
Do I have all the answers for how to replace it? No. But it seems to me that before we can get serious about creating an appropriate education system for the world that’s coming and that these kids will have to operate within, we have to ask some really fundamental questions.
And some of these questions are scary. For example: Should education be compulsory? And, if so, for who? Why does everybody have to start at age five? Maybe some kids should start at age eight and work fast. Or vice versa. Why is everything massified in the system, rather than individualized in the system? New technologies make possible customization in a way that the old system — everybody reading the same textbook at the same time — did not offer.•
“This Technology Is Exacting A Heavy Price”
Orson Welles narrates this 1972 documentary that McGraw-Hill produced about sociologist Toffler‘s gargantuan 1970 bestseller, Future Shock. Toffler caused a sensation with his views about the human incapacity to adapt in the short term to remarkable change, in this case of the technological variety. The movie is odd and paranoid and overheated and fun.
Edward Snowden, that mixed blessing, isn’t Joseph K., as he wasn’t traduced, but there is something Kafkaesque about his shape-shifting transition into a virtual citizen, a ghost in the machine, a BeamPro boulevardier who rolls around art museums and TED gatherings.
The former NSA contract employee is now a disembodied voice of the people–some of them–who’s found a workaround for a cancelled passport: He’s sort of become a robot. It’s no small irony that the one who struck back against the unholy marriage of Cold War politics and Digital Age tools now finds himself inside Putin’s oppressive Soviet throwback when at home and a piece of cutting-edge technology when he goes out. Despite the awareness he fostered with his Paul Revere-ish leaks–“The Machines are coming!“–it seems like we’re all headed for at least the latter part of that equation.
In Andrew Rice’s excellent New York article about his encounters with the world’s most-wanted leaker, or at least his telepresence, the writer acknowledges the strangest thing about the whole disembodied setup is how easy it is to forget that the Snowden he meets is a robot. An excerpt:
Over the past few months, we have encountered one another with some regularity, and while I can’t claim to know him as a flesh-and-blood person, I’ve seen his intellect in its native habitat. He is at once exhaustively loquacious and reflexively self-protective, prone to hide behind smooth oratory. But occasionally, he has let down his guard and talked like a human being. “I’m able to actually have influence on the issues that I care about, the same influence I didn’t have when I was sitting at the NSA,” Snowden told me. He claims that many of his former colleagues would agree that the programs he exposed were wrongfully intrusive. “But they have no voice, they have no clout,” he said. “One of the weirder things that’s come out of this is the fact that I can actually occupy that role.” Even as the White House and the intelligence chiefs brand him a criminal, he says, they are constantly forced to contend with his opinions. “They’re saying they still don’t like me — tut-tut, very bad — but they recognize that it was the right decision, that the public should have known about this.”
Needless to say, it is initially disorienting to hear messages of usurpation emitted, with a touch of Daft Punk–ish reverb, from a $14,000 piece of electronic equipment. Upon meeting the Snowbot, people tend to become flustered — there he is, that face you know, looking at you. That feeling, familiar to anyone who’s spotted a celebrity in a coffee shop, is all the more strange when the celebrity is supposed to be banished to the other end of the Earth. And yet he is here, occupying the same physical space. The technology of “telepresence” feels different from talking to a computer screen; somehow, the fact that Snowden is standing in front of you, looking straight into your eyes, renders the experience less like enhanced telephoning and more like primitive teleporting. Snowden sometimes tries to put people at ease by joking about his limitations, saying humans have nothing to fear from robots so long as we have stairs and Wi-Fi dead zones in elevators. Still, he is quite good at maneuvering on level ground, controlling the robot’s movements with his keyboard like a gamer playing Minecraft. The eye contact, however, is an illusion—Snowden has learned to look straight into his computer’s camera instead of focusing on the faces on his screen.
Here’s the really odd thing, though: After a while, you stop noticing that he is a robot, just as you have learned to forget that the disembodied voice at your ear is a phone. Snowden sees this all the time, whether he is talking to audiences in auditoriums or holding meetings via videoconference. “There’s always that initial friction, that moment where everybody’s like, ‘Wow, this is crazy,’ but then it melts away,” Snowden told me, and after that, “regardless of the fact that the FBI has a field office in New York, I can be hanging out in New York museums.” The technology feels irresistible, inevitable. He’s the first robot I ever met; I doubt he’ll be the last.•
When it comes to human-made material goods, it would seem that cheap abundance is within sight for the first time in our species’ history. The rub is that the cost of getting there has been sky-high environmentally, with scary repercussions staring us in the face.
Yesterday, there was a picture on r/pics of a California lake (almost empty) in 2014 and the same lake with much more water in it from this year. How are things going in California? (I realize you no longer live there.) Are conditions improving there? What needs to happen now to get them even better?
Yes, I did too. I hope that some of the 5000+ people who upvoted it see your comment :)
“Things” are ok. The environment is really under stress due to drought and climate change (hard to separate), and El Niño didn’t fix anything. The biggest problem in the State is groundwater, which is barely regulated and hardly measured (there are laws now, but it will take 5+ years to implement anything).
People in cities may say “nothing’s wrong” b/c their taps flow but they are missing the environmental and groundwater stress.
I’m not an optimist in terms of improvements, as the dominant perspective is growth of population, agriculture and urban landscapes. All of these are increasing demand in a system that’s “managed” to the hilt, meaning there’s very little space for safety if things go wrong. (The big nightmare is an earthquake that “disturbs” the Delta, thereby cutting off water to SF as well as half of SoCal. That could happen tomorrow.)
I’ve suggested for years that California needs to reduce water transfers, to get regions to focus more on local supplies (i.e., recycling wastewater, saving rainwater) rather than calling for more dams or transfers.
I moved to the Netherlands b/c I don’t trust California’s water management to do much more than get by, with a good chance it will fail (it already has for communities losing access to well water or facing polluted well water).
Do you view cities like LA and LV as unsustainable, or is there a way for large cities to exist in desert climates without robbing other regions?
Good question. EVERY city is unsustainable in some way, due to the way they need to concentrate food, energy, water, etc. Those that are farther from those sources thus need to be smaller. LA was amazing back in the 30s, but grew off imported water (you can even go back earlier, to the 1913 LA Aqueduct if you want to pinpoint an issue).
The main idea is that ALL cities should pay the full cost of their resource use/environmental impact. Very few do, but it’s FAR worse when politicians allow them to get away with stuff/subsidize their growth.
If the planet is made up of mostly water, why are we concerned about the scarcity of water?
I’ve always wondered – why not just price water according to its scarcity? Give the first x gallons cheap or free to residential customers, then charge against an accelerating price scale? That would dissuade large inefficient users, but still allow people to stay clean and healthy in their homes.
You’re right in principle, but the details should be implemented differently. More.
What little things can people do to help use less water?
Little: Turn off taps when not using water. Bigger: Don’t have a lawn. Fix leaks. Biggest: Don’t eat meat.
Mega: Get involved in regional water management, to help those who do not care as much change their habits (via changed incentives — prices — more than preaching).•
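The increasing-block tariff proposed in the exchange above — a cheap or free first allotment, then accelerating prices — is easy to make concrete. The tiers and rates below are purely illustrative assumptions, not any utility’s actual schedule.

```python
def water_bill(gallons,
               tiers=((5_000, 0.002),      # first 5,000 gal: cheap lifeline rate
                      (10_000, 0.006),     # next 5,000 gal: triple the rate
                      (float("inf"), 0.02))):  # beyond that: punitive rate
    """Increasing-block water tariff. Each tier is
    (upper_bound_in_gallons, price_per_gallon); usage is billed
    tier by tier, so heavy users pay sharply more at the margin.
    All numbers are invented for illustration."""
    bill = 0.0
    lower = 0
    for upper, rate in tiers:
        if gallons > lower:
            bill += (min(gallons, upper) - lower) * rate
        lower = upper
    return bill
```

Under this schedule a household using 3,000 gallons pays a token amount, while one using 12,000 gallons pays most of its bill on the top tier — which is exactly the dissuasion-without-deprivation effect the questioner describes.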
For those raised under capitalism who’ve absorbed the teachings of that system, a post-scarcity Second Machine Age sans labor is awfully difficult to envision. It’s essentially the technology-driven collapse that Karl Marx envisioned. Something has to replace the work that disappears, doesn’t it? Some mixed blessing for us to enjoy/endure? Even if intelligent machines can somehow make such a toil-free scenario possible, we’re not even sure that we want it. Few aspire to drudgery, but genuine productivity feels good.
Eventually, and maybe not gradually enough to make the transition smooth, we’ll be inside a new machine that operates under different rules, and we’ll have to likewise reinvent ourselves. Right now the spectre of mass technological unemployment has allowed the idea of Universal Basic Income to capture hearts and minds in Silicon Valley, a discussion that has reverberated far beyond that well-appointed patch, even into the Oval Office. Not all the plans are equal–or even good–but they are being discussed in halls of power.
Two excerpts below from: 1) President Obama discussing Basic Income in a Bloomberg interview, and 2) Ilana E. Strauss’ Atlantic piece about the possibility of a labor-free society that doesn’t promote ennui.
Some economists suggest that globalization is going to start targeting all those services jobs. If you want to keep up wages in that area, doesn’t it push us toward something like a universal basic income?
The way I describe it is that, because of automation, because of globalization, we’re going to have to examine the social compact, the same way we did early in the 19th century and then again during and after the Great Depression. The notion of a 40-hour workweek, a minimum wage, child labor laws, etc.—those will have to be updated for these new realities. But if we’re smart right now, then we build ourselves a runway to make that transition less abrupt, because we’re still growing, and we’re beating the competition around the world. Look, for example, at smart cars, where the technology basically exists now. The number of people who are currently employed driving vehicles of some sort is enormous. And some of those jobs are pretty good jobs. You know, people are worried about Uber, but the fear is actually driverless Uber, right? Or driverless buses or what have you.
Now, there are all kinds of reasons why society may be better off if smart cars are the norm. Significant drops in traffic fatalities, much more efficient use of the vehicle, so that we’re less likely to emit as much pollution and carbon that causes climate change. You know, drastically reduced traffic, which means we’re giving back hours to families that are currently taken up in road rage. All kinds of reasons why we may want to do that. But if we haven’t given any thought to where are the people who are currently making a living driving transferring into, then there’s going to be deep resistance.
So trying to separate out issues of efficiency and productivity from issues of distribution and how people experience their own lives and their ability to take care of their families, I think, is a bad recipe. It’s not an either/or situation. It’s a both/and situation.•
People have speculated for centuries about a future without work, and today is no different, with academics, writers, and activists once again warning that technology is replacing human workers. Some imagine that the coming work-free world will be defined by inequality: A few wealthy people will own all the capital, and the masses will struggle in an impoverished wasteland.
A different, less paranoid, and not mutually exclusive prediction holds that the future will be a wasteland of a different sort, one characterized by purposelessness: Without jobs to give their lives meaning, people will simply become lazy and depressed. Indeed, today’s unemployed don’t seem to be having a great time. One Gallup poll found that 20 percent of Americans who have been unemployed for at least a year report having depression, double the rate for working Americans. Also, some research suggests that the explanation for rising rates of mortality, mental-health problems, and addiction among poorly educated, middle-aged people is a shortage of well-paid jobs. Another study shows that people are often happier at work than in their free time. Perhaps this is why many worry about the agonizing dullness of a jobless future.
But it doesn’t necessarily follow from findings like these that a world without work would be filled with malaise.•
The New York Times has been dealing with computers in one way or another since the 1970s, but the tool went from aid to threat once the Internet took hold in the middle of the 1990s. In under a decade, the old way of doing business became passé and clinging to it a danger. The real challenge is for the former newsprint company to continue reinventing itself in ways that won’t degrade the journalism. It’s not an easy task, and it’s especially difficult for reporters to deal with a sky perpetually falling while trying to do an often busy and bruising job. It’s no wonder new Public Editor Elizabeth Spayd told Poynter she’ll “pay attention to the newspaper’s business efforts as well” as the content. There’s no separating them anymore.
In a Politico piece, the excellent Joe Pompeo examines the company’s maneuverings in this fraught media age. An excerpt:
On the business side, the Times’ decision in 2011 to start charging people to read an unlimited number of articles on nytimes.com proved to be a life-saving calculation, bringing the Times more than a million digital-only subscribers to date and nearly $200 million in circulation revenue last year alone.
But the water is rising again and those numbers must grow. Soon, the Times is likely to hit a ceiling on how many people in its existing audience it can convert into paying subscribers. If it can’t get more money out of the same customers, it must find new ones.
That’s what’s behind a plan implemented earlier this year that puts $50 million behind the prospect of getting many new readers to open their wallets in foreign markets, where the Times is creating digital editions tailored to non-Americans.
A unit creating content that might just pass for journalism were it not paid for by advertisers also is making dents in the Times’ march toward $800 million in digital revenues by 2020, an ambitious goal considering digital revenues were just south of $400 million in 2015.
The problem is that while print advertising is still a big slice of company revenues ($441.6 million out of $1.58 billion in 2015), it’s been plummeting year after year as marketers become hotter on digital, and tech giants like Facebook and Google dominate online ad growth.•
In the same decade humans set foot on the moon, the most soaring technological achievement of our species, Sir Edmund Hillary went on an expedition to the Himalayas to search for the Abominable Snowman. There are still some among us all these years later who believe Yeti roams the Earth and the moonwalk was faked.
Great scientific knowledge and utter disregard for facts can exist in the same moment. There’s perhaps no more perplexing aspect of modern life than conspiracy theories mucking up the works, from chemtrails to 9/11 Truthers to Birthers to anti-Vaxxers. Endless information was supposed to set us free from such madness. It did not. The new tools have made it easier to spread lies, to conduct a war on info, to even run an essentially fact-free Presidential campaign.
Excerpts from two articles follow: 1) Christopher Mele’s New York Times article about those who believe the Orlando massacre was a staged or “false flag” event, and 2) William Finnegan’s New Yorker commentary on Trump’s appreciation for unhinged conspiracist Alex Jones, who believes pretty much every job an inside one.
Jesse Walker, the author of The United States of Paranoia: A Conspiracy Theory, said fear, the human need to find patterns and tell stories, and the recognition that conspiracies are not impossible help fuel such theories. The stories — no matter how outlandish — can bring meaning and a measure of comfort in a world that can make no sense, he said.
False-flag theories have long been around. One focused on the assassination attempt in 1835 of President Andrew Jackson, during which the president fought off a gunman whose two weapons misfired. Conspiracy theorists at the time believed Jackson had hired the gunman as a way to drum up sympathy for himself, Mr. Walker said.
Unlike the 1800s, stories today benefit from instant delivery through the internet and social media. One of the better-known purveyors is Alex Jones, who hosts an internet show at the website infowars.com. The day of the Orlando shooting, he posted a video in which he asserted that the government had let the massacre happen so it could pass “hate laws to deal with right-wingers” and to disarm gun owners. He did not respond to an email seeking comment.
Mike Rothschild of Pasadena, Calif., who has researched and written about conspiracy theories, described the world of false-flag believers as a “bank of awakened internet sleuths that has got it all figured out.” They see it as their duty to warn others about secret elites in government who are plotting against citizens, he said.•
On December 2nd, while the awful news from San Bernardino was erupting, bit by unconfirmed bit, I was surprised by the crisp self-assurance of a couple of bloggers whose names were new to me. They were on it—number of victims, names of shooters, police-radio intercepts. Soon, though, the bloggers veered off from the story that other news sources were slowly, frantically putting together. The information being released by the authorities did not match the information the bloggers were unearthing, and the latter quickly deduced that, like other “mass shootings” staged by the government, in Newtown, Connecticut, and elsewhere, this was a “false flag” operation. The official account was fiction. One Web site that carried the work of these “reporters” was called Infowars. I made do with other sources for news. But I kept an eye on Infowars and its proprietor, Alex Jones, who is a conspiracy theorist and radio talk-show host in Austin, Texas. Jones’s guest on his show the morning of the shooting had been, as chance would have it, Donald Trump. Jones had praised Trump, claiming that ninety per cent of his listeners were Trump supporters, and Trump had returned the favor, saying, “Your reputation is amazing. I will not let you down.”
Jones’s amazing reputation arises mainly from his high-volume insistence that national tragedies such as the September 11th terror attacks, the Oklahoma City bombing, the Sandy Hook elementary-school shooting, and the Boston Marathon bombing were all inside jobs, “false flag” ops secretly perpetrated by the government to increase its tyrannical power (and, in some cases, seize guns). Jones believes that no one was actually hurt at Sandy Hook—those were actors—and that the Apollo 11 moon-landing footage was faked. Etcetera. Trump also trades heavily in imaginary events and conspiracy theories. He gained national traction on the American right by promoting the canard that President Obama was born outside the United States—a race-baiting lie that the candidate still toys with on Twitter. But birtherism is only the best-known among Trump’s large collection of creepy political fairy tales. You’ve probably heard the one about vaccines and autism. He even pushed that during a Presidential primary debate, on national television. Do you really believe that Obama won the 2012 election fairly? Wrong. Fraud. (At the same time, it’s Mitt Romney, total loser, who let everyone down.) Bill Ayers, not Obama, wrote “Dreams from My Father.” There is no drought in California, and the Chinese, outwitting us per usual, invented the concept of global warming to undermine American manufacturing. And so on.