The American military has thus far refused to consider using autonomous weapons systems, which is good, but is it our choice alone to make? If one world power (or a smaller, rogue nation aspiring to be one) were to deploy such machines, how would others resist? The technology is trending toward faster, cheaper and more out of control, so it’s not difficult to imagine such a scenario. I think in the long run these systems are inevitable, but hopefully there will be much more time to prepare for what they’ll mean.
In a Financial Times column, John Thornhill writes of fears of LAWS (Lethal Autonomous Weapons Systems), which could fall into the wrong hands, like warlords or tyrants. Of course, it’s easy to make the argument that all hands are the wrong ones. The opening:
Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.
Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.
Who would not think that a good use of technology?
As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.
In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”•
Before media connected us all every second, there was Robert L. Ripley.
The creator of Believe It or Not!–a comic strip in the Hearst papers, then a radio program and finally a T.V. show–traveled the globe beginning in the 1920s in search of oddities and curiosities to entertain and inform Americans, long before travel abroad was something possible for most. His items didn’t exactly go viral–everyone caught them all at once during the Newspaper Age. In Ripley’s own wry way, as a spiritual descendant of P.T. Barnum, he worked to make the world smaller, to establish a Global Village.
As an early King of All Media, he was aided in his amateur anthropology and archaeology by his able researcher, Norbert Pearlroth, and helped immensely by his onetime sports editor, Walter St. Denis, who suggested the three-word, exclamatory column title that remains a recognizable phrase even in the Internet Age.
Heart problems ended Ripley’s life young, as recorded in his obituary in the May 28, 1949 Brooklyn Daily Eagle.
Speaking of solid, middle-class jobs being disappeared by technology, Elon Musk has tipped his hand, if just a bit, on a driverless vehicle that he believes can replace much of public transportation. Could be great for congestion and environmentalism, though not so much for bus drivers.
Tesla’s Chief Executive Officer Elon Musk is working on a self-driving vehicle he says could replace buses and other public transport in order to reduce traffic in cities. But he’s keeping the development a secret.
“We have an idea for something which is not exactly a bus but would solve the density problem for inner city situations,” Musk said Thursday at a transport conference in Norway. “Autonomous vehicles are key,” he said of the project, declining to disclose more. “I don’t want to talk too much about it. I have to be careful what I say.”•
Aspects of the Gig Economy benefit consumers but are terrible for most workers. An Uberization of Labor has increased in the last few years and seems poised to become a large-scale and entrenched part of American society. If it is here to stay, what’s even worse is many of those piecemeal positions may eventually also be eliminated by automation.
Surprisingly, many Gig workers prefer their office-less lives because of the “greater freedom” it affords them, which is odd since most studies find that this new brand of freelancer has to hustle more hours than if trapped in a cubicle. Are bosses and office politics so awful that we would rather surrender security, vacation days and benefits to not be under the thumb of a fellow human being, even if an algorithm runs us ragged? It would seem so. We find each other intolerable enough to be sold on a Libertarian dream that may end up a nightmare.
According to a 2014 study commissioned by the Freelancers Union, 53 million Americans are independent workers, about 34 percent of the total workforce. A study from Intuit predicts that by 2020, 40 percent of US workers will fall into this category.
While there is considerable disagreement over this projection, what is clear is that “more and more jobs are being moved to independent contractor status,” says Jeffrey Pfeffer, a professor of organizational behavior at Stanford University. Pfeffer cites a recent paper that found that “the percentage of workers engaged in alternative work arrangements rose from 10.1 percent in February 2005 to 15.8 percent in late 2015.” This rise accounts for over 9 million people — more than all of the net employment growth in the US economy over that decade.
To be clear, employers are driving the change. Between 2009 and 2013, the unemployment rate was more than 7 percent, suggesting workers were turning to gigs because they didn’t have a choice. But that’s not to say most independent workers aren’t happy with their job situations. According to the Freelancers Union, a 300,000-plus member nonprofit, nearly nine in 10 of its members surveyed said they would not return to a traditional job if given the chance.•
Moscow is of two minds. To some extent, the capital of Russia must toe the line for Vladimir Putin, a capo with nuclear capabilities, whose backward thinking has dashed the national economy and threatened a new Cold War. But Moscow also rejects the retrograde big picture. The city openly embraces the future, one that’s not only an explosion of conspicuous consumption but also is liveable and sustainable.
In their Spiegel profile of a city in transition, Christian Neef and Matthias Schepp write that “the avant-garde triumphs on Moscow stages” in reference to cutting-edge theater, but all the world’s a stage. An excerpt about Technopolis:
City officials chose a former industrial ruin as the site of Technopolis, the largest of 19 new high-tech parks. The site once housed the Lenin Komsomol auto plant, which the Soviets built in 1930 in collaboration with Ford. After World War II, the plant produced the Moskvich, a copy of Germany’s Opel Kadett.
A real estate developer is now building a giant shopping mall on the site, and the city government has plans to build housing and offices for tens of thousands of people, skyscrapers included. But the pièce de résistance is Technopolis. Several dozen innovative companies have moved into one of the old factory buildings, including a manufacturer of computer-guided drones that deliver products from medication to pizza. City officials were enthusiastic about the company, while military and intelligence officials have voiced security concerns.
The startups are attracting specialists like nanophysicist Irina Rod. She has returned to Russia from the West, where many of her colleagues had emigrated to, “because, with Technopolis, they have finally established the conditions needed to work properly,” she says. Rod, who conducted research at the University of Duisburg-Essen in western Germany for seven years, has spent the last two years working for a joint venture of the Dutch firm Mapper and Russian high-tech holding company Rosnano.
The city government has rolled out the red carpet for such investors, waiving property taxes, reducing corporate income tax by a quarter, setting rents at below market level and guaranteeing a maximum waiting period of six months from the date a startup files an application for incorporation to the date of registration. “For Russia and our sluggish and often corrupt bureaucracy, that is sensational,” says Rod.
The 35-year-old is standing in a clean room, which has special protective features against contamination. She is wearing a white astronaut suit over a sweater and jeans, and her long, blonde hair is tucked under a white hood. Rod is in charge of quality control for microscopically small electronic lenses, which guide beams inside large 3-D printers.
She originally left the country because Russian microscopes were inadequate for the highly specialized research she does. “But now Moscow has an advantage,” she says. “The elevators for rapid professional advancement move twice as fast here.”•
The shocking, recent findings of economist Anne Case and her husband Angus Deaton regarding the dying off of white, middle-aged Americans have been questioned, as all such eye-popping results should be, but a study of U.S. suicide rates between 1999 and 2014 seems to support their work, suggesting even that the problem is wider and deeper.
The new report records a scarily large spike in citizens taking their own lives, close to 25%, and the surge cuts across most racial, gender and age groups. It could be the result of the hollowing out of the middle class or the economic collapse or the opioid epidemic or the shift to a more technological age, but it’s probably a confluence of all those things and others. It may be a mismatch disease of some sort, but a mental one.
WASHINGTON — Suicide in the United States has surged to the highest levels in nearly 30 years, a federal data analysis has found, with increases in every age group except older adults. The rise was particularly steep for women. It was also substantial among middle-aged Americans, sending a signal of deep anguish from a group whose suicide rates had been stable or falling since the 1950s.
The suicide rate for middle-aged women, ages 45 to 64, jumped by 63 percent over the period of the study, while it rose by 43 percent for men in that age range, the sharpest increase for males of any age. The overall suicide rate rose by 24 percent from 1999 to 2014, according to the National Center for Health Statistics, which released the study on Friday. …
The data analysis provided fresh evidence of suffering among white Americans. Recent research has highlighted the plight of less educated whites, showing surges in deaths from drug overdoses, suicides, liver disease and alcohol poisoning, particularly among those with a high school education or less. The new report did not break down suicide rates by education, but researchers who reviewed the analysis said the patterns in age and race were consistent with that recent research and painted a picture of desperation for many in American society.•
Somehow missed the Atlantic interview from a couple weeks ago that Ross Andersen conducted with Russian billionaire Yuri Milner, who’s dedicated $100 million to speed tiny probes to Alpha Centauri in just 20 years’ time. This article is better and deeper by far than anything else I’ve read on the subject, revealing how and why the entrepreneur, named for Gagarin and raised on Asimov, believes he can accomplish his mission and explaining the smaller details (e.g., ground-based laser beams vs. space-based).
An excerpt about the desert power station that is planned to propel the crafts:
Milner told me that a ground-based laser could run off a giant power plant devoted solely to the mission. It could be a solar array in the Atacama desert, given how much sunlight pours onto its stark landscape. To make it work, the array would have to stretch for tens of miles, and it would need a battery large enough to store fodder for the daily firing of the world’s most powerful laser cannon.
The laser team would need to time its daily blast carefully, to avoid destroying the satellites and planes that pass overhead. When fired, the beam would shoot up through the atmosphere, and slam into the disc-like probe, sending it hurtling toward the edge of the solar system. After only a few minutes, the probe would be traveling at a significant fraction of the speed of light. It would pass Mars in less than an hour. The next day, it would streak by Pluto. (New Horizons took 9 years to achieve this feat.) As the probe headed deeper into the Kuiper belt’s recesses, another one would pop out from the mothership, and float into the laser’s line of sight.
“If you have a reasonable sized battery, and a reasonable sized array, and a reasonable sized power station, you probably can do one shot a day,” Milner told me. “And then you recharge and shoot again. You can launch one per day for a year and then you have hundreds on the way.”
By sending a whole stream of probes, you get more data, and also redundancy. Any encounter with interstellar dust would be fatal for a thin, flimsy disc traveling at cosmic speeds. A few hundred probes would probably be enough to guarantee that one slipped through—although it’s not a certainty.•
Great sadness over the death of Prince, who was as good as any pop musician of his era and probably better at his peak, though I wouldn’t be surprised if his demise was hastened by living inside a sealed bubble, a delusion of his own design, much the same as Elvis Presley and Michael Jackson. When you recuse yourself from the world, the mind tends to race, and it doesn’t always wind up in a safe place. At any rate, terrible to see him go so young.
In 2010, Peter Willis of the Mirror visited Paisley Park for what he terms the “most bizarre interview I’ve ever had with a celebrity.” An excerpt:
Unlike most other rock stars, he had banned YouTube and iTunes from using any of his music and had even closed down his own official website.
He said: “The internet’s completely over. I don’t see why I should give my new music to iTunes or anyone else. They won’t pay me an advance for it and then they get angry when they can’t get it.
“The internet’s like MTV. At one time MTV was hip and suddenly it became outdated. Anyway, all these computers and digital gadgets are no good.
“They just fill your head with numbers and that can’t be good for you.”
Then he led me to his recording studio in the complex and invited me to sit in his leather swivel chair at the enormous mixing desk.
Wow! I had finally arrived at the epicentre of Prince’s world – the scene of fabled all-night-long sessions in which he apparently played up to 27 instruments.
This is where the genius behind classics such as Purple Rain, When Doves Cry, 1999 and Let’s Go Crazy created his music. The walls were a vibrant reddish purple, flickering candles lined every ledge and the smell of incense filled the air.
Prince jabbed a few buttons and hidden speakers burst into life with my preview. He looked at me for a reaction and I told him it was brilliant, as indeed it was.
“This one’s called Compassion,” said Prince. But as I tried to jot down the title he looked aghast, grabbed my wrist and pleaded: “Please, please. It’s a surprise, don’t spoil it for people.”
A religious man
He told me how these were trying times and to emphasise the point, chivvied me into another room, switched on the TV and showed me an evangelical TV documentary blaming corporate America for a range of woes from Hurricane Katrina to asthmatic children.
He said that one problem was that “people, especially young people, don’t have enough God in their lives.”•
Ray Kurzweil thinks humans who can survive to 2030 will become immortal, but I’m willing, regrettably, to bet the over.
I don’t doubt there can be radical life extension if Homo sapiens persevere long enough, but the answers may be a lot more complicated than medical science riding a wave of Moore’s Law. Computing power, nanotechnology and genetic code will all likely be key to such a breakthrough, but time, that precious thing, is sadly not on our side.
An excerpt from David Hochman’s very good Playboy Interview with Google’s Director of Engineering:
The point is health care is now an information technology subject to the same laws of acceleration and progress we see with other technologies. We’ll soon have the ability to rejuvenate all the body’s tissues and organs and develop drugs targeted specifically at the underlying metabolic process of a disease rather than taking a hit-or-miss approach. But nanotechnology is where we really move beyond biology.
Tiny robots fighting disease in our veins?
Yes. By the 2020s we’ll start using nanobots to complete the job of the immune system. Our immune system is great, but it evolved thousands of years ago when conditions were different. It was not in the interest of the human species for individuals to live very long, so people typically died in their 20s. The life expectancy was 19. Your immune system, for example, does a poor job on cancer. It thinks cancer is you. It doesn’t treat cancer as an enemy. It also doesn’t work well on retroviruses. It doesn’t work well on things that tend to affect us later in life, because it didn’t select for longevity.
We can finish the job nature started with a nonbiological T cell. T cells are, in fact, nanobots—natural ones. They’re the size of a blood cell and are quite intelligent. I actually watched one of my T cells attack bacteria on a microscope slide. We could have one programmed to deal with all pathogens and could download new software from the internet if a new type of enemy such as a new biological virus emerged.
As they gain traction in the 2030s, nanobots in the bloodstream will destroy pathogens, remove debris, rid our bodies of clots, clogs and tumors, correct DNA errors and actually reverse the aging process. One researcher has already cured type 1 diabetes in rats with a blood-cell-size device.
So if we can hang on for 15 more years, we can basically live forever?
I believe we will reach a point around 2029 when medical technologies will add one additional year every year to your life expectancy. By that I don’t mean life expectancy based on your birthdate but rather your remaining life expectancy.•
In his just completed Reddit Ask Me Anything, Sir Richard Branson had the following exchange:
Who did you look up to when you were growing up?
I looked up to Sir Freddie Laker, the pioneer of cheap air travel who was driven out of business by British Airways. It was he who told me to “Sue the bastards!” when BA tried to do the same to us. We took his advice and succeeded. He also suggested I use myself to put Virgin on the map, which led to ballooning and boating adventures.•
Excerpts follow from two pieces about the late British aviation entrepreneur.
Before his actual obituary in 2006, Laker, who helped democratize plane travel and opened the heavens to the masses, was “laid to rest” once before in the early ’80s, when the wings came off his business model. The opening of Terry Smith’s 1982 People article about Laker Airways when the Skytrain was falling:
If any doubt remained that Sir Freddie Laker is a knight of the people, the events of this month have dispelled it. The bankruptcy of Laker Airways, which in 1977 pioneered cut-rate transatlantic air travel, struck Laker’s countrymen like the demise of an old friend. Within hours of the announcement a group of private citizens set up Freddie’s Friendly Fund, quixotically dedicated to saving Skytrain. In the first 24 hours they received $1 million in pledges—and by last week the tally was up to $5.5 million. In addition, the rock group Police promised to turn over the receipts of a Los Angeles concert totaling $185,000. From two schoolboys who donated 16 pence at a Laker airport counter to the group of Liverpool businessmen who offered $1 million, grateful travelers have rallied to Sir Freddie’s side. Says Laker stewardess Lisa Holden, who has spent her recent days gathering signatures on a petition of support: “If public opinion was anything to go by, we’d never go out of business.”
Unfortunately, Laker’s $388 million debt is more than even such extraordinary goodwill can erase. A British bank hoping to raise a last-minute $64.7 million fund to keep Laker flying admitted defeat, and an offer by the airline’s 2,500 employees to take a huge pay cut was similarly futile. For Laker staffers who lobbied vociferously at 10 Downing Street last week, their plea for government assistance was largely a symbolic last stand. “I’ll go under with Laker,” said Capt. Terry Fenton. “I won’t find another job as a pilot, I won’t find any other job. There are no jobs. I’ll have to sell my house, just the same as 90 percent of the people here today.”
Laker was a victim of problems that have thrown other airlines into a tail-spin—skyrocketing fuel prices and decreased passenger traffic. When Pan Am (which lost $348 million in 1981) slashed its New York-London fare last November, Sir Freddie’s prospects darkened. The decline of the British pound also sapped his resources. Currently trying to sell the insolvent British Airways, Prime Minister Margaret Thatcher’s government announced that it could not intervene on Laker’s behalf.•
Laker lived high and fell far, though for a while he used aggressive pricing and sharp advertising to become the center of commercial aviation, making relatively cheap transatlantic fares a reality. From his 2006 New York Times obituary penned by Jeff Bailey:
Laker Airways began service in 1977 and upset the staid world of trans-Atlantic travel then dominated by British Airways, Pan American World Airways and Trans World Airlines, sharply cutting fares and greatly expanding the number of people flying across the ocean.
“Traditional airlines were flying half-empty 747’s between Europe and the U.S.,” said Robert L. Crandall, former chief executive of American Airlines. Then along came Sir Freddie, charging about $240 for a round trip, and his planes were full.
Mr. Crandall recalled going to London in the early 1980’s to meet with him, and being picked up at the airport by Sir Freddie in a Jaguar convertible and driven along country lanes at 70 to 80 miles an hour. Sir Freddie then took Mr. Crandall and his wife to a country pub for lunch.
The established carriers eventually matched Laker’s low fares. Passengers drifted away from Laker, and the company, having grown too quickly by most accounts, could not meet its debt obligations.
Laker’s liquidators later sued competing airlines, claiming a conspiracy to drive the upstart out of business. The litigation was settled, and Sir Freddie received $8 million for his Laker stock, though he lost many of his personal assets.
While it lasted, with Sir Freddie appearing in cheeky advertisements (‘Fly me,’ he said), Laker Airways was the talk of the aviation industry.•
Action figure of Fran Lebowitz or, perhaps, Harry Styles.
Fran Lebowitz thinks we look sloppy because we’re wearing clothes that weren’t intended for us. “What people don’t know is: Clothes don’t really fit you unless they’re made for you,” she told Elle last year. Sure, that’s true, but who can afford custom-made outfits?
The promise of 3D printing is that it will destabilize manufacturing, a transition that will be attended by both positive and negative results. One item on the plus side is that clothing may be much cheaper and made to fit individuals to the minutest specifications. Of course, that won’t be so good for garment workers, tailors, etc.
In a Factor-Tech piece, Lucy Ingham writes of 3D printers moving bespoke beyond the boutique and manufacturing from factories to shops. Eventually, the creative process will likely relocate even further, directly into many homes. The opening:
A project between Loughborough University and clothing manufacturer the Yeh Group is set to make it possible to manufacture entire garments and footwear that perfectly fit their intended wearer in just 24 hours.
The project, which will run for the next 18 months, has come about as a result of advancements in additive manufacturing, enabling clothing to be printed in their entirety from a raw material such as polymer, without the waste and associated costs normally associated with clothing production.
“With 3D printing there is no limit to what you can build and it is this design freedom which makes the technology so exciting by bringing to life what was previously considered to be impossible,” said Dr Guy Bingham, senior lecturer in product and industrial design at Loughborough University.
“This landmark technology allows us as designers to innovate faster and create personalised, ready-to-wear fashion in a digital world with no geometrical constraints and almost zero waste material. We envisage that with further development of the technology, we could 3D print a garment within 24 hours.”•
The term “middle class” was not always a nebulous one in America. It meant that you had arrived on solid ground and only the worst luck or behavior was likely to shake the earth beneath your feet. That’s become less and less true over the past four decades, as a number of factors (technology, globalization, tax codes, the decline of unions, the 2008 economic collapse, etc.) have conspired to hollow out this hallowed ground. You can’t arrive someplace that barely exists.
Middle class is now what you think you would be if you had any money. George Carlin’s great line that “the reason they call it the American Dream is because you have to be asleep to believe it” seems truer every day. It’s not so much a fear of falling anymore, but the fear of never getting up, at least not within the current financial arrangement. Those hardworking, decent people you see every day? They’re just as afraid as you are. They are you.
In the spirit of the great 1977 Atlantic article “The Gentle Art of Poverty” and William McPherson’s recent Hedgehog Review piece “Falling,” the excellent writer and film critic Neal Gabler has penned, also for the Atlantic, an essay about his “secret shame” of being far poorer than appearances would indicate. An excerpt:
I know what it is like to have to juggle creditors to make it through a week. I know what it is like to have to swallow my pride and constantly dun people to pay me so that I can pay others. I know what it is like to have liens slapped on me and to have my bank account levied by creditors. I know what it is like to be down to my last $5—literally—while I wait for a paycheck to arrive, and I know what it is like to subsist for days on a diet of eggs. I know what it is like to dread going to the mailbox, because there will always be new bills to pay but seldom a check with which to pay them. I know what it is like to have to tell my daughter that I didn’t know if I would be able to pay for her wedding; it all depended on whether something good happened. And I know what it is like to have to borrow money from my adult daughters because my wife and I ran out of heating oil.
You wouldn’t know any of that to look at me. I like to think I appear reasonably prosperous. Nor would you know it to look at my résumé. I have had a passably good career as a writer—five books, hundreds of articles published, a number of awards and fellowships, and a small (very small) but respectable reputation. You wouldn’t even know it to look at my tax return. I am nowhere near rich, but I have typically made a solid middle- or even, at times, upper-middle-class income, which is about all a writer can expect, even a writer who also teaches and lectures and writes television scripts, as I do. And you certainly wouldn’t know it to talk to me, because the last thing I would ever do—until now—is admit to financial insecurity or, as I think of it, “financial impotence,” because it has many of the characteristics of sexual impotence, not least of which is the desperate need to mask it and pretend everything is going swimmingly. In truth, it may be more embarrassing than sexual impotence.•
I’m more deterministic about technology than John Markoff, but I really enjoyed his latest book, Machines of Loving Grace. One tidbit from that title: “At Stanford Research Institute, Douglas Engelbart sent the entire staff of his lab through EST training and joined the board of the organization.” Engelbart is the Augmented Intelligence pioneer most known today for 1968’s “Mother of All Demos.”
EST, the so-called self-improvement system that features copious mental abuse, is the brainchild of Werner Erhard, who was born John Rosenberg and rechristened himself after a Nazi rocketeer (misspelling it!). The profane self-help peddler came to wide prominence in the 1970s, with the aid of apostles in entertainment and intellectual circles, from John Denver to Buckminster Fuller to Silicon Valley technologists. Now an octogenarian, Erhard still unabashedly calls himself a “hero.” Excerpts follow from two articles written during the EST salesman’s headiest decade.
“I wanted to get as far away from Jack Rosenberg as I could get,” explains Werner Erhard. His reason is clear: Jack Rosenberg was a loser. Born in Lower Merion, Pa., Rosenberg married at 18, fathered four children and worked as a construction company supervisor—until he dropped out in 1960. Leaving his family, he took off for St. Louis with a girlfriend (now his second wife and mother of three). To start fresh, Rosenberg adopted space scientist Wernher von Braun’s first name (misspelling it) and former West German Chancellor Ludwig Erhard’s last name. “Freudians,” Werner Erhard concedes, “would say this was a rejection of Jewishness and a seizure of strength.”
The rest of Erhard’s spiritual hegira has become legend among his cult. For eight years he worked as a crackerjack instructor of encyclopedia salesmen. Then one morning while driving down the freeway south of San Francisco, to which he moved in 1964, he was suddenly struck by the realization that “I was never going to make it. No matter how much money or recognition I achieved, it would never be enough.”
To overcome this hopelessness, Erhard experimented with just about every method guaranteed to bring peace of mind. “I tried yoga, Dale Carnegie, Zen, Scientology, encounter groups, t-groups, psychoanalysis, reality therapy, Gestalt, love, nudity, you name it,” he recalls. “But when it was over, that was not it.”
Once again, Erhard was behind the wheel when he finally “got it”—a religious happening that the faithful call “The Experience.” And what is ‘it’? Replies the Master: “What is it, is it. When you drop the effort to make your life work, you begin to discover that it’s perfect the way it is. You can relax. It’s all going to unfold.”
Not much of a message, perhaps, but as packaged, promoted and proselytized by Erhard in a two-weekend, 60-hour course (price $250), his movement, known as est (Latin for ‘it is,’ as well as Erhard Seminars Training), has turned out more than 63,000 converts in 12 U.S. cities. Another 12,000 hopefuls are on the waiting list. Among the alumni of est’s psychic boot camps are Emmy winner Valerie Harper (who thanked Erhard on TV for changing her life), Cloris Leachman, John Denver, astronaut Buzz Aldrin and activist Jerry Rubin.•
For hours mechanics have been fine-tuning the squat red-and-silver race car, while assistants check their clipboards and keep the Watkins Glen (N.Y.) bivouac free of litter and strangers. One fan wanders through in a T-shirt with the baffling slogan: ‘Before I was different, now I’m the same.’ Presently the driver emerges from an enormous van, astronaut-like in his creamy flame-proof suit, and heads for the Formula Super Vee racer (named for its Volkswagen engine). At the wave of a flag he will roar around a 3.3-mile Grand Prix course at speeds up to 130 mph.
There are 29 other qualifiers in this Gold Cup event, but only driver Werner Erhard claims he is here for the sake of mankind. Erhard, the founder of est (Erhard seminars training), says that when he slides into his 164-horsepower Argo JM4, he is raising consciousness, not merely dust.
‘Real people—you and me—feel like they don’t make any difference in this lousy world,’ says the 43-year-old Erhard. He is tall and loose-limbed with icy blue eyes; he insists on eye contact during a conversation. If his listener looks away, even momentarily, Erhard stops talking. He wants everyone to understand why he is driving fast cars these days in addition to heading the $20 million business that est has become, plus a 1977 spin-off, his program ‘to end world hunger by 1997.’ ‘I wanted to organize a high-performance team,’ Erhard continues, ‘that could master a complex skill in a very short time with winning results and show that everyone involved makes a big difference, from grease monkeys to spectators.’ In order to prove this estian point, Erhard says he considered such adventures as skydiving and karate, but rejected them as not collective enough. ‘Auto racing was perfect!’ he exclaims. ‘I hadn’t driven a car in six years and didn’t know the first thing about racing. Whatever we’d achieve, we’d achieve together.’
Campuses, theme parks and other small, contained areas in warm-weather locales seem like Ground Zero for driverless cars, since they’re usually well-maintained and more predictable and mappable than wide-open spaces. Modestly-sized neighborhoods may be in the same category. Case in point: Beverly Hills would like autonomous vehicles, which could be summoned with a smartphone, to supplement its current public-transportation system.
Futurists have suggested that one day, self-driving cars might augment or even replace public transport, but for the town elders of Beverly Hills, this future is nearer than you’d think. Earlier this month, the city’s council voted unanimously to create a program to “develop autonomous vehicles as public transportation.”
The council’s vision is for self-driving vehicles to provide “on-demand, point-to-point transportation,” with citizens “requesting a ride using their smartphone.” The shuttles wouldn’t replace public transportation, but augment it, with Beverly Hills Mayor John Mirisch describing how autonomous vehicles would solve the “first/last mile” problem for residents using the city’s future subway — the Purple Line Extension — to get in and out of the city.
“This is a game-changer for Beverly Hills and, we hope, for the region,” said Mirisch in a press release. “Beverly Hills is the perfect community to take the lead to make this technology a reality. It is now both feasible and safe for autonomous cars to be on the road.”•
When Jonathan Franzen, who’s not going to stop, met President Obama, he informed our Commander in Chief that Richard Nixon was the “last Liberal President.” Obama responded, “Yeah, the only problem was he was crazy.” Largely true on both counts.
I’ve mentioned before that Nixon, who succeeded LBJ and his “War on Poverty,” attempted to establish Guaranteed Basic Income in the U.S., which came awfully close to happening. For a number of reasons, technological and political among them, the idea probably has more currency among Liberal, Conservative and Libertarian think tanks than anytime since, though those vying for higher office, Bernie Sanders included, dare not speak its name. If GBI resulted in a total dismantling of all other social safety nets, it could do more harm than good. If done correctly, however, it could help working-class people survive the hollowing out of the middle.
At Alternet, Rutger Bregman recalls Nixon’s effort. An excerpt:
Few people today are aware that the United States was just a hair’s breadth from realizing a social safety net at least as extensive as those in most western European countries. When President Lyndon B. Johnson declared his “War on Poverty” in 1964, Democrats and Republicans alike rallied behind fundamental welfare reforms.
First, however, some trial runs were needed. Tens of millions of dollars were budgeted to provide a basic income for more than 8,500 Americans in New Jersey, Pennsylvania, Iowa, North Carolina, Indiana, Seattle and Denver in what were also the first-ever large-scale social experiments to distinguish experimental and control groups. The researchers wanted answers to three questions: (1) Would people work significantly less if they receive a guaranteed income? (2) Would the program be too expensive? (3) Would it prove politically unfeasible?
The answers were no, no and yes.
Declines in working hours were limited across the board. “[The] declines in hours of paid work were undoubtedly compensated in part by other useful activities, such as search for better jobs or work in the home,” noted the Seattle experiment’s concluding report. For example, one mother who had dropped out of high school worked less in order to earn a degree in psychology and get a job as a researcher. Another woman took acting classes; her husband began composing music. “We’re now self-sufficient, income-earning artists,” she told the researchers. Among youth included in the experiment, almost all the hours not spent on paid work went into more education. Among the New Jersey subjects, the rate of high school graduations rose 30 percent.
And thus, in August 1969, President Nixon presented a bill providing for a modest basic income, calling it “the most significant piece of social legislation in our nation’s history.” A White House poll found 90 percent of all newspapers enthusiastically receptive to the plan. The National Council of Churches was in favor, and so were the labor unions and even the corporate sector (see Brian Steensland’s book The Failed Welfare Revolution, page 69). At the White House, a telegram arrived declaring, “Two upper middle class Republicans who will pay for the program say bravo.” Pundits were even going around quoting Victor Hugo—“Nothing is stronger than an idea whose time has come.”
It seemed that the time for a basic income had well and truly arrived.•
Chemtrails are bullshit, of course, but in 1950s America, there was definitely something in the air.
That was especially true if you lived in desert areas in which the U.S. government was conducting A-bomb tests. Windstorms at just the wrong moment could cause havoc, blowing radioactive mist into unsuspecting nearby communities.
One such bomb, nicknamed “Harry,” was detonated in Nevada on May 19, 1953, with gusts carrying its fallout 135 miles, running headlong into 5,000 people, including those on the set of Howard Hughes’ film The Conqueror. Legend has it that a cancer cluster among cast and crew was the result, although that seems more urban legend than medical fact. Nonetheless, some citizens were outraged by the recklessness, and the bomb was rechristened “Dirty Harry” in retrospect.
A month earlier, the Atomic Energy Commission had carried out another detonation in the same state. Just after the explosion, a pair of radio-controlled planes carrying mice and monkeys were flown through the radioactive cloud. The strange experiment, conducted in the name of biomedical research, was recorded in an article in the April 6, 1953 Brooklyn Daily Eagle.
Part of the story about companies, even individuals, being able to compete with states on large-scale projects–the Space Race 2.0, the eradication of infectious diseases, etc.–is the progress of our tools. Their increase in power and decrease in cost have leveled the playing field to some degree. But the world is still not flat. It requires billions to get into these games.
The other side of the narrative is that soaring wealth inequality, a sign of capitalism run amok, has enabled these nations of one, something more independent and rogue-ish than oligarchs. The new arrangement has allowed super-rich kids to purchase Vancouver apartments as if they were postcards. More importantly, it’s allowed the few to acquire outsize influence.
That’s not to deride the more progressive megarich among us, the exceptions to the rule, even if a system that allows such unevenness is largely bad. Bill Gates, in his avuncular, sweater-clad 2.0 iteration, has done amazing work for the world. Elon Musk has directed his sizable ego mostly in the direction of enabling a cleaner Earth and a multi-planet species. Russian billionaire Yuri Milner’s micro-mission to Alpha Centauri is laudable. You can argue if some of these projects are the best immediate uses of resources, but there is a sense of generosity among these moonshots.
Nicolas Berggruen, a German-American billionaire, has been both a conspicuous consumer and philanthropist. He’s used some of his bankroll to establish a lavish Davos of sorts, an interdisciplinary Los Angeles parlor for discussion about important topics, including, amusingly, equality. The opening of Alessandra Stanley’s well-written profile of the think-tank tinkerer:
In a wood-paneled conference room in Stanford, Calif., a score of scholars, many of them eminent and some from as far away as Johannesburg and Beijing, gathered last month to compare philosophical notions of hierarchy and equality.
The gathering itself had no overt hierarchy, though one participant seemed a little more equal than the others. When Nicolas Berggruen spoke, no one interrupted. Only he occasionally checked his phone. And at dinner, the guests received fruit tarts for dessert — except for Mr. Berggruen, who was served chocolate mousse.
Mr. Berggruen, 54, is an investor and art collector who was once known as the “homeless billionaire” because he lived in itinerant luxury in five-star hotels. Now he is grounded in Los Angeles where he presides over a bespoke think tank, the Berggruen Institute.
The institute is a striking example of how wealthy philanthropists are reshaping the landscape with smaller versions of the foundations established by Bill Gates and George Soros. Sean Parker, one of the entrepreneurs behind Napster and Facebook, has a research institute, The Parker Foundation, which this month pledged $250 million for cancer immunotherapy. He is also a co-founder of the Economic Innovation Group, which labels itself an “ideas laboratory.” Tom Steyer, who made his fortune as a hedge fund manager in California, has several environmental nonprofit groups, and last year created the Fair Shake Commission to redress economic inequality.
“There is a generation of new donors who have huge assets, and their own ideas, and think traditional think tanks are old-fashioned,” said James G. McGann, the director of the Think Tanks and Civil Societies Program at the University of Pennsylvania — a think tank that thinks about think tanks. In a money-fueled culture where tweets, not position papers, shape the national conversation, these kinds of philosopher-kingpins “are likely to be more influential than we are,” Mr. McGann said.•
Superintelligent machines may be the death of us, but far-less-smart AI can also lead to disasters, even cascading ones. In a Japan Times op-ed, philosopher Peter Singer thinks that AlphaGo’s stunning recent victory and the progress of driverless cars should spur an earnest discussion of the moral code of microchips and sensors and such. “It is not too soon to ask whether we can program a machine to act ethically,” he writes.
We shouldn’t set rules governing AI that will bind people deep into a future that will present different realities than our own, but laying a foundation for constantly assessing and reassessing the prowess of machine intelligence is vital.
Eric Schmidt, executive chairman of Google’s parent company, the owner of AlphaGo, is enthusiastic about what artificial intelligence means for humanity. Speaking before the match between Lee and AlphaGo, he said that humanity would be the winner, whatever the outcome, because advances in AI will make every human being smarter, more capable and “just better human beings.”
Will it? Around the same time as AlphaGo’s triumph, Microsoft’s “chatbot” — software named Taylor that was designed to respond to messages from people aged 18 to 24 — was having a chastening experience. Tay, as she called herself, was supposed to be able to learn from the messages she received and gradually improve her ability to conduct engaging conversations. Unfortunately, within 24 hours, people were teaching Tay racist and sexist ideas. When she started saying positive things about Hitler, Microsoft turned her off and deleted her most offensive messages.
I do not know whether the people who turned Tay into a racist were themselves racists, or just thought it would be fun to undermine Microsoft’s new toy. Either way, the juxtaposition of AlphaGo’s victory and Taylor’s defeat serves as a warning. It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.
Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, argues in his book “Superintelligence” that it will not always be as easy to turn off an intelligent machine as it was to turn off Tay.•
As a visionary businessperson, Elon Musk is complicated. He heads Tesla, the EV maker that has repeatedly over-promised on price and production. Of course, he’s also the aspiring spaceman who stuck the landing of a reusable rocket on the deck of a drone ship. So his bold proclamations regarding the near-term delivery of the first fully autonomous car–he promises we’ll be able to summon our driverless Model 3 from across the country by 2018–should make us wary. Or not.
Most driverless systems use LIDAR, which has limitations in inclement conditions, so a true robocar won’t be a self-contained thing. It will need to be informed by regularly updated mapping, which won’t be perfect and whose accuracy will constantly degrade, and it will also need to be in communication with other automobiles and assorted smart gadgets. Google, with its gazillion Android devices fanned out across the country, would seem to have an advantage born of synergy, but Musk believes his AI is superior and far cheaper. I seriously doubt anyone will have such a fully realized system ready for the market in 20 months, though further significant integration of driverless capabilities and Atlantic-to-Pacific demos wouldn’t shock me. Of course, if you had told me a couple of years ago that a reusable rocket would gently lower onto a drone ship in 2016, I would have bet the under on that one also.
Two excerpts follow, one from a New Yorker piece by Levi Tillemann and Colin McCormick, which explains how Tesla may have quietly developed a better strategy, and the other a segment of an Economist article which offers a crisp explanation of the significant hurdles that must be cleared for robocars, some of which would appear to apply even to Musk’s LIDAR-less plan.
From the New Yorker:
In October of 2014, Tesla began offering its Model S and X customers a “technology package,” which included this sensor array and cost about four thousand dollars. The equipment allowed the company to record drivers’ movements, unless they opted out of the tracking, and—most important—to start amassing an enormous trove of data. A year later, it remotely activated its “Autopilot” software on tens of thousands of these cars. Suddenly, drivers had the ability to engage some limited autonomous functions, including dynamic cruise control (pegging your car’s speed to the speed of the car in front of you), course alignment inside highway lanes, and on-command lane-changing. Some drivers were unnerved by the Autopilot functions, and cars occasionally swerved or drove off the road. But many of Tesla’s tech-tolerant early adopters relished the new features.
Autopilot also gave Tesla access to tens of thousands of “expert trainers,” as Musk called them. When these de-facto test drivers overrode the system, Tesla’s sensors and learning algorithms took special note. The company has used its growing data set to continually improve the autonomous-driving experience for Tesla’s entire fleet. By late 2015, Tesla was gathering about a million miles’ worth of driving data every day.
To understand how commanding a lead this gives the company in the race for real-world autonomous-driving data, consider the comparatively small number of lidar-based autonomous vehicles—all of them test cars—that some of its competitors have on the road. California, where much of the research on self-driving cars is taking place, requires companies to register their autonomous vehicles, so we know that currently Nissan has just four such cars on the road in the state, while Mercedes has five. Google has almost eighty registered in the state (though not all of them are in service); it is also doing limited testing in Arizona, Texas, and Washington. Ford announced earlier this year that it was adding twenty new cars to its test fleet, giving it thirty vehicles on the road in Arizona, California, and Michigan, which it says is the largest fleet of any traditional automaker. By comparison, Tesla has sold roughly thirty-five thousand cars in the U.S. since October of 2014. The quality of the data that these vehicles are producing is unlikely to be as rich as the information the lidar cars are providing, but Tesla’s vastly superior fleet size means that its autonomous cars can rack up as much driving experience every day or two as Google’s cars have cumulatively.•
From the Economist:
Some car firms, including Nissan, Ford, Kia and Tesla, think self-driving technology will be ready by 2020. Volvo plans to offer fully autonomous cars to 100 drivers as early as next year. All this increases the pressure to map the world in high definition before cars begin to drive themselves out of showrooms. HERE has several hundred vehicles like George mapping millions of kilometres of roads annually in 32 countries. TomTom has 70 on motorways and major roads in Europe and North America. Zenrin, a Japanese mapping firm partly owned by Toyota, is particularly active in Asia.
Analysing and processing data from so many vehicles is one of the biggest challenges. HERE originally had people inspecting the raw LIDAR data and turning it into a digital model using editing software—rather like “Minecraft for maps”, says Mr Ristevski. But manually extracting the data was painfully slow, and the company has now developed machine-learning algorithms to find automatically such things as lane markings and the edges of pavements. HERE’s AI systems can identify road signs and traffic lights from George’s still photos. Humans then modify and tweak the results, and check for errors.
Yet George’s data begin to age as soon as they are collected. Subsequent construction, roadworks or altered speed limits could lead to a self-driving car working from a dangerously outdated map. Maps will never be completely up-to-date, admits Mr Ristevski. “Our goal will be to keep the map as fresh and accurate as possible but vehicle sensors must be robust enough to handle discrepancies.”
Mapping vehicles are sent back to big cities like San Francisco regularly, but the vast majority of the roads they capture might be revisited annually, at best. A partial solution is to use what Mr Ristevski euphemistically calls “probe data”: the digital traces of millions of people using smartphones and connected in-car systems for navigation. HERE receives around 2 billion individual pieces of such data daily, comprising a car’s location, speed and heading, some of it from Windows devices (a hangover from when HERE was owned by Nokia, now part of Microsoft).
These data are aggregated and anonymised to preserve privacy, and allow HERE quickly to detect major changes such as road closures. As cars become more sophisticated, these data should become richer. Ultimately, reckons Mr Ristevski, self-driving cars will help to maintain their own maps.•
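The closure-detection idea described above can be sketched simply: pool many vehicles’ speed reports by road segment, then flag segments where observed traffic has effectively stopped relative to a known free-flow baseline. The sketch below is a minimal illustration of that aggregation logic; the function name, thresholds, and data shapes are my assumptions, not HERE’s actual pipeline.

```python
from collections import defaultdict
from statistics import median

def flag_anomalies(probes, baseline_speed, min_probes=20, ratio=0.25):
    """Flag road segments that look closed or badly obstructed.

    probes: iterable of (segment_id, speed_kph) reports from many vehicles.
    baseline_speed: dict mapping segment_id -> typical free-flow speed (kph).
    A segment is flagged when enough probes exist and their median speed
    falls below `ratio` times the baseline -- a possible closure.
    """
    by_segment = defaultdict(list)
    for segment, speed in probes:
        by_segment[segment].append(speed)

    flagged = []
    for segment, speeds in by_segment.items():
        enough_data = len(speeds) >= min_probes
        stalled = median(speeds) < ratio * baseline_speed.get(segment, float("inf"))
        if enough_data and stalled:
            flagged.append(segment)
    return flagged
```

The `min_probes` floor matters: a single stopped car proves nothing, but dozens of near-zero reports on one segment are a strong signal, which is roughly why a firehose of 2 billion daily probe points is useful even though each point is trivial on its own.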
Computer matchmaking dates back long before “Language Style Matching,” all the way to the 1950s, in Canada at least, when singles experimented with IBM punch-card yenta-ing. It was, as you might guess, rudimentary, as hook-ups encouraged by the machines (based largely on height and age and hair color) still had to be arranged via rotary phone. By 1966, Gene Shalit and his mustache headed to Harvard to report for Look magazine on a computer-enabled, pre-Tinder bacchanal sweeping elite American colleges. The opening of Shalit’s Borscht Belt-New Journalism mashup for the long-defunct title:
Out of computers, faster than the eye can blink, fly letters stacked with names of college guys and girls–taped, scanned, checked and matched. Into the mails speed the compatible pairs, into P.O. boxes at schools across the land. Eager boys grab their phones… anxious coeds wait in dorms … a thousand burrrrrrrings jar the air . . . snow-job conversations start, and yeses are exchanged: A nationwild dating spree is on. Thousands of boys and girls who’ve never met plan weekends together, for now that punch-card dating’s here, can flings be far behind? And oh, it’s so right, baby. The Great God Computer has sent the word. Fate. Destiny. Go-go-go. Call it dating, call it mating, it flashed out of the minds of Jeff Tarr and Vaughn Morrill, Harvard undergraduates who plotted Operation Match, the dig-it dating system that ties up college couples with magnetic tape. The match mystique is here: In just nine months, some 100,000 collegians paid more than $300,000 to Match (and to its MIT foe, Contact) for the names of at least five compatible dates. Does it work? Nikos Tsinikas, a Yale senior, spent a New Haven weekend with his computer-Matched date, Nancy Schreiber, an English major at Smith. Result, as long date’s journey brightened into night: a bull’s-eye for cupid’s computer.•
Holy fuck, someone should have punched Gene Shalit right in the handlebar. Interesting though, despite the woeful writing, is that proto-Zuckerbergs were climbing the Ivy 50 years ago, trying to transform strange faces into friends. In a decade, “smart” romance had been monetized and become somewhat more sophisticated, but Moore’s Law clearly had a ways to go before it could truly be brought to bear on mating.
While today’s dating apps are much more algorithmically adept, it still feels like we’re in the early stages of machine-assisted love. Will the continuous development of language analysis and AI “eyes” be able to identify our cues even before we do? What does love at first sight mean in an age of visual-recognition systems?
In “When Dating Algorithms Can Watch You Blush,” Julia M. Klein’s smart Nautilus piece, the reporter looks at the next-level work of Northwestern psychologist Eli Finkel, who acknowledges it’s unlikely there is “ever going to be an algorithm that will find your soul mate.” He’s just hoping for significant if incremental improvement. The opening:
“Let’s get the basics over with,” W said to M when they met on a 4-minute speed date. “What are you studying?”
“Uh, I’m studying econ and poli sci. How about you?”
“I’m journalism and English literature.”
They talked about where they were from (she hailed from Iowa, he from New Jersey), life in a small town, and the transition to college. An eavesdropper would have been hard-pressed to detect a romantic spark in this banal back-and-forth. Yet when researchers, who had recorded the exchange, ran it through a language-analysis program, it revealed what W and M confirmed to be true: They were hitting it off.
The researchers weren’t interested in what the daters discussed, or even whether they seemed to share personality traits, backgrounds, or interests. Instead, they were searching for subtle similarities in how they structured their sentences—specifically, how often they used function words such as it, that, but, about, never, and lots. This synchronicity, known as “language style matching,” or LSM, happens unconsciously. But the researchers found it to be a good predictor of mutual affection: An analysis of conversations involving 80 speed daters showed that couples with high LSM scores were three times as likely as those with low scores to want to see each other again.
It’s not just speech patterns that can encode chemistry. Other studies suggest that when two people unknowingly coordinate nonverbal cues, such as hand gestures, eye gaze, and posture, they’re more apt to like and understand each other. These findings raise a tantalizing question: Could a computer know whom we’re falling for before we do?•
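The language-style-matching metric itself is straightforward to sketch: for each function-word category, compare the proportion of each speaker’s words falling in that category, then average the per-category similarities. Below is a minimal Python illustration; the real studies use the full LIWC function-word lists, so the tiny category vocabularies and the smoothing constant here are my assumptions.

```python
import re

# Simplified function-word categories -- a tiny, illustrative subset of
# the LIWC-style lists used in the actual research.
CATEGORIES = {
    "pronouns":     {"i", "you", "it", "we", "they", "that", "this"},
    "articles":     {"a", "an", "the"},
    "conjunctions": {"and", "but", "or", "because"},
    "prepositions": {"about", "in", "on", "of", "to", "with"},
    "quantifiers":  {"lots", "few", "much", "many", "never", "all"},
}

def category_rates(text):
    """Proportion of a text's words that fall in each category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

def lsm_score(text_a, text_b):
    """Average per-category similarity: 1 - |p1 - p2| / (p1 + p2 + eps)."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
              for c in CATEGORIES]
    return sum(scores) / len(scores)
```

Two speakers who lean on the same connective tissue (“it,” “that,” “about”) at similar rates score near 1; mismatched styles pull the score toward 0. Note the metric deliberately ignores content words, which is why the banal exchange between W and M could still register a strong match.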
Robots have been able to make hamburgers for more than 50 years, but it was never before financially sensible for franchises to transition. When such work was greasy kid stuff, the positions paid little and had a high churn rate. They were beginner jobs.
Globalization, technological progress and other factors, however, have hollowed out the middle in America, so calls for minimum-wage increases in such service positions, now often treated as careers, were a natural. Equally expected is the response by fast-food company CEOs who want to keep the bottom line lean: We’ll automate.
Perhaps there’s something worse than the fear that we’ll be reduced to McJobs: What if even such low-paying positions are automated?
The Fight for $15 campaign plans to target McDonald’s on April 14 as part of a new pre-Tax Day tradition, led by the Service Employees International Union, or SEIU.
Chicago is one of 300 cities worldwide where strikes and protests are scheduled. SEIU has spent $70 million on its Fight for $15 campaign. The union’s Local 73 represents more than 28,000 government workers in Illinois and Indiana.
Protestors may want to stop by the McDonald’s at Adams and Wells to meet their replacement – an automated McCafé kiosk.
The store, which is anticipating Chicago’s minimum-wage increase to $13 an hour by 2019, is testing out coffee kiosks in the restaurant instead of having employees serve it. The kiosk features a touch-pad for ordering and paying. The screen also prompts customers to answer questions about their kiosk experience, giving the impression this is something that could be adopted as an alternative to hiring. This kind of automation, which replaces a human employee with technology, is one of the unintended consequences of Chicago’s minimum-wage increase.
It may not just be a coffee machine either. Other McDonald’s locations have used self-service kiosks with touch-screens for paying. And while self-serve kiosks don’t seem too unusual, San Francisco-based Momentum Machines has created a robotic hamburger-making machine the company claims can produce 400 high-quality burgers in an hour with minimal human supervision.
A robot making a hamburger sounds a bit absurd, but the desire to circumvent artificially set wages certainly isn’t.•
It sometimes seems that Monica Lewinsky was the last American with a sense of shame. It did not benefit her.
Celebutantes with sex tapes have since sold their boldface indiscretions for countless millions and covered Vogue. Most of them have a short shelf life, falling not long after rising, but the Kardashians have taken Andy Warhol’s fifteen minutes of fame and broken the hands off the clock, becoming our other First Family. They are not ashamed.
Lewinsky, shaped by the pre-Internet Era, was not proud of her acts with the leader of the free world. She has never fully escaped the dark shadow of infamy, having spent her formative years mocked for her morals and weight, for being the girl in the beret used like a White House humidor. Just imagine the dumbest thing you did when you were 22 put on blast for the entire world. It’s unthinkable for anyone not raised on selfies and social media to survive such a thing, to flourish. Before everyone lived in public, privacy was prized, reputations mattered. It’s better in the big picture that the next generations don’t have to be ruined anymore by trolls. They just own it, whatever “it” may be. Something, though, has been lost in translation.
In a smart Guardian piece, Jon Ronson profiles the former intern in middle age. The opening:
One night in London in 2005, a woman said a surprisingly eerie thing to Monica Lewinsky. Lewinsky had moved from New York a few days earlier to take a master’s in social psychology at the London School of Economics. On her first weekend, she went drinking with a woman she thought might become a friend. “But she suddenly said she knew really high-powered people,” Lewinsky says, “and I shouldn’t have come to London because I wasn’t wanted there.”
Lewinsky is telling me this story at a table in a quiet corner of a West Hollywood hotel. We had to pay extra for the table to be curtained off. It was my idea. If we hadn’t done it, passersby would probably have stared. Lewinsky would have noticed the stares and would have clammed up a little. “I’m hyper-aware of how other people may be perceiving me,” she says.
She’s tired and dressed in black. She just flew in from India and hasn’t had breakfast yet. We’ll talk for two hours, after which there’s only time for a quick teacake before she hurries to the airport to give a talk in Phoenix, Arizona, and spend the weekend with her father.
“Why did that woman in London say that to you?” I ask her.
“Oh, she’d had too much to drink,” Lewinsky replies. “It’s such a shame, because 99.9% of my experiences in England were positive, and she was an anomaly. I loved being in London, then and now. I was welcomed and accepted at LSE, by my professors and classmates. But when something hits a core trauma – I actually got really retriggered. After that I couldn’t go more than three days without thinking about the FBI sting that happened in ’98.”
Seven years earlier, on 16 January 1998, Lewinsky’s friend – an older work colleague called Linda Tripp – invited her for lunch at a mall in Washington DC. Lewinsky was 25. They’d been working together at the Pentagon for nearly two years, during which time Lewinsky had confided in her that she’d had an affair with President Bill Clinton. Unbeknown to Lewinsky, Tripp had been secretly recording their telephone conversations – more than 20 hours of them. The lunch was a trap.•
Working with Artificial Intelligence isn’t for everyone, but it’s for far more people now than ever before. In a Bloomberg Technology piece about a small crew of programmers who created an application to read Japanese handwriting even though they themselves can’t, Pavel Alpeyev points out that AI being “sold like a utility” has democratized such work, though I would add that it still helps to be really, really smart.
Real-world artificial-intelligence applications are popping up in unexpected places—and much sooner than you might think.
While winning a game of Go might be impressive, machine intelligence is also evolving to the point where it can be used by more people to do more things. That’s how four engineers with almost zero knowledge of Japanese were able to create software, in just a few months, that can decipher handwriting in the language.
The programmers at Reactive Inc. came up with an application that recognizes scrawled-out Japanese with 98.66 percent accuracy. The 18-month-old startup in Tokyo is part of a growing global community of coders and investors who are harnessing the power of neural networks to put AI to far more practical purposes than answering trivia or winning board games.
“Just a few years ago, you had to be a genius to do this,” said David Malkin, who has a Ph.D. in machine learning but can barely string two Japanese sentences together. “Now you can be a reasonably smart guy and make useful stuff. Going forward, it will be more about using imagination to apply this to real business situations.”•
Earlier today, I posted a piece about Russian entrepreneur Yuri Milner, who’s bankrolling an attempt at an unpeopled reconnaissance mission to Alpha Centauri. A techno-optimist the way most high flyers in Silicon Valley are, Milner believes our increasing connectedness, including the Internet of Things, will bring about a more prosperous world–even a “global brain,” which will unite us all and serve as a “global central nervous system.” That new wealth may be distributed very unevenly, but the wholesale connectivity will likely arrive sooner than later. It will be both boon and bane.
From a Reuters report Chrystia Freeland filed in 2011:
Milner almost perfectly represents a global technology elite whose frame of reference is planet Earth. He mostly lives in Moscow, but has recently purchased a palatial home in Silicon Valley. He addressed the Ukrainian conference by video link from Singapore.
From that vantage point, the most pressing issue in the world today isn’t recession and political paralysis in the West, or even the rapid development and political transformation in emerging markets, it is the technology revolution, which, in Mr. Milner’s view, is only getting started. Here are the changes he thinks are most significant:
• The Internet revolution is the fastest economic change humans have experienced, and it is accelerating. Mr. Milner said two billion people are online today. Over the next decade, he predicts that that number will more than double.
• The Internet is not just about connecting people, it is also about connecting machines, a phenomenon Mr. Milner dubbed “the Internet of things.” Mr. Milner said that five billion devices are connected today. By 2020, he thinks more than 20 billion will be.
• More information is being created than ever before. Mr. Milner asserted that as much information was created every 48 hours in 2010 as was created between the dawn of time and 2003. By 2020, that same volume of information will be generated every 60 minutes.
• People are sharing information ever more frequently. The pieces of content shared on Facebook have increased from 140 million in 2009 to 4 billion in 2011. We are even sending more e-mails: 50 billion were sent in 2006, versus 300 billion in 2010.
• The result, according to Mr. Milner, is the dominance of Internet platforms relative to traditional media. “The largest newspaper in the United States is only reaching 1 percent of the population,” he said. “That compares to Internet media, which is used by 25 percent of the population daily and growing.”
• Internet businesses are much more efficient than brick-and-mortar companies. This was one of Mr. Milner’s most striking observations, and a clue to the paradox of how we find ourselves simultaneously living in a time of what Mr. Milner views as unprecedented technological innovation but also high unemployment in the developed West. As Mr. Milner said, “Big Internet companies on average are capable of generating revenue of $1 million per employee, and that compares to 10 to 20 percent of that which is normally generated by traditional offline businesses of comparable size.” As an illustration, Mr. Milner cited Facebook, where, he said, each single engineer supports one million users.
• Artificial intelligence is part of our daily lives, and its power is growing. Mr. Milner cited everyday examples like Amazon.com’s recommendation of books based on ones we have already read and Google’s constantly improving search algorithm.
• Finally — and Mr. Milner admitted this was “a bit of a futuristic picture” — he predicted “the emergence of the global brain, which consists of all the humans connected to each other and to the machine and interacting in a very unique and profound way, creating an intelligence that does not belong to any single human being or computer.”•
Online anarchy did not begin with the World Wide Web. More than a decade before Sir Timothy Berners-Lee’s cat-meme-and-fetish-porn-enabling gift, the Internet got its first real taste of widespread hacking. That was in 1983.
We reflect with respect on the phone phreaks who preceded these Reagan Era interlopers. John Draper (aka “Captain Crunch”), the young Steve Jobs and Steve Wozniak, and other blue-box builders are credited with giving birth to a personal-computing culture that’s transformed the world. Their Internet progeny were not held in the same esteem, despite operating in an era before hacking laws existed.
In the early 1980s, the 414s and the Inner Circle, groups of American teen hackers, were the subject of mass arrests for breaching government and corporate accounts. They mostly did so just to prove they could, apparently without any malicious intent, but that hardly mattered when their garages and bedrooms were stormed by Feds.
MILWAUKEE, March 16— Two young men who broke into large computers in the United States and Canada last June, simply to prove they could do it, have pleaded guilty to two misdemeanor charges, according to documents filed today in Federal District Court.
Eric Klumb, an Assistant United States Attorney, said it was the first such computer crime case in which the motive was not financial gain.
Gerald Wondra of West Allis, Wis., and Timothy D. Winslow of Milwaukee, both 21 years old, agreed to plead guilty to two counts each of making harassing telephone calls. Both are members of a loosely knit group of computer enthusiasts called the “414’s,” after Milwaukee’s telephone area code. …
By charging the two, Mr. Klumb said he thought other computer enthusiasts might be deterred from similar intrusions.•
In an excellent Paleofuture post, Matt Novak recalls the watershed event and analyzes its fallout. An excerpt about one of the arrestees, Bill Landreth, the son of hippies whose dad was a financially struggling grow-lamp inventor:
When I met Bill Landreth at a Starbucks in Santa Monica, he was sitting quietly at a table drinking coffee with two bags on the seat across from him, and a bag of blankets in the corner. A pipe made out of an apple and filled with what I assumed was medical marijuana sat on the table next to his coffee and Samsung tablet. A passing cop glanced at the spread but didn’t raise an eyebrow.
Arranging our meeting was tricky, because Bill isn’t sure where he’ll be sleeping from night to night. Now 52, with a slight goatee and a tousle of wavy hair that nearly reaches his shoulders, Bill has been living on the streets for 30 years. But if it weren’t for his receding hairline and a certain grayness to his gaze, he’d probably pass for a decade younger. There’s something assertive yet firmly guarded about the way he speaks. It’s as though Bill’s a man who’s not afraid to say what he thinks, but still worries about saying something out of line in front of me.
In our conversation, he was calm, affable, and clearly intelligent, and almost immediately began rattling off computers and programming languages of which I have little to no understanding.
Bill got his first computer in 1980, he tells me. It was a TRS-80 from RadioShack. He was 14 or 15, and explains that he planned to get the version with 8K of memory using $500 he had saved. His dad offered to pitch in another $500, and he got the 16K version with a cassette tape drive for storage. He also picked up a 300 baud modem.
Bill was a quick learner, and developed a knack for the BASIC programming language. From there he’d learn other languages, and his desire to explore the world of computing became overpowering.•