Excerpts

You are currently browsing the archive for the Excerpts category.

I’m never bored. I was sometimes when I was a child, before I knew what to do with the time, but never as an adult. I just want more time to think and read, but I will never get enough. No one does.

In his latest Financial Times column, Douglas Coupland writes interestingly about the modern strain of boredom. An excerpt:

I think boredom has to be some sort of natural selection process. If it weren’t for boredom, our ancestors would have spent all their days in their caves, with no hunting or gathering, and then no wheels or fire or mathematics or HBO. …

Part of our new boredom is that your brain doesn’t have any downtime. Even the smallest amount of time not being engaged creates a spooky sensation that maybe you’re on the wrong track. Reboot your computer and sit there waiting for it to do its thing and within 17 seconds you experience a small existential implosion when you remember that 15 years ago life was nothing but that kind of moment. Gosh, maybe I’ll read a book. Or go for a walk.

Sorry.

Probably not going to happen. Hey, is that the new trailer for “Ex Machina”?•


Having just profiled anchorman(nequin) Don Lemon in the pages of GQ, Taffy Brodesser-Akner turns her attention to Kris Jenner for the New York Times Magazine. Her work is starting to form a through line.

These subjects are shallow people consumed by what they see in the mirror, but they also reflect, if uncomfortably, who we are and the period we live in. Jenner seems equal parts producer and pimp, but pimpin’ ain’t easy. Without a ton of obvious talent to work with but also unencumbered by a sense of shame, she’s molded her brood into the apotheosis of Andy Warhol’s prediction (warning?) that in the future everyone would be famous for 15 minutes, and she’s made a mockery of that shelf life. 

In that sense the Kardashian-Jenner family members are the most interesting celebrities of our era, because they’re more emblematic of it than anyone else, a harsh, anxious age marked by decentralized media, branding, exhibitionism, narcissism as entrepreneurialism, faces filled and filtered, and banal self-promotion. It’s a moment when the spoils go to the most aggressively remorseless. Maybe someday their stars fall and selfies fade, but the clan has already left a mark on its time–our time. That’s what great performers do.

An excerpt from the largely admiring piece:

There are still people who dismiss Kris Jenner, 59, and her family — Kourtney, Kim and Khloé Kardashian, all in their 30s; her son, Rob Kardashian, 28; and Kendall and Kylie Jenner, 19 and 17 — as “famous for being famous,” a silly reality-show family creating a contrived spectacle. But we have reached the point at which the Jenners and the Kardashians are not famous for being famous: They are famous for the industry that they’ve created, the Kardashian/Jenner megacomplex, which has not just invaded the culture but metastasized into it, with the family members emerging as legitimate businesspeople and Kris the mother-leader of them all.

She is an executive producer of “Keeping Up With the Kardashians” and its summer spinoffs. She also manages the careers of all six of her children, as well as her own. Without Kris, Kim might not have pulled in a reported $28 million in 2014. Kendall wouldn’t necessarily be an in-demand model, walking runways for Chanel and Marc Jacobs and appearing on the covers of Allure and Harper’s Bazaar. There would most likely be no Kim Kardashian: Hollywood, a choose your own adventure (presuming it’s an adventure Kim Kardashian would go on) game app, starring Kim, that brought in many millions last year, or T-Mobile commercial, or book of selfies (“Selfish”), released this month. Kourtney and Khloé and Kim might not have three retail stores, named Dash, in Los Angeles, New York and Miami; a hair-and-makeup line, Kardashian Beauty; a bronzer line, Kardashian Glow; and Kardashian Kids, a children’s clothing line sold at Babies “R” Us and Nordstrom. Kendall and Kylie might not have licensing deals with PacSun, Steve Madden, Topshop and Sugar Factory, where they each have signature lollipops and several contractual agreements to appear at the candy stores. Rob, the lone brother, would probably not have a sock company that features socks that say things like “LOVE HURTS” and “YOLO” or sell adult onesies at places like Macy’s. There would not be seven perfumes in Kim’s name, or Khloé’s perfume with Lamar Odom, Unbreakable, which is still available, though their marriage has ended. There would be no endorsement deals, either: things like OPI nail polish and a “waist trainer” that Khloé and Kim model on their Instagram account. It is entirely possible that without Kris Jenner and all her wisdom over the years, all the attention she has garnered for her family, 16.9 million people would not have tuned in on April 24 to watch her ex-husband Bruce tell Diane Sawyer that he is transgender. 

The thing is, no one in her family knew what they were doing until Kris took charge.•


Hmm, I don’t agree with Vivek Wadhwa that there’ll be no role for humans in labor in the future, but it will take far less than a total foundering of our system for us to find ourselves with an unacceptable level of technological unemployment, so I certainly concur with him in the bigger picture. From Cole Stangler at International Business Times:

The breadth and speed of recent innovations — think robotics, synthetic biology, nanotechnology and 3D printers — coupled with relatively high unemployment, have fueled a debate over whether humans will be permanently erased from the labor force. It’s a potentially dystopian landscape that would have future workers longing for the unemployment rate seen in Friday’s job figures.

“I’m really worried about this,” says Vivek Wadhwa, author and fellow at Stanford University’s Rock Center for Corporate Governance. “In the long term, I see no role for human beings.”

Wadhwa recently oversaw academic programs at Singularity University, a think tank-like group in Silicon Valley whose goal is to “educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”

He says self-driving cars and trains will replace workers in the transportation industry; artificial intelligence, sensors and smartphones will eliminate the need for most doctors, nurses and surgeons; and algorithms will displace most human writers. (“I don’t mean to insult your profession,” he says.)

All of this, Wadhwa says, will happen in the next five to 15 years.•


I’m nearly done with Martin Ford’s Rise of the Robots, an excellent book I’ll have more to say about soon. The author sat for an episode of the Review the Future podcast and spoke on automation and wealth inequality.

Ford agrees with something I mentioned in response to Thomas Piketty’s suggestion that we counteract our 1% world with an investment in education (and re-education). While that would be great, I think it likely won’t be nearly enough to solve income disparity or technological unemployment.

Two exchanges from the podcast follow.

_____________________________

Jon Perry: 

This topic often is scoffed at by economists, which is something you mentioned in your first book The Lights in the Tunnel, that it was almost unthinkable to a lot of people. I’m curious, as that book came out in 2009 and now six years have gone by, how has the conversation around this issue changed since then?

Martin Ford:

Well, I think it’s definitely changed and it’s become a lot more visible. As you say, when I wrote that book, I mentioned that it was almost unthinkable in terms of the way economists approached it, and that’s less true today. There are definitely some economists out there, at least a few, that are now talking seriously about this. So, it’s really been quite a dramatic change as these technologies and the implications of them have become more visible.

Having said that, I still think this is very much an idea that is outside the mainstream, and a lot of the economists that do talk about it tend to take what I would call a very conservative tack, which is they still tend to believe that the solution to all this is more education and training–all we have to do is train people so that they can climb the skills ladder and keep ahead of the machines. I think that’s an idea that pretty much has run out of steam, and we probably need to look at more radical proposals going forward. But definitely it is a much more visible topic now than it was back in 2009.

_____________________________

Jon Perry: 

Now, we’ve had information technology for not that long, but we’ve had it for a little while now, and there are certainly some troubling economic trends that we see–stagnating wages, rising inequality, for example. To what extent can we say that, say, information technology or even technology in general is at least partially the cause of these economic trends? And if that’s the thesis here, what would be the best way to actually try to tease that apart and measure technology’s impact on the labor force going forward?

Martin Ford:

It’s kind of a challenging problem. I believe obviously very strongly that information technology has been an important part of it. I would not argue that it’s all of it by any means. And in my new book, Rise of the Robots, I point out about seven general trends that you can look at.

That includes the fact that wages have stagnated while productivity has continued to increase, so productivity and incomes have kind of decoupled. It includes the fact that the share of income going to labor as opposed to capital has gone into a pretty precipitous decline, especially since the year 2000. The labor force participation rate, meaning the number of people who are actually actively engaged in work, is falling. We’re seeing wages for college graduates actually going into decline, so it’s not the case anymore that people with higher educations are doing extremely well; a lot of people with college degrees are also being impacted.

So, there are a number of things you could look at there, and the thing is that if you take any one of these and you look at the research that economists have done, there are lots of explanations. Technology is nearly always one of the explanations, but there, of course, are other explanations. There’s globalization, there’s a basic change in our politics, which has become more conservative. In particular, there’s the decimation of unions in the private sector. Depending on who’s doing the analysis and sometimes what their agenda is, they will point to those things as being more important than technology.

But what I believe is that if you take all of that evidence collectively, if you look at all of those things together, it’s really hard to come up with one explanation other than technology that can explain all of those things.•


One revelation from the Reddit AMA conducted by Philip Zimbardo, still best known for the infamous 1971 Stanford Prison Experiment, a dress rehearsal more or less for Abu Ghraib, was that the psychologist was high school classmates with Stanley Milgram, author of the equally controversial “Obedience to Authority” study. That must have been some high school! Zimbardo was joined by writer Nikita Coulombe to discuss their new book Man (Dis)connected. A few exchanges follow about the notorious test at Stanford.

___________________________

Question:

If you had a chance to do the Stanford Prison Experiment again, what would you do differently?

Philip Zimbardo:

Yes I would, I would have only played the role of researcher and there would be someone above me, who would be the superintendent of the prison and when things got out of hand I would have been in a better position to terminate the study earlier and more appropriately.

___________________________

Question:

In context of the famous prison experiment, when you were first organizing it, what were some of the specific dangers you tried to avoid?

Philip Zimbardo:

We selected young men who were physically healthy and psychologically normal, we had prior arrangements with student health if that was necessary. Each student was given informed consent, so they knew that there would likely be some levels of stress, so they had some sense of what was to come. Physical violence by the guards, especially if there was a revolt, solitary confinement beyond the established one hour limit, but primarily trying to minimise acts of sexual degradation.

___________________________

Question:

Being particularly interested in social psychology, I’m a big fan of what you have accomplished through your research. I was wondering what really got you interested in social psychology, and your research is connected to that of Stanley Milgram, another favourite psychologist of mine – so what I’m asking is what initially got you into this field of psychology, and what did you think of Milgram’s research when you first came across it?

Philip Zimbardo:

Thank you. I was interested in psychology from a young age: I grew up in the Bronx in the 1930s and started wondering why some people would go down certain paths, like joining a gang, while others didn’t. I was also high school classmates with Stanley Milgram; we were both asking the same questions.

___________________________

Question:

If there was a film adaptation dramatizing the events of the Stanford Prison Experiment, who would you want to play you?

Philip Zimbardo:

Glad you asked the question, amazingly there is a new Hollywood movie that just premiered at the Sundance film festival to great reviews winning lots of prizes titled The Stanford Prison Experiment. It will have national showings in America starting in July and hopefully in Europe in the Fall. I was hoping that the actor who would play me would be either Johnny Depp or Andy Garcia but they were not available so instead a wonderful young actor, Billy Crudup is Dr Z. You may be aware of his great acting in Almost Famous and Dr Manhattan in Watchmen.•

___________________________

“Jesus Christ, I’m burning up inside–don’t you know?”:


My takeaway from the comment in the heading, offered by BMW executive Ian Robertson, is a little different than Leonid Bershidsky’s interpretation in Bloomberg View. Bershidsky believes the complex moral questions about responsibility for the actions of autonomous vehicles mean that humans may never truly be able to let go of the wheel. Wow, never is a long time. The inference I draw is that if moral philosophy is the chief concern of auto-industry executives, they believe the technology is a fait accompli. Sure, they could be wrong, but if the machinery is perfected, moral quandaries won’t keep such cars permanently parked. We’ll just be forced to answer difficult questions sooner rather than later.

From Bershidsky:

Self-driving cars are the subject of more hype than even true artificial intelligence, perhaps because they already exist and a number of big companies are committed to making them a marketable reality. So it’s worth listening when a top executive of one of these companies says self-driving vehicles are a long way off.

“The technology will be held back by the ultimate moral question on who’s responsible,” said Ian Robertson, head of sales for Bayerische Motoren Werke in Munich. 

Figuring this out isn’t as easy as simply changing insurance rules. Imagine you’re driving along a narrow mountain road at high speed, and a child jumps in front of your car. If you swerve to avoid hitting him, you’ll crash into a cliff or plunge into an abyss. In both cases, it means certain death for you.

Now imagine the car is driving itself.

“An algorithm will make a decision which might not be acceptable from a cultural or societal point of view,” Robertson explained.•


In a Washington Post article, Dominic Basulto reports on significant changes in synthetic biology, one of which is DARPA’s decision to move in earnest into the field. “Engineering biology is emerging as a powerful technology with the potential for significant impact,” the Defense agency asserts, in a statement marked by both potential and peril. One positive would be the work being applied to the manufacture of cities both on Earth and in space. On the other hand, the creation of synthetic life will pose ethical issues and risks, though it likewise could be a boon to medicine. In the long run, I feel it’s inevitable.

An excerpt:

After announcing the launch of its new Biological Technologies Office in April 2014, DARPA is finally moving off the sidelines and getting into the game. If DARPA brings the same innovation know-how to synthetic biology that it has brought to fields such as robotics, the Internet and autonomous vehicles, this could be big. At the Biology is Technology (BiT) event hosted by DARPA in San Francisco in mid-February, the agency sought to outline all the innovative ways that it hoped to use biology for defense technology, such as through its Living Foundries program.

At the BiT event, which included a keynote from Craig Venter and a fireside chat with George Church, DARPA Deputy Program Director Alicia Jackson laid out a compelling new vision for “Programming the Living World” that focused on biology as a radically new type of manufacturing platform. The goal, said Jackson, is to take everything researchers know from electronics, physics and engineering and migrate that over to the world of genomics and biology, making it possible to mass-produce engineered organisms. Jackson called synthetic biology a “new technology vector” that is more exciting and more scalable than anything that exists today.•


Sometimes people will say that free-range chicken is a more ethical choice for meat consumers, but if I were a chicken my main objection to the slaughterhouse would not be the accommodations. I’d happily stay in a spartan efficiency if you would only be so nice as to not murder my family and me at check-out time.

Sujata Gupta of BBC Future has written an interesting article about attempts to make industrial pig farming “more humane,” but the endgame remains the same: death for the animal and damage to the environment. I think meat will eventually be almost entirely grown in labs from cells, but I have no idea when such a thing will be perfected and accepted. Probably not anytime soon.

Gupta traces the history of pig domestication and breeding from thousands of years ago on rural Asian and European farms to the gestation crates in mechanized industrial American factories, noting that the animal may have been originally tamed for one simple reason: “They made excellent garbage disposal units,” usefully hoovering up slop that would have gone foul.

The opening:

Passing fields of soy, corn and towering bleach-white windmills fanning out across windy plains, I arrive early one morning somewhere between Chicago and Indianapolis at a place that promises “sow much fun.” 

The Pig Adventure, housing 3,000 sows and producing 80,000 piglets per year, sits alongside a 36,000-cow Dairy Adventure. This is “agro-Disneyland,” a place where rides have been replaced by adorable pink piglets and 72-cow robotic milking parlours.

I line up next to a retired couple and an extended family with three freckled kids from Chicago, and our tour starts inside a sleek lobby outfitted with touch-screens and billboards illuminating the intelligence of pigs – as smart as three-year-olds, better at learning tricks than dogs, outranked in brainpower only by chimps, dolphins and elephants. We pass through a mock shower where animated bubbles slide down the walls to clean us, and into a wide, carpeted corridor. Everything smells as pleasantly antiseptic as a dentist’s office. Arriving at a viewing area, we ogle some real pigs through thick, soundproof panes of glass.

Here, visitors can see for themselves how, even in today’s global, supermarket era, ‘Concentrated Animal Feeding Operations’ – or factory farms, as they’re better known – can continue to operate on gargantuan scales while still paying heed to animal welfare. Places like The Pig Adventure exist because this clash of practical and moral needs is becoming a massive consumer sticking point.•


In “The Dawn of Artificial Intelligence,” the Economist considers the insinuation of Weak AI into our lives, for better and worse, and the longer-range concerns about Strong AI. The (un-bylined) writer compares conscious machines, if they’re ever realized, to bureaucracies, militaries and markets, though I don’t know that the sometimes uncontrollable nature of those things is a perfect analogue. An excerpt:

The first step is to understand what computers can now do and what they are likely to be able to do in the future. Thanks to the rise in processing power and the growing abundance of digitally available data, AI is enjoying a boom in its capabilities (see article). Today’s “deep learning” systems, by mimicking the layers of neurons in a human brain and crunching vast amounts of data, can teach themselves to perform some tasks, from pattern recognition to translation, almost as well as humans can. As a result, things that once called for a mind—from interpreting pictures to playing the video game “Frogger”—are now within the scope of computer programs. DeepFace, an algorithm unveiled by Facebook in 2014, can recognise individual human faces in images 97% of the time.

Crucially, this capacity is narrow and specific. Today’s AI produces the semblance of intelligence through brute number-crunching force, without any great interest in approximating how minds equip humans with autonomy, interests and desires. Computers do not yet have anything approaching the wide, fluid ability to infer, judge and decide that is associated with intelligence in the conventional human sense.

Yet AI is already powerful enough to make a dramatic difference to human life. It can already enhance human endeavour by complementing what people can do. Think of chess, which computers now play better than any person. The best players in the world are not machines, however, but what Garry Kasparov, a grandmaster, calls “centaurs”: amalgamated teams of humans and algorithms. Such collectives will become the norm in all sorts of pursuits: supported by AI, doctors will have a vastly augmented ability to spot cancers in medical images; speech-recognition algorithms running on smartphones will bring the internet to many millions of illiterate people in developing countries; digital assistants will suggest promising hypotheses for academic research; image-classification algorithms will allow wearable computers to layer useful information onto people’s views of the real world.

Even in the short run, not all the consequences will be positive.•
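
To make the “layers of neurons” idea concrete, here is a minimal sketch (all sizes and random weights are toy stand-ins, not from the article or any real system) of how stacked layers transform an input such as an image into class scores:

```python
# A toy forward pass through stacked layers of artificial neurons, the
# structure behind the "deep learning" systems described above. All
# dimensions and weights here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One layer: weighted sum of inputs, then a simple nonlinearity (ReLU).
    return np.maximum(0.0, x @ weights + bias)

# Three layers mapping a 64-value "image" down to 10 class scores.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 10)), np.zeros(10)

image = rng.normal(size=64)                      # stand-in for pixel data
scores = layer(layer(image, w1, b1), w2, b2) @ w3 + b3
print(int(scores.argmax()))                      # index of the "recognized" class
```

Training, the “teach themselves” part, would adjust those weights against vast amounts of labeled data; this untrained sketch only shows the layered shape of the computation.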

Last year, I posted a 1950 Brooklyn Daily Eagle article in which Norbert Wiener, father of cybernetics, predicted society being crushed by the metal grip of robots, with automation upending our accepted order. The year prior, the New York Times had assigned him to write about “what the ultimate machine age is likely to be,” but the piece was never published. That article is referenced in Martin Ford’s provocative new book, Rise of the Robots, so I thought I would present an excerpt (which eventually made it into the NYT two years ago). From the “Mass-Produced Laborers” section:

We have so far spoken of the computing machine as an analogue to the human nervous system rather than to the whole of the human organism. Machines much more closely analogous to the human organism are well understood, and are now on the verge of being built. They will control entire industrial processes and will even make possible the factory substantially without employees.

In these the ultra-rapid digital computing machines will be supplemented by pieces of apparatus which take the readings of gauges, of thermometers, or photo-electric cells, and translate them into the digital input of computing machines. The new assemblages will also contain effectors, by which the numerical output of the central machine will be converted into the rotation of shafts, or the admission of chemicals into a tank, or the heating of a boiler, or some other process of the kind.

Furthermore, the actual performance of these effector organs as well as their desired performance will be read by suitable gauges and taken back into the machine as part of the information on which it works.

The general outline of the processes to be carried out will be determined by what computation engineers call taping, which will state and determine the sequence of the processes to be performed. The possibility of learning may be built in by allowing the taping to be re-established in a new way by the performance of the machine and the external impulses coming into it, rather than having it determined by a closed and rigid setup, to be imposed on the apparatus from the beginning.

The limitations of such a machine are simply those of an understanding of the objects to be attained, and of the potentialities of each stage of the processes by which they are to be attained, and of our power to make logically determinate combinations of those processes to achieve our ends. Roughly speaking, if we can do anything in a clear and intelligible way, we can do it by machine.•
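
As a rough illustration of the loop Wiener describes (gauge readings fed to the computing machine, an effector acting on the process, and the actual performance read back in), here is a minimal sketch; the boiler model and the gain are invented for the example:

```python
# A toy closed loop in the shape Wiener outlines: read a gauge, compute a
# corrective output, drive an effector, then measure the result and feed
# it back in. The "boiler" physics and the gain are invented values.

def read_gauge(temperature):
    return temperature                          # the thermometer reading

def effector(heat, temperature):
    # Heating raises temperature; the boiler also leaks heat to a 20-degree room.
    return temperature + 0.1 * heat - 0.05 * (temperature - 20.0)

setpoint, temperature = 100.0, 20.0
for _ in range(200):
    error = setpoint - read_gauge(temperature)  # desired minus actual performance
    heat = 10.0 * error                         # the machine's control decision
    temperature = effector(heat, temperature)   # act, then re-measure next pass

print(round(temperature, 1))  # ~96.2: near the setpoint, with the small
                              # steady-state offset typical of this simple scheme
```

Wiener’s “taping” corresponds to the fixed program here; the learning he describes would let the machine’s performance re-establish that program rather than leaving it rigid.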


In a lucid and lively NYRB review of Robert D. Putnam’s Our Kids: The American Dream in Crisis, a new volume which suggests that a myriad of grassroots problems have caused social mobility to founder, Nicholas Lemann astutely questions whether the author has cause and effect confused. Maybe a fraying social fabric hasn’t led to a decline in upward mobility; maybe, instead, the flattening of the middle class and of those who aspire to it has caused the sense of community to crumble.

Due to globalization, de-unionization, changes in taxation and automation, the piece of the American pie enjoyed by the non-wealthy has been in steep decline in the U.S. since 1974. Between the lines, Putnam, like Lemann and Thomas Piketty, seems to acknowledge that higher education, more than social capital, is a likelier remedy to the problem. I wonder, though, how much longer that will be true. Technological forces that have disrupted other industries will likely soon come for the university, which could lead to an increased polarization in credentials and access (even as more knowledge than ever will be available online). That shift coupled with what might be a decline in opportunity owing to robotization could neutralize even the great equalizer.

Another interesting bit from Lemann: Individual mobility has always been something of a myth, as each person has usually been raised or lowered by the greater sweep of history and politics. Horatio Alger’s Dick, born at an inopportune moment, would likely remain ragged.

An excerpt:

By the logic of the book, access to social capital ought to be strongly associated with going to college and doing well there—otherwise, why stress it so strongly? The syllogism would be: social capital leads to educational attainment, which leads to mobility. But for his classmates, Putnam reports, academic achievement was the factor most predictive of college attendance, and the link between such achievement and parental encouragement (of the kind he has copiously praised in the main body of the book) was only “modestly important,” and “much weaker” than the link between class rank and college attendance. Not only that:

No other measure of parental affluence or family structure or neighborhood social capital (or indeed anything else we had measured)—none of the factors that this book has shown are so important in producing today’s opportunity gap—had any appreciable effect on college attendance or other educational attainment.

In the methods appendix, Putnam refers readers to his website for more detail on his findings about his classmates. There, he writes:

No measure of parental resources adds any predictive power whatsoever—not parental occupational status, not parental unemployment, not family economic insecurity during high school, not homeownership, not neighborhood characteristics, and not family structure…. Parental education, parental encouragement, and class rank were all modestly predictive of extracurricular participation, but holding constant those variables, extracurricular participation itself was unrelated to college-going.

So is it really the case that Putnam has shown that strong social capital once produced individual opportunity—let alone that the deterioration of social capital has produced what he calls the opportunity gap? The passages I just quoted seem to indicate that the strong association between social capital and opportunity that is Putnam’s core assertion has not been proven. Putnam doesn’t define “social capital” precisely enough to rigorously test its effects, even on as small and unrepresentative a sample as the one in his survey, and he doesn’t attempt to test its effects precisely in the present. It could even be that, rather than social capital generating prosperity, prosperity might generate social capital, which would mean Putnam has been showing us the effects of inequality, not the causes.•


The second entry in the New York Times’ “Robotica” video series is a look at military-automation research by the Navy in San Diego. Right now the driverless vehicles and pet-like machines are being developed as tools to help “take our war fighters out of harm’s way,” but they will certainly be delegated more integral roles over time.

“I don’t see a robot that’s a killer robot,” says computer engineer Mark Tjersland, not seeming to realize he’s working on a project ripe for mission creep. Navy Bomb Technician Jeremy Owen acknowledges the military’s ultimate goal: “You’ll never completely eliminate the soldier from the fight, as much as they want to try to… maybe in a hundred years.”

Of course, even semi-autonomy could make robots deadlier, as drones have shown us, perhaps making war unthinkable–or even more inviting to those nations flourishing in advanced robotics.

When you live in North Korea, even China looks like freedom.

The Asahi Shimbun has an interview with a 31-year-old woman who defected from the late Kim Jong-il’s benighted nation in 2010, which provides a look into the clandestine country normally only visited by outsiders who happen to be Dennis Rodman or professional wrestlers. (Dan Greene of Sports Illustrated recently published an oral history of a 1995 North Korean wrestling tour.) There’s no horrific revelation, just a person awakened to the delusion she’d always lived within, a condition that can happen to a citizen of any country but is pretty much mandatory in North Korea. An excerpt:

Question:

What was ideology education like?

Answer:

Among newspapers, there was the Rodong Sinmun for party members, another one read by the officers of labor federations and another one for young people. All newspapers had a regular section and a supplement.

The regular section ran stories about Supreme Leader Kim Jong Il visiting a local area to give instructions. The supplement contained information about daily life, such as the extent to which a spinning factory approached its production quota. The supplement also carried comics that said, “This is how crafty and bad the United States, South Korea and Japan are.” But the comics always ended up with North Korea winning.

Question:

Was there any change after Kim Jong Un took over as national leader?

Answer:

I have heard that two to three layers of barbed wire were laid along the border with China. There were also moves to force people to appear at their workplace as well as an examination of family registers by the party. Moreover, instead of money, the party began collecting beans, sesame seeds, peanuts and sunflower seeds. An organized attempt was also made to stamp out reactionary elements in society with the creation of an “anti-socialism group.”

Question:

What kind of group was that?

Answer:

The group consisted of members of the State Security Ministry (in charge of the secret police), the People’s Security Ministry (in charge of the regular police) and prosecutors.

Group members would walk around with neighborhood group leaders, and if a member said, “I want to enter that home,” they were able to conduct a search without a warrant. If the search turned up U.S. dollars, Chinese yuan or CDs, it was confiscated unconditionally.

Such crackdowns occurred even while we were doing business. But no problems arose as long as we gave them bribes, such as a few cartons of cigarettes.


Los Angeles just got good, and now California, thirsty so thirsty, will be reduced to a desiccated mound of powder? Oh, the timing.

Of course, the state isn’t disappearing, but its way of life may be–the swimming pools and mountains of almonds. Governor Jerry Brown, trying to turn a negative into a positive, is attempting to lead California into a new era of conservation. But will even that be enough for our largest state and chief food supplier to retain its comfort and beauty? In a New York Times editorial, Timothy Egan, who studied the effects of weather on the land in his great 2005 Dust Bowl book, The Worst Hard Time, wonders if the current drought is merely prelude and if H2O may ultimately create a new class system of haves and have-nots. An excerpt:

There is nothing normal about the fourth year of the great drought: According to climate scientists, it may be the worst arid spell in 1,200 years. For all the fields that will go fallow, all the forests that will catch fire, all the wells that will come up dry, the lasting impact of this drought for the ages will be remembered, in the most exported term of California start-ups, as a disrupter.

“We are embarked upon an experiment that no one has ever tried,” said Gov. Jerry Brown in early April, in ordering the first mandatory statewide water rationing for cities.

Surprising, perhaps even disappointing to those with schadenfreude for the nearly 39 million people living in year-round sunshine, California will survive. It’s not going to blow away. The economy, now on a robust rebound, is not going to collapse. There won’t be a Tom Joad load of S.U.V.s headed north. Rains, and snow to the high Sierra, will eventually return.

But California, from this drought onward, will be a state transformed. The Dust Bowl of the 1930s was human-caused, after the grasslands of the Great Plains were ripped up, and the land thrown to the wind. It never fully recovered. The California drought of today is mostly nature’s hand, diminishing an Eden created by man. The Golden State may recover, but it won’t be the same place.

Looking to the future, there is also the grim prospect that this dry spell is only the start of a “megadrought,” made worse by climate change. California has only about one year of water supply left in its reservoirs. What if the endless days without rain become endless years?


As driverless capacity is integrated into cars, the technology’s growing efficacy will evoke a variety of behaviors in the drivers who are gradually surrendering the wheel. One person studying the human aspect of the shift is Jeff Blecher of Agero, which has partnered with MIT to analyze the transition. From Jim Henry’s Forbes interview with Blecher:

Forbes:

What are the driver behavior issues?

Jeff Blecher:

Those sorts of systems, we’re going to see some unintended consequences. How are drivers going to behave? … If they know they can rely on these systems, will they be more apt to become drowsy? More apt to do other things while they’re driving? Will they start to trust the technology almost “too” much?

… You’re going to have two separate risk profiles now. One, there’s the risk profile of the driver driving the car. Two, there’s the car driving the car, in that semi-autonomous mode.

How often is the driver driving, and how often is the vehicle driving? Historically, when an insurance company offered a discount, it was based on what the car is equipped with – antilock brakes, how many air bags, a collision mitigating braking system, adaptive cruise control with lane-centering. Now the question also becomes, how often are those technologies used?

Forbes:

Is it too soon to say if these technologies result in a discount, like for having antilock brakes, or a car alarm?

Jeff Blecher:

It’s really too soon. We’re just getting to the point where we’re getting some small samples of data.

If you think about the profitability of the insurance business, as cars get safer, the rate of claims goes down for a while before the premiums decline, until the companies get competitive, until they get comfortable with the level of claims and can reduce premiums.

We will start to see premiums come down for some of these technologies in the near future.•
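
To picture the “two risk profiles” Blecher describes, here is a toy calculation (every rate, weight and figure below is invented for illustration, not from Agero or any insurer): weight per-mile claims cost by how often the human versus the system is driving.

```python
# Hypothetical sketch of a blended, usage-based premium: one risk profile
# for the human driving, another for the car driving itself. All rates
# and numbers are invented for illustration.

def blended_premium(miles_human, miles_auto,
                    rate_human=0.05,    # assumed claims cost per human-driven mile
                    rate_auto=0.02,     # assumed claims cost per system-driven mile
                    overhead=200.0):    # assumed fixed costs
    expected_claims = miles_human * rate_human + miles_auto * rate_auto
    return overhead + expected_claims

# A driver who hands half of 12,000 annual miles to the car's systems:
print(blended_premium(6_000, 6_000))    # -> 620.0
```

The open question in the interview is the data feeding that split: insurers would first need to know how often the semi-autonomous modes are actually engaged.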


We shouldn’t use the term “Sharing Economy” because there’s no actual sharing involved. And “Peer Economy” doesn’t really say it since the workers aren’t treated like peers, far from it. But whatever the term, the Ubers and Lyfts and Airbnbs have made their way in the world by making up their own rules as needed and leaving until later any worry about preexisting legislation. It’s a willful attack upon convention, one that previously wrecked Napster but worked out wonderfully for YouTube. Google was also aided early in its development of driverless cars by just going forward with highway testing, only thinking about laws when they were presented in response.

It’s not so easy to reconcile one’s feelings about such corporate behavior. There are many good things that have come from such advances (e.g., smartphone hailing and payment for car service, the amazing archival video trove now available to us), but the empires left in ruins also provided wealth that’s now forever lost. For instance: Despite Napster’s own demise, the record-industry model was toppled, which might seem like a good thing, but didn’t that model bring us so much joy across decades? It’s a complicated situation. From the Economist:

The tension between innovators and regulators has been particularly intense of late. Uber and Lyft have had complaints that their car-hailing services break all sorts of taxi regulations; people renting out rooms on Airbnb have been accused of running unlicensed hotels; Tesla, a maker of electric cars, has suffered legal setbacks in its attempts to sell directly to motorists rather than through independent dealers; and in its early days Prosper Marketplace, a peer-to-peer lending platform, suffered a “cease and desist” order from the Securities and Exchange Commission. It sometimes seems as if the best way to identify a hot new company is to look at the legal trouble it is in.

There are two big reasons for this growing friction. The first is that many innovative companies are using digital technology to attack heavily regulated bits of the service economy that are ripe for a shake-up. Often they do so by creating markets for surplus labour or resources, using websites and smartphone apps: Uber and Lyft let people turn their cars into taxis; Airbnb lets them rent out their spare rooms; Prosper lets them lend out their spare cash. Conventional taxi firms, hoteliers and banks argue, not unreasonably, that if they have to obey all sorts of regulations, so should their upstart competitors.

The second is the power of network effects: there are huge incentives to get to the market early and grow as quickly as possible, even if it means risking legal challenges. Benjamin Edelman of Harvard Business School argues that YouTube owes its success in part to this strategy.•

After finishing this (often) funny book about an unfunny subject, I started on Martin Ford’s Rise of the Robots (excellent so far), which wonders if this new machine age will differ from the Industrial Age by not creating newer, better jobs to replace those disappeared by automation. So perhaps I’m thinking even more than usual about technological unemployment, especially in regards to the service sector, which, as Ford reminds, is where most Americans earn a living now and which is most prone to robotization. It isn’t so much a fear stoked by where Deep Learning is right now but where it may be a decade on. If it advances rapidly, how do we proceed?

In a Los Angeles Times piece, Jon Healey is concerned about the same after attending the Milken Global Conference panel on robotics. The opening:

Sometimes I wonder if I’m in the very last generation of newspaper reporters.

After hearing Jeremy Howard talk at a Milken Global Conference panel on robotics this week, however, I’m wondering if I’m in the very last generation of workers.

Howard is chief executive of Enlitic, which uses computers to help doctors make diagnoses. His technology relies on something known as machine learning, or the process by which a computer improves its own capabilities. He’s also a top data scientist, which gives him a much better view of what’s coming than most people have.

This year, Howard said, machines are better than humans at recognizing objects in an image. Now here’s the scary part. Compared to where they were in November, Howard said, they are 15 times faster in recognizing objects while being more accurate and using fewer computational resources. In five years, they will be 10,000 times faster.

“We are seeing order-of-magnitude improvements every few months,” Howard said. Similar leaps are starting to appear in computers’ ability to understand written text.

In five years’ time, a single computer could be hundreds or thousands of times better at that task than humans, Howard said. Combine it with other computers on a network, and the advantage becomes even more pronounced.

“Probably in your lifetime, certainly in your kids’ lifetime … computers will be better than humans at all these things,” he said. And within five years after that, they will be 1,000 times better.

Gulp.•


In a wonderful Backchannel piece, historian Leslie Berlin answers two key questions: “Why did Silicon Valley happen in the first place, and why has it remained at the epicenter of the global tech economy for so long?”

Sharing granular details (before the name “Silicon Valley” was popularized in 1971, the area was known as “Valley of the Heart’s Delight”) and big-picture items (William Shockley’s genius drew talent to the community, and his bizarre paranoia dispersed it), Berlin provides a full-bodied sense of the place’s past, something she says continues to be of interest to the latest wave of technologists.

The short answer to the two questions posed is that there was a confluence of technical, cultural and financial forces in this place in a relatively short span of time, and these same factors continue to sustain the area’s growth. (Oh, and immigration helps.) An excerpt from the “Money” section:

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was the Department of Defense buying 100% of the earliest microchips, Hewlett-Packard and Lockheed selling products to military customers, or federal research money pouring into Stanford, Silicon Valley was the beneficiary of Cold War fears that translated to the Department of Defense being willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.

This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.•


Friend, is your home drone-proof? Are you keeping surveillance cameras and potential flying explosives at bay? Do you realize that soon enough they’ll be the size of a flea and you won’t even be able to see these invaders? Act now!

From an Economist report about the fledgling anti-drone industry:

Detecting a small drone is not easy. Such drones are slow-moving and often low-flying, which makes it awkward for radar to pick them up, especially in the clutter of a busy urban environment. “Defeating” a detected drone is similarly fraught with difficulty. You might be able to jam its control signals, to direct another drone to catch or ram it, or to trace its control signals to find its operator and then “defeat” him instead. But all of this would need to take place, as far as possible, without disrupting local Wi-Fi systems (drones are often controlled by Wi-Fi), and it would certainly have to avoid any risk of injuring innocent bystanders.

Bringing down quads

One company which thinks itself up to fulfilling the detection part of the process is DroneShield, in Washington, DC. This firm was founded by John Franklin and Brian Hearing after Mr Franklin crashed a drone he was flying into his neighbours’ garden by accident, without them noticing. He realised then how easily drones could be used to invade people’s privacy and how much demand there might be for a system that could warn of their approach.

DroneShield’s system is centred on a sophisticated listening device that is able to detect, identify and locate an incoming drone based on the sound it makes. The system runs every sound it hears through a sonic “library,” which contains all the noises that are made by different types of drone. If it finds a match, it passes the detected drone’s identity and bearing to a human operator, who can then take whatever action is appropriate.

Other ways of detecting drones are also under investigation.•
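
The matching step in DroneShield’s system can be pictured with a small sketch (the signatures, frequency bands and threshold below are all hypothetical): compare an incoming sound’s spectrum against a library of known drone signatures and report the nearest match, if any.

```python
# Toy version of acoustic-library matching: find the stored drone
# signature nearest to an incoming sound's spectrum. Signatures, bands
# and threshold are all hypothetical; a real system would use much
# richer acoustic features.
import math

SIGNATURE_LIBRARY = {
    "quadcopter-a": [0.8, 0.5, 0.2, 0.1],   # energy in four frequency bands
    "hexacopter-b": [0.6, 0.7, 0.4, 0.2],
}

def match_drone(spectrum, threshold=0.3):
    best_name, best_dist = None, float("inf")
    for name, signature in SIGNATURE_LIBRARY.items():
        dist = math.dist(spectrum, signature)    # distance between spectra
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only report a detection when the match is close enough.
    return best_name if best_dist <= threshold else None

print(match_drone([0.75, 0.55, 0.25, 0.1]))      # -> quadcopter-a
```

A real deployment would also estimate the drone’s bearing and pass the identification to a human operator, as the excerpt describes.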


Quantifying our behavior is likely only half the task of the Internet of Things, with nudging us the other half of the equation. I don’t necessarily mean pointing us toward healthier choices we wouldn’t otherwise make (which is dubious if salubrious) but placing us even more firmly inside a consumerist machine.

Somewhat relatedly: Quentin Hardy of the New York Times looks at how the data-rich tomorrow may mostly benefit the largest technology companies. An excerpt:

This sensor explosion is only starting: Huawei, a Chinese maker of computing and communications equipment with $47 billion in revenue, estimates that by 2025 over 100 billion things, including smartphones, vehicles, appliances and industrial equipment, will be connected to cloud computing systems.

The Internet will be almost fused with the physical world. The way Google now looks at online clicks to figure out what ad to next put in front of you will become the way companies gain once-hidden insights into the patterns of nature and society.

G.E., Google and others expect that knowing and manipulating these patterns is the heart of a new era of global efficiency, centered on machines that learn and predict what is likely to happen next.

“The core thing Google is doing is machine learning,” Eric Schmidt, Google’s executive chairman, said at an industry event on Wednesday. Sensor-rich self-driving cars, connected thermostats or wearable computers, he said, are part of Google’s plan “to do things that are likely to be big in five to 10 years. It just seems like automation and artificial intelligence makes people more productive, and smarter.”


It’s certainly disingenuous that the UK publication the Register plastered the word “EXCLUSIVE” on Brid-Aine Parnell’s Nick Bostrom interview, since the philosopher, who’s become widely known for writing about existential risks in his book Superintelligence, has granted many interviews in the past. The piece is useful, however, for making it clear that Bostrom is not a confirmed catastrophist, but rather someone posing questions about challenges we may (and probably will) face should our species continue in the longer term. An excerpt:

Even if we come up with a way to control the AI and get it to do “what we mean” and be friendly towards humanity, who then decides what it should do and who is to reap the benefits of the likely wild riches and post-scarcity resources of a superintelligence that can get us out into the stars and using the whole of the (uninhabited) cosmos?

“We’re not coming from a starting point of thinking the modern human condition is terrible, technology is undermining our human dignity,” Bostrom says. “It’s rather starting from a real fascination with all the cool stuff that technology can do and hoping we can get even more from it, but recognising that there are some particular technologies that also could bring risks that we really need to handle very carefully.

“I feel a little bit like humanity is a bit like an infant or a teenager: some fairly immature person who has got their hands on increasingly powerful instruments. And it’s not clear that our wisdom has kept pace with our increasing technological prowess. But the solution to that is to try to turbo-charge the growth of our wisdom and our ability to solve global coordination problems. Technology will not wait for us, so we need to grow up a little bit faster.”

Bostrom believes that humanity will have to collaborate on the creation of an AI and ensure its goal is the greater good of everyone, not just a chosen few, after we have worked hard on solving the control problem. Only then does the advent of artificial intelligence and subsequent superintelligence stand the greatest chance of coming up with utopia instead of paperclipped dystopia.

But it’s not exactly an easy task.•


Last month, I blogged about a 1928 article which told of the demise of the great Norwegian explorer Roald Amundsen, so I would be remiss if I didn’t post a passage from “Moving to Mars,” Tom Kizzia’s wonderful New Yorker article investigating NASA’s attempts to understand the isolating effects of a potential Mars mission, with the 1898 Antarctic voyage of the Belgica, on which Amundsen was a crewmember, as a seafaring parallel. That ship ran into major difficulties, among them the remoteness of the trip playing havoc with the minds of the explorers. Kizzia visits a contemporary experiment in astronaut sequestration (and other pragmatic problems) in Hawaii (“Mauna Loa is our Martian mountain,” as it’s put), a federal study similar to what speleologist Michel Siffre attempted by his lonesome in the 1960s and ’70s. An excerpt:

A century after the Belgica’s return, a NASA research consultant named Jack Stuster began examining the records of the trip to glean lessons for another kind of expedition: a three-year journey to Mars and back. “Future space expeditions will resemble sea voyages much more than test flights, which have served as the models for all previous space missions,” Stuster wrote in a book, Bold Endeavors, which was published in 1996 and quickly became a classic in the space program. A California anthropologist, Stuster had helped design U.S. space stations by studying crew productivity in cases of prolonged isolation and confinement: Antarctic research stations, submarines, the Skylab station. The study of stress in space had never been a big priority at NASA—or of much interest to the stoic astronauts, who worried that psychologists would uncover some hairline crack that might exclude them from future missions. (Russia, by contrast, became the early leader in the field, after being forced to abort several missions because of crew problems.) But in the nineteen-nineties, with planning for the International Space Station nearly complete, NASA scientists turned their attention to journeys deeper into space, and they found questions that had no answers. “That kind of challenging mission was way out of our comfortable low-earth-orbit neighborhood,” Lauren Leveton, the lead scientist of NASA’s Behavioral Health and Performance program, said. Astronauts would be a hundred million miles from home, no longer in close contact with mission control. Staring into the night for eight monotonous months, how would they keep their focus? How would they avoid rancor or debilitating melancholy?

Stuster began studying voyages of discovery—starting with the Niña, the Pinta, and the Santa Maria, whose deployment, he observed, anticipated the NASA-favored principle of “triple redundancy.” Crews united by a special “spirit of the expedition” excelled. He praised the Norwegian Fridtjof Nansen’s three-year journey into the Arctic, launched in 1893, for its planning, its crew selection, and its morale. One icebound Christmas, after a feast of reindeer meat and cranberry jam, Nansen wrote in his journal that people back home were probably worried. “I am afraid their compassion would cool if they could look upon us, hear the merriment that goes on, and see all our comforts and good cheer.” Stuster found that careful attention to habitat design and crew compatibility could avoid psychological and interpersonal problems. He called for windows in spacecraft, noting studies of submarine crewmen who developed temporarily crossed eyes on long missions. (The problem was uncovered when they had an unusual number of automobile accidents on their first days back in port.) He wrote about remote-duty Antarctic posts suffering a kind of insomnia called “polar big eye,” which could be addressed by artificially imposing a diurnal cycle of light and darkness.

Bold Endeavors was a hit with astronauts, who carried photocopied pages into space, bearing Stuster’s recommendations on workload, cognitive impairment, and special celebration days. (He nominated the birthday of Jules Verne, whose fictional explorers headed to the moon with fifty gallons of brandy and a “vigorous Newfoundland.”) But historical analogies could take NASA only so far, Stuster argued. Before humans went to Mars, a final test should run astronauts through “high-fidelity mission simulations.” To the extent possible, these tests should be carried out in some remote environment, whose extreme isolation would bring to bear the stress and confinement of a journey to outer space.•


In one way or another, bigots are almost always the thing they hate.

It just isn’t usually as literal as in the case of Hungarian politician Csanad Szegedi, who was a far-right anti-Semite until discovering he was Jewish. That stunning revelation, which occurred three years ago, knocked him from his perch, forcing him to learn to walk again as an adult. Nick Thorpe of the BBC follows up with a report. An excerpt:

He comes across a bit like the American singer Johnny Cash. “Hello, I’m Csanad Szegedi.” And the schoolchildren of the Piarist Secondary School in Szeged hang on every word.

“I’m speaking to you here today,” says the tall chubby-faced man, with small, intelligent eyes, “because if someone had told me when I was 16 or 17 what I’m going to tell you now, I might not have gone so far astray.”

As deputy leader of the radical nationalist Jobbik party in Hungary, Szegedi co-founded the Hungarian Guard – a paramilitary formation which marched in uniform through Roma neighbourhoods.

And he blamed the Jews, as well as the Roma, for the ills of Hungarian society – until he found out that he himself was one. After several months of hesitation, during which the party leader even considered keeping him as the party’s “tame Jew” as a riposte to accusations of anti-Semitism, he walked out.

Not a man to do things in half-measures, he has now become an Orthodox Jew, has visited Israel, and the concentration camp at Auschwitz which his own grandmother survived.•


Every now and then something truly expeditious occurs in technology or science (e.g., the leap forward in driverless-car development in the aftermath of the 2004 “Debacle in the Desert”). But progress is generally maddeningly slow, at least when viewed from the perspective of our own lifespans. So when Transhumanists promise a-mortality in two decades or people smart enough to know better argue that Moore’s Law will magically make everything just great by 2030, feel free to look askance.

Demonstrating this point: Sarah Zhang of Gizmodo scooped up an issue of Scientific American from a decade ago and measured the accuracy of the predictions on everything from stem cells to solar cells. It was not pretty. An excerpt:

I recently dug up the December 2005 issue of Scientific American and went entry by entry through the Scientific American 50, a list of the most important trends in science that year. I chose 2005 because 10 years seemed recent enough for continuity between scientific questions then and now but also long enough ago for actual progress. More importantly, I chose Scientific American because the magazine publishes sober assessments of science, often by scientists themselves. (Read: It can be a little boring, but it’s generally accurate.) But I also trusted it not to pick obviously frivolous and clickbaity things.

Number one on the list was a stem cell breakthrough that turned out to be one of the biggest cases of scientific fraud ever. (To be fair, it fooled everyone.) But the list held other unfulfilled promises, too: companies now defunct, an FBI raid, and many, many technologies simply still on the verge of finally making it a decade later. By my count, only two of its 16 medical discoveries of 2005 have resulted in a drug or hospital procedures so far. The rosy future is not yet here.

Science is not a linear march forward, as headlines seem to imply. Science is a long slow slog, and often a twisty one at that.•


Your new robot coworkers are darling–and so efficient! They’ll relieve you of so many responsibilities. And, eventually, maybe all of them. For now, factory robots will reduce jobs only somewhat, as we work alongside them. But eventually the band will be broken up, the machines going solo. Even the workers manufacturing the robots will soon enough be robots.

In a Technology Review article, Tom Simonite takes a smart look at this transitional phase, as robots begin to gradually commandeer the warehouse. He focuses on Fetch, a company that makes robots versatile enough to be introduced into preexisting factories. An excerpt:

Freight is designed to help shelf pickers, who walk around warehouses pulling items off shelves to do things like fulfilling online shopping orders. As workers walk around gathering items from shelves, they can toss items into the crate carried by the robot. When an order is complete, a tap on a smartphone commands the robot to scoot its load off to its next destination.

Wise says that robot colleagues like these could make work easier for shelf pickers, who walk as much as 15 miles a day in some large warehouses. Turnover in such jobs is high, and warehouse operators struggle to fill positions, she says. “We can reduce that burden on people and have them focus on the things that humans are good at, like taking things off shelves,” says Wise.

However, Wise’s company is also working on a second robot designed to be good at that, too. It has a long, jointed arm with a gripper, is mounted on top of a wheeled base, and has a moving “head” with a depth camera similar to that found in the Kinect games controller. This robot, named Fetch, is intended to rove around a particular area of shelving, taking items down and dropping them into a crate carried by a Freight robot.

