Science/Tech

You are currently browsing the archive for the Science/Tech category.

Driverless cars and trucks are the future, but when, exactly, is that? 

It would be really helpful to know, since millions of jobs would be lost or negatively impacted in the trucking sector in the U.S. alone. There are also, of course, taxis, delivery workers, etc.

In the BBC piece What’s Putting the Brakes on Driverless Cars?, Matthew Wall examines the factors, legal and technical, delaying what’s assumed more and more to be inevitable. An excerpt:

The technology isn’t good enough yet

Many semi-autonomous technologies are already available in today’s cars, from emergency braking to cruise control, self parking to lane keeping. This year, Ford is also planning to introduce automatic speed limit recognition tech and Daimler is hoping to test self-driving lorries on German motorways.

But this is a far cry from full autonomy.

Andy Whydell, director at TRW, one of the largest global engineering companies specialising in driver safety equipment, says radars have a range of about 200-300m (218-328 yards) but struggle with distances greater than this.

As a result, “sensors may not have sufficient range to react fast enough at high speed when something happens ahead,” he says. “Work is going on to develop sensors that can see ahead 400m.”

Lasers and cameras are also less effective in rainy, foggy or snowy conditions, he says, which potentially makes them unreliable in much of the northern hemisphere.

Even Google has admitted that its prototype driverless car struggles to spot potholes and has yet to be tested in snow.

And how would a driverless car cope trying to exit a T-junction at rush hour if human-driven cars don’t let it out?•
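Whydell’s numbers are easy to sanity-check with a little kinematics. Here’s a minimal back-of-the-envelope sketch in Python, assuming a 1.2-second system reaction time and gentle 0.35g braking; those figures are my own illustrative assumptions, not TRW’s. At higher speeds or gentler decelerations, 300m evaporates quickly, which is why the push for 400m sensors makes sense.

```python
# A rough sanity check on the sensor-range problem: how far does a car
# travel before it can stop? Reaction time and braking figures below are
# illustrative assumptions, not TRW's numbers.

def stopping_distance_m(speed_kmh, reaction_s=1.2, decel_g=0.35):
    """Distance covered while the system reacts, plus braking distance."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    reaction = v * reaction_s                # travelled before braking starts
    braking = v ** 2 / (2 * decel_g * 9.81)  # v^2 / (2a)
    return reaction + braking

for speed in (100, 130, 160):
    d = stopping_distance_m(speed)
    verdict = "inside" if d <= 300 else "beyond"
    print(f"{speed} km/h -> ~{d:.0f} m to stop ({verdict} a 300 m radar range)")
```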

 

Tags:

Reality stars happened for numerous reasons, most notably because TV and print, destabilized by the Internet, needed insta-celebs to provide cheap content–actual stars were too expensive in the new economic reality–so dysfunction was commodified, and the modern version of the circus freak show was popularized. They’re pretty much walking products, all the Housewives and Bachelors, desperately trying to sell themselves in a market where the middle has disappeared and the bottom is the best most can hope for.

In 2007, Lynn Hirschberg of the New York Times Magazine penned “Being Rachel Zoe,” a profile of the image maker at an inflection point in the culture, when faux celebs were becoming the real thing, when the sideshow moved to the center ring. An excerpt:

As always, Zoe (pronounced ZOH) was dressed for the designer she was viewing. She was wearing a bright pink nubby wool Chanel jacket, black pants and her usual five-inch platform open-toed shoes. All the Zoe trademarks were in place: she was very tan; her long blond hair was carefully styled to look carefree; there were ropes of gold chains around her neck and stacks of diamond bangles on her wrists; and enormous (Chanel) sunglasses nearly obscured her face. Even wearing high heels, she is short and stick-thin, but Zoe, who is 36, does not seem fragile. The masses of jewelry, the outsize sunglasses, the whole noisy, ’70s-inspired look add up to a hectic, ostentatious, theatrical sort of glamour.

It’s the look she has duplicated on her clients, making the so-called Zoe-bots paparazzi favorites, as well as walking advertisements for a host of top designers. A cross-pollinator of the worlds of Hollywood celebrities, high fashion and tabloid magazines, Zoe has become a powerful image broker, a conduit to the ever-more lucrative intersection of commerce, style and fame. Early in her career, in 1996, she worked as a stylist at YM magazine, dressing such teenage pop stars as Britney Spears and Jessica Simpson, girls who were young enough to be molded and popular enough to be influential. Around the same time, magazines like Us Weekly began inventing their own cadre of celebrities, like Paris Hilton and Nicole Richie. They had no discernible accomplishments or talent, but they did seem to go out a lot, and they thrived under the flash of the paparazzi. Magazines like Us constructed provocative narratives around them — their romantic woes, their drug problems — and Zoe, who began working with Richie in 2003 when she was viewed only as Hilton’s plump sidekick, saw an opportunity. “Nicole is now what people refer to as the big thing that happened,” Zoe told me in Paris. “Everything went from nowhere to everywhere. Nicole was about creating a look. Because of her fashion sense, which was really my fashion sense, she became famous. It was a huge moment: Nicole became a style icon without being a star.”

And then Nicole became a star, too. Because of circumstances that remain murky, Nicole and Rachel no longer speak. But the relationship made their careers. Zoe began working with Lindsay Lohan, Kate Beckinsale and other tabloid-ready stars eager for a new fashion identity. Now she has 20 clients, each of whom reportedly pays her more than $6,000 a day to dress them for events, big and small. Some pay only for premieres and award shows; some also retain Zoe to provide clothes for their daily lives. The financial scope of her business also includes incentives in the form of money and/or clothes, accessories or jewels, offered by designers eager to dress a particular Zoe client for a particular event. “Around three years ago, everything began to change,” Zoe said as she ran through puddles toward the entrance of the Chanel show. “The nature of what, or who, is a celebrity has expanded. We aren’t saving lives here, but we are creating images, and images create opportunities in a lot of areas.”•

Tags: ,

Jeez, Jim Holt is a dream to read. If you’ve never picked up his 2012 philosophical and moving inquiry, Why Does the World Exist?: An Existential Detective Story, it’s well worth your time. For a while it was cheekily listed No. 1 at the Strand in the “Best Books to Read While Drunk” category, but I don’t drink, and I adored it.

In a 2003 Slate article, “My Son, the Robot,” Holt wrote of Bill McKibben’s Enough: Staying Human in an Engineered Age, a cautionary nonfiction tale that warned our siblings soon would be silicon sisters, thanks to the progress of genetic engineering, robotics, and nanotechnology. It was only a matter of time.

Holt was unmoved by the clarion call, believing human existence unlikely to be at an inflection point and thinking the author too dour about tomorrow. While Holt’s certainly right that we’re not going to defeat death anytime soon despite what excitable Transhumanists promise, both McKibben and the techno-optimists probably have time on their side.

An excerpt:

Take McKibben’s chief bogy, genetic engineering—specifically, germline engineering, in which an embryo’s DNA would be manipulated in the hopes of producing a “designer baby” with, say, a higher IQ, a knack for music, and no predisposition to obesity. The best reason to ban it (as the European Community has already done) is the physical risk it poses to individuals—namely, to the children whose genes are altered, with unforeseen and possibly horrendous consequences. The next best reason is the risk it poses to society—exacerbating inequality by creating a “GenRich” class of individuals who are smarter, healthier, and handsomer than the underclass of “Naturals.” McKibben cites these reasons, as did Fukuyama (and many others) before him. However, what really animates both authors is a more philosophical point: that genetic engineering would alter “human nature” (Fukuyama) or take away the “meaning” of life (McKibben). As far as I can tell, the argument from human nature and the argument from meaning are mere terminological variants of each other. And both are woolly, especially when contrasted with the libertarian argument that people should be free to do what they wish as long as other parties aren’t harmed.

Finally, McKibben’s reasoning fitfully betrays a vulgar variety of genetic determinism. He approvingly quotes phrases like “genetically engineered thoughts.” Altering an embryo’s DNA to make your child, say, less prone to violence would turn him into an “automaton.” Giving him “genes expressing proteins to boost his memory, to shape his stature” would leave him with “no more choice about how to live his life than a Hindu born untouchable.” Why isn’t the same true with the randomly assigned genes we now have?

Now to the deeper fallacy. McKibben takes it for granted that we are at an inflection point of history, suspended between the prehistoric and the Promethean. He writes, “we just happen to be alive at the brief and interesting moment when [technological] growth starts to really matter—when it spikes.” Everything is about to change.

The extropian visionaries arrayed against him—people like Ray Kurzweil, Hans Moravec, Marvin Minsky, and Lee Silver—agree. In fact, they think we are on the verge of conquering death (which McKibben thinks would be a terrible thing). And they mean RIGHT NOW. When you die, you should have your brain frozen; then, in a couple of decades, it will get thawed out and nanobots will repair the damage; then you can start augmenting it with silicon chips; finally, your entire mental software, and your consciousness along with it (you hope), will get uploaded into a computer; and—with multiple copies as insurance—you will live forever, or at least until the universe falls apart.•

 

Tags: ,

Robotics will likely play a big role in the future of spectator sports, though I don’t envision the discipline taking over ESPN in prime time anytime soon. Auto racing is, of course, already a human-machine hybrid, but will familiar athletics be encroached upon by AI, or will robot-specific sports rise? Are drones to be the new chariots?

In a Medium article, Cody Brown, who’s more bullish on robot athletics in the near term than I am, gives seven reasons for his enthusiasm. An excerpt:

3.) Top colleges fight over teenagers who win robotics competitions.

If you’re good at building a robot, chances are you have a knack for engineering, math, physics, and a litany of other skills top colleges drool over. This is exciting for anyone (at any age) but it’s especially relevant for students and parents deciding what is worth their investment.

There are already some schools that offer scholarships for e-sports. I wouldn’t be surprised if intercollegiate leagues were some of the first to pop up with traction.

5.) Rich people are amused by exceptional machines.

There is a reason that Rolex sponsors Le Mans. A relatively small number of people attend the race but it’s an elite mix of engineers and manufacturers. Many of the people who became multimillionaires in the past 20 years got it from The Internet or some relation to the tech industry. They want to spend their money on what amuses them/their friends, and robotics is a natural extension. Mark Zuckerberg recently gave one of the top drone racers in the world (Chapu) a shoutout on Facebook.•

Tags:

Uber’s claims that it’s good for Labor are nonsense, and the press conference in Harlem even used Eric Garner’s name to sell that hokum. The rideshare company is potentially good in some ways–consumer experience, challenging the taxi business’ racial profiling, being friendlier to the environment–but it beat Mayor de Blasio not because of those potentially good things but because it outmaneuvered him on a big lie. That’s worrisome.

Chris Smith of New York smartly sums up the gamesmanship:

De Blasio hadn’t been prepared for the onslaught. What was truly disorienting for him — and politically ominous — was that the roles had been scrambled. The mayor assumed he was the progressive defender of moral fairness and the little guy: Of course city government should regulate anyone trying to add 10,000 commercial vehicles to New York’s streets. Of course he needed to protect the rights of Uber’s “driver-partners.”

Yet Uber was able to deftly outflank de Blasio on his home turf, co-opting pieces of his message, splitting him from his normal Democratic allies, and drawing together an opposition constituency that could haunt de Blasio in 2017. To do so, Uber deployed a sophisticated, expensive political campaign waged by lobbyists and strategists trained in the regimes of Obama, Cuomo, and Bloomberg. That campaign worked in part because even though Uber the company is motivated by the pursuit of profit, not social justice, Uber the product has some genuinely progressive effects. So Uber went straight at the mayor’s minority base, drawing it into its vision of the modern New York. The company’s ad blitz highlighted how Uber’s drivers are mostly black and brown. It held a press conference at Sylvia’s, in Harlem, where the company basically accused the mayor of discriminating against minorities by daring to try to rein in its growth. It pushed data to reporters showing that Uber serves outer-borough neighborhoods that for years were shunned by yellow cabs.

In doing so, the company was able to dispel its aura of Bloomberg-era elitism. Newer services like UberPool, which allows drivers to pick up multiple passengers who split the fare, would ease congestion and make the city greener. Uber exploited its appeal to a youthful, techie, multiracial liberalism, selling itself as about openness and choice — a choice that was being stymied by old bureaucratic ways that have no business in the new city. This was a direct hit on de Blasio’s greatest vulnerability: the mayor’s seeming defense of an entrenched and hated yellow-taxi monopoly that’s been one of his most prolific campaign contributors.•

 

Tags:

The news business isn’t dead, just a tale of haves and have-nots like most of contemporary culture, and we may have reached an inflection point in that arrangement over the past year or so. From Jeff Bezos’ purchase of the Washington Post to Nikkei snapping up the Financial Times, those publications with a global name are being acquired and reformatted for a mobile world where smartphones are the main medium. 

The question for me remains whether the New York Times, the most valuable erstwhile newspaper in the U.S., will be able to go it alone as a “mom-and-pop” shop, or if they must also become part of a gigantic, diversified company. I’d hope for the former and bet the latter.

From Matthew Garrahan at the Financial Times:

What has convinced investors to take another look at news? “There has been a lot of disruption but there has never been a lack of consumer demand for quality news content,” says Jim Bankoff, chief executive of Vox Media.

Another clue can be found in the actions of Apple and Facebook, which have built news offerings. The technology companies have realised that, like photo sharing and music apps, news can attract and retain users of their mobile services. This year will be the first when smartphones are responsible for 50 per cent of news consumption, up from 25 per cent in 2012, according to Ken Doctor, an analyst with Newsonomics. The smartphone has become “the primary access point for many readers,” he says.

The news brands that have attracted the most interest are digital, mobile and global. For Nikkei, buying the FT gives it an opportunity to expand into new markets — particularly in Asia, says Mr Doctor, where markets such as South Korea, Indonesia and India are growing rapidly. The FT “gives Nikkei more weight and more smarts in how to compete,” he says.•

Tags: , ,

Elite-level athletes are born with all sorts of genetic advantages. Some are related to lungs and hearts, some to muscles and body type. Michael Phelps couldn’t have been better built in a laboratory for swimming, from leg length to wingspan. Usain Bolt, that wonder, has an innate biological edge to go along with cultural factors that benefit Jamaican runners. There’s no such thing as a level playing field.

So, I’m puzzled when a female competitor with a hormone level that’s naturally elevated into what’s considered male territory is held up to scrutiny. Apart from having to do with sexual characteristics, how is it any different?

The Indian sprinter Dutee Chand, who has high natural levels of testosterone, has thankfully been ruled eligible to compete despite protests. From John Branch at the New York Times:

The final appeals court for global sports on Monday further blurred the line separating male and female athletes, ruling that a common factor in distinguishing the sexes — the level of natural testosterone in an athlete’s body — is insufficient to bar some women from competing against females.

The Court of Arbitration for Sport, based in Switzerland, questioned the athletic advantage of naturally high levels of testosterone in women and immediately suspended the “hyperandrogenism regulation” of track and field’s governing body. It gave the governing organization, known as the I.A.A.F., two years to provide more persuasive scientific evidence linking “enhanced testosterone levels and improved athletic performance.”
 
The court was ruling on a case, involving the Indian sprinter Dutee Chand, that is the latest demonstration that biological gender is part of a spectrum, not a this-or-that definition easily divided for matters such as sport. It also leaves officials wondering how and where to set the boundaries between male and female competition.
 
The issue bemuses governing bodies and riles fans and athletes. Among those who testified in support of the I.A.A.F. policy was the British marathon runner Paula Radcliffe, who holds the event’s world record among women. According to the ruling, Radcliffe said that elevated testosterone levels “make the competition unequal in a way greater than simple natural talent and dedication.” She said that other top athletes shared her view.•

 

Tags: ,

Well, of course we shouldn’t engage in autonomous warfare, but what’s obvious now might not always seem so clear. What’s perfectly sensible today might seem painfully naive tomorrow.

I think humans create tools to use them, eventually. When electricity (or some other power source) is coursing through those objects, the tools almost become demanding of our attention. If you had asked the typical person 50 years ago–20 years ago?–whether they would be accepting of a surveillance state, the answer would have been a resounding “no.” But here we are. It just crept up on us. How creepy.

I still, however, am glad that Stephen Hawking, Steve Wozniak, Elon Musk and a thousand others engaged in science and technology have petitioned for a ban on AI warfare. It can’t hurt.

From Samuel Gibbs at the Guardian:

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.•

Tags: , , ,


A NASA-commissioned study suggests the costs of establishing and maintaining a moon colony could be reduced by 90% if it’s a multinational, public-private enterprise that utilizes robotics and reusable rockets, and if the moon’s resources are mined for fuel and other necessities. As Popular Science points out, the whole project pretty much rests on there being abundant hydrogen in the moon’s crust, or it’s a no-go.

Below is the study’s passage on robotics, how autonomous machines would aid in the mission and what residual benefits their development may deliver to Earth:

Establish Crew Outpost

Following completion of the ISRU production facility, and the arrival of the large reusable lunar lander, the site is ready for the delivery of habitats, and other infrastructure needed for the permanent crewed lunar base. The ELA is designed to launch a Bigelow BA-330 expandable habitat-sized system via either a Falcon Heavy or Vulcan LV to LEO, which is then transferred from LEO to low-lunar orbit (LLO) by leveraging in-space propellant transfer in LEO. The large reusable lunar lander will then rendezvous with the habitat, and other large modules, in LLO and transport them to the surface of the Moon. These modules would be moved by robotic systems from the designated landing areas to the crew habitation area selected during the scouting/prospecting operation. The modules could be positioned into lava tubes, which provide ready-made, natural protection against radiation and thermal extremes, if discovered at the lunar production site. Otherwise, the robotic systems will move regolith over the modules for protection. Additionally, the robotic systems will connect the modules to the communications and power plant at the site.

Human & Robot Interaction as a System:

Why are robotics critical?

The reasons that the process begins with robotics instead of beginning with ‘human-based’ operations like Apollo include:

1. Robotics offer much lower costs and risk than human operations, where they are effective, which is amplified in remote and hostile environments.

2. Robotic capabilities are rapidly advancing to a point where robotic assets can satisfactorily prospect for resources and also set up and prepare initial infrastructure prior to human arrival.

3. Robotics can be operated over a long period of time in performing the prospecting and buildup phases without being constrained by human consumables on the surface (food, water, air, CO2 scrubbing, etc.).

4. Robotics can not only be used to establish initial infrastructure prior to crew arrival, preparing the way for subsequent human operations, but to also repair and maintain infrastructure, and operate equipment after humans arrive.

Why do robots need humans to effectively operate a lunar base? Why can’t robotics “do it all”? Why do we even need to involve humans in this effort?

1. Some more complex tasks are better performed jointly by humans and robotics….or by humans themselves. This is an important area of research and testing.

2. Humans operate more effectively and quickly than robotic systems, and are much more flexible. Humans are able to make better-informed and timely judgments and decisions than robotic operations, and can flexibly adapt to uncertainty and new situations.

3. Robotic technology has not reached a point where robots can repair and maintain themselves. The robotic systems will need periodic as well as unscheduled maintenance and repair….provided by humans.

Public Benefits of Investments in Advanced Robotics

U.S. government investments in advanced technologies such as robotics will have tremendous impacts on American economic growth and innovation here on Earth. The investments just by DARPA in robotic technologies are having significant spill-over effects into many terrestrial applications and dual-use technologies. Examples of dual use technologies include:

a. Robotic systems performing connect/disconnect operations of umbilicals for fluid/propellant loading … could lead to automated refueling of aircraft, cars, launch vehicles, etc.

b. Robotic civil engineering: 3D printing of structures on the Moon with plumbing through industrial 3D printer robotics could lead to similar automated construction methods here on Earth.

c. Tunnel inspections: Robotic operations for inspecting lava tubes on the Moon could lead to advanced automation in mine shafts on Earth. Advances in autonomous navigation, imagery, and operations for dangerous locations and places could save many lives here on Earth.

d. Remote and intelligent inspection of unsafe structures from natural disasters (tsunamis, radiation leakage, floods, hurricanes) could enable many more operations by autonomous robotics where it is unsafe to send humans.•

We will be measured, and often we won’t measure up.

Connected technologies will not just assess us in the aftermath, but during processes, even before they begin. Data, we’re told, can predict who’ll falter or fail or even become a felon. Like standardized testing, algorithms may aim for meritocracy, but there are likely to be unwitting biases.

And then, of course, there’s the intrusiveness. Those of us who didn’t grow up in such a scenario won’t ever get used to it, and those who do won’t know any other way.

Such processes are being experimented with in the classroom. They’re meant to improve the experience of the student, but they’re fraught with all sorts of issues.

From Helen Warrell at the Financial Times:

A week after students begin their distance learning courses at the UK’s Open University this October, a computer program will have predicted their final grade. An algorithm monitoring how much the new recruits have read of their online textbooks, and how keenly they have engaged with web learning forums, will cross-reference this information against data on each person’s socio-economic background. It will identify those likely to founder and pinpoint when they will start struggling. Throughout the course, the university will know how hard students are working by continuing to scrutinise their online reading habits and test scores.

Behind the innovation is Peter Scott, a cognitive scientist whose “knowledge media institute” on the OU’s Milton Keynes campus is reminiscent of Q’s gadget laboratory in the James Bond films. His workspace is surrounded by robotic figurines and prototypes for new learning aids. But his real enthusiasm is for the use of data to improve a student’s experience. Scott, 53, who wears a vivid purple shirt with his suit, says retailers already analyse customer information in order to tempt buyers with future deals, and argues this is no different. “At a university, we can do some of those same things — not so much to sell our students something but to help them head in the right direction.”

Made possible by the increasing digitisation of education on to tablets and smartphones, such intensive surveillance is on the rise.•
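Out of curiosity, here’s a minimal sketch of what an early-warning score of this kind might look like. The features and weights are invented for illustration, and I’ve left out the socio-economic cross-referencing the FT describes; the OU’s actual model isn’t public.

```python
# A toy early-warning score built from the two signals the FT mentions:
# online reading progress and forum engagement. Weights are invented;
# the Open University's real model is not public.

from dataclasses import dataclass

@dataclass
class StudentWeek:
    pages_read_pct: float  # share of assigned online reading completed (0..1)
    forum_posts: int       # posts in web learning forums this week

def risk_score(week: StudentWeek) -> float:
    """Return a 0..1 score; higher means more likely to founder."""
    reading_risk = 1.0 - week.pages_read_pct
    forum_risk = 1.0 / (1 + week.forum_posts)
    return 0.7 * reading_risk + 0.3 * forum_risk

# A disengaged student gets flagged; an engaged one doesn't.
print(risk_score(StudentWeek(pages_read_pct=0.15, forum_posts=0)))  # ~0.90
print(risk_score(StudentWeek(pages_read_pct=0.90, forum_posts=5)))  # ~0.12
```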

Tags: ,

In one of his typically bright, observant posts, Nicholas Carr wryly tackles Amazon’s new scheme of paying Kindle Unlimited authors based on how many of their pages are read, a system which reduces the written word to a granular level of constant, non-demanding engagement. 

There’s an argument to be made that like systems have worked quite well in the past: Didn’t Charles Dickens publish under similar if not-as-precisely-quantified circumstances when turning out his serial novels? Sort of. Maybe not to the same minute degree, but he was usually only as good as his last paragraph (which, thankfully, was always pretty good).

The difference is that while it worked for Dickens, this process hasn’t been the motor behind most of the great writing in our history. James Joyce would not have survived very well on this nano scale. Neither would have Virginia Woolf, William Faulkner, Marcel Proust, etc. Their books aren’t just individual pages leafed together but a cumulative effect, a treasure that comes only to those who clear obstacles.

Shakespeare may have had to pander to the groundlings to pay the theater’s light bill, but what if the lights had been turned off mid-performance if he went more than a page without aiming for the bottom of the audience?

Carr’s opening:

When I first heard that Amazon was going to start paying its Kindle Unlimited authors according to the number of pages in their books that actually get read, I wondered whether there might be an opportunity for an intra-Amazon arbitrage scheme that would allow me to game the system and drain Jeff Bezos’s bank account. I thought I might be able to start publishing long books of computer-generated gibberish and then use Amazon’s Mechanical Turk service to pay Third World readers to scroll through the pages at a pace that would register each page as having been read. If I could pay the Turkers a fraction of a penny less to look at a page than Amazon paid me for the “read” page, I’d be able to get really rich and launch my own space exploration company.

Alas, I couldn’t make the numbers work. Amazon draws the royalties for the program from a fixed pool of funds, which serves to cap the upside for devious scribblers.

So much for my Mars vacation. Still, even in a zero-sum game that pits writer against writer, I figured I might be able to steal a few pennies from the pockets of my fellow authors. (I hate them all, anyway.) I would just need to do a better job of mastering the rules of the game, which Amazon was kind enough to lay out for me:

Under the new payment method, you’ll be paid for each page individual customers read of your book, the first time they read it. … To determine a book’s page count in a way that works across genres and devices, we’ve developed the Kindle Edition Normalized Page Count (KENPC). We calculate KENPC based on standard settings (e.g. font, line height, line spacing, etc.), and we’ll use KENPC to measure the number of pages customers read in your book, starting with the Start Reading Location (SRL) to the end of your book.

The first thing that has to be said is that if you’re a poet, you’re screwed.•
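Carr’s point about the fixed pool is worth making concrete. A toy model, with entirely hypothetical figures, shows why the arbitrage can’t work: fake reads only dilute the per-page rate everyone shares.

```python
# Toy model of a fixed royalty pool split across all pages read.
# All dollar figures and page counts are hypothetical.

def per_page_rate(pool_dollars, total_pages_read):
    return pool_dollars / total_pages_read

pool = 11_000_000             # hypothetical monthly fund
honest_pages = 2_000_000_000  # hypothetical pages read by real readers

print(f"base rate: ${per_page_rate(pool, honest_pages):.5f} per page")

# A schemer pays Turkers to "read" a billion pages of gibberish:
fake_pages = 1_000_000_000
rate = per_page_rate(pool, honest_pages + fake_pages)
income = rate * fake_pages
print(f"diluted rate: ${rate:.5f}; schemer grosses ${income:,.0f}")
# The more fake pages he adds, the lower the rate falls -- his take is
# capped by the pool, and he still has to pay the Turkers per page.
```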

 

Tags:

Not everyone can be a syphilitic prostitute or a foul-smelling arsonist, so some must settle for being Instagram comedians. 

I mean, there’s nothing wrong with Instagram or Vine or any other platform in the hands of someone who’s genuinely funny and creative, but what’s thought of as humor on these new channels is largely (and almost intentionally) witless, just topical bullshit that doesn’t require anything more than the barest sentience for the so-called jokes to be received.

Case in point: Josh Ostrovsky, whose main flaw should be that he dubbed himself “The Fat Jew” (or “The Fat Jewish”), but he’s also hampered by being resolutely lame in captioning photos unearthed in web searches and posting them to his Instagram account. His brutally unfunny shtick is as painful as it is popular, unless you think “White girl ordering Starbucks” riffs are hilarious. Like the Kardashians, he’s mastered form free of any worthwhile content, but the difference is Reality TV promises none while humor does (or at least should). In an FT piece by John Sunyer, Ostrovsky’s process, as it were, is revealed. An excerpt:

The Fat Jew explains how, each day, from an office he rents in the back of a nail salon in Queens, he and three interns search the bowels of the web for unusual, often slightly ridiculous images — so long as they haven’t already gone viral — to post to his followers.

Last month, celebrating the Supreme Court’s decision to approve same-sex marriage across the US, he posted a Photoshopped picture of rapper Kanye West kissing his double. Below it, the Fat Jew wrote: “Finally, Kanye can legally marry himself absolutely anywhere in this great nation.” The picture gained almost 238,000 likes.

Outlandish taste runs through his comedy — in one picture two pizza slices are positioned together so that they look like the Star of David, with the caption “This is my religion” typed below. He also mocks our dependence on the internet. Take the picture showing a message saying: “Home is where your WiFi connects automatically,” beneath which a caption reads: “Meaning not my parents’ house, where the WiFi password is RHXFGJIJ0000055$T.”

With this sort of content, which people seem unable to resist regramming or telling their friends about, the Fat Jew has found fame and financial success. Still, he says, it’s difficult to explain to “proper adults” what he does.

“Not getting all high and mighty about it but it’s more like performance art than comedy,” he says, draining his cocktail. “I won’t ever open a soup kitchen but what I do is the next best thing. Pictures are what I can give back to the world. A lot of people have steady careers, health insurance, a pay cheque at the end of the month, a wife and three kids. But that kind of life can get boring; sometimes you need to see a fat guy sitting in a giant bowl of chilli.”•

Tags: ,

In Jerry Kaplan’s excellent WSJ essay about ethical robots, which is adapted from his forthcoming book, Humans Need Not Apply, the author demonstrates it will be difficult to come up with consistent standards for our silicon sisters, and even if we do, machines following rules 100% of the time will not make for a perfect world. The opening:

As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?

Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.

How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?•

 

Tags:

Some things in America are certainly accelerating. That includes things technological (think how quickly autonomous vehicles have progressed since 2004’s “Debacle in the Desert“) and sociological (gay marriage becoming legal in the U.S. just seven years after every Presidential candidate opposed it). Our world just often seems to move more rapidly to a conclusion of one sort or another in a wired, connected world, though the end won’t always be good, and legislation will be more and more feckless in the face of the new normal.

Of course, this sort of progress sets up a trap, convincing many technologists that everything is possible in the near-term future. When I read how some think robot nannies will be caring for children within 15 years, stuff like that, I think perfectly well-intentioned people are getting ahead of themselves. 

In a Forbes piece, Steven Kotler certainly thinks acceleration is the word of the moment, though to his credit he acknowledges that the tantalizing future can be frustrating to realize. An excerpt:

You really have to stop and think about this for a moment. For the first time in history, the world’s leading experts on accelerating technology are consistently finding themselves too conservative in their predictions about the future of technology.

This is more than a little peculiar. It tells us that the accelerating change we’re seeing in the world is itself accelerating. And this tells us something deep and wild and important about the future that’s coming for us.

So important, in fact, that I asked Ken [Goffman of Mondo 2000] to write up his experience with this phenomenon. In his always lucid and always funny own words, here’s his take on the dizzying vertigo that is tomorrow showing up today:

In the early ’90s, the great science fiction author William Gibson famously remarked, “The Future is here. It’s just not very evenly distributed.” While this was a lovely bit of phraseology, it may have been a bit glib at the time. Nanotechnology was not even a commercial industry yet. The hype around virtual reality went bust. There were no seriously funded brain emulation projects pointing towards the eventuality of artificial general intelligence (now there are several). There were no longevity projects funded by major corporations (now we have the Google-funded Calico). You couldn’t play computer games with your brain. People weren’t winning track meets and mountain climbing on their prosthetic legs. Hell, you couldn’t even talk to your cell phone, if you were among the relatively few who had one.

Over the last few years, the tantalizing promises of radical technological changes that can alter humans and their circumstances have really started to come into their own. Truly, the future is now actually here, but still largely undistributed (never mind, evenly distributed).•

Tags: ,

The Japanese robot hotel that I’ve blogged about before (here and here) was the focus of a CBS This Morning report by Seth Doane. It’s sort of a curious piece. It focuses mostly on the novelty and the hotel’s lack of a human touch, with the correspondent stating that no one should worry about losing their job to a robot in the near future because there still are software issues to be worked out. Except that even before a total robotization of basic hotels, many positions handled currently by humans will be lost. The same goes for airports, hospitals, warehouses and restaurants.

On the face of it, what a wonderful thing if AI could handle all the drudgery and bullshit jobs, even if it could take some quality jobs and do much better work than we do. Of course, the problem is, economies in America and many other states aren’t arranged for that radical reorganization. It’s a serious cultural problem and a political one.

Tags:

I’m not in favor of capping Uber in NYC, since I don’t think it makes much economic sense to suppress innovation, but as I’ve said many times before, we should be honest about what ridesharing, and more broadly the Gig Economy, truly is: good for the consumer and convenience and the environment and really bad for Labor.

Not only will ridesharing obliterate taxi jobs that guarantee a minimum wage, but Travis Kalanick’s outfit has no interest in treating its drivers well or even in retaining them in the longer term and is only intent on using workers–even exploiting military vets–for publicity purposes. Yet community leaders and politicians, including New York’s Governor Cuomo, either keep buying this nonsense or are being dishonest.

From Glenn Blain in the New York Daily News:

ALBANY — Gov. Cuomo is siding with Uber in its battle with Mayor de Blasio.

In the latest eruption of bad blood between Cuomo and his one-time frenemy, the governor hailed the ride-sharing company as a job producer and scoffed at a City Council bill — backed by the mayor — that would cap Uber’s growth.

“Uber is one of these great inventions,” Cuomo said during an interview on public radio’s “The Capitol Pressroom” Wednesday.

“It is taking off like fire through dry grass and it’s offering a great service for people and it’s giving people jobs,” Cuomo said. “I don’t think government should be in the business of trying to restrict job growth.”•

Tags: ,

In an attempt to rename cars in the potential driverless future, technologists and pundits have offered numerous alternatives: robocars, driverless cars, autonomous cars. Funny thing is, the term “auto” (“by oneself or spontaneous” and “by itself or automatic”), one we already use, would be particularly apt. The word doesn’t need to change–the definition will.

In a similar vein, Adrienne LaFrance of the Atlantic has written a smart piece about the word “computer,” which is entering its 3.0 stage, an era that may call back in some ways to the original definition. An excerpt:

Now, leading computer scientists and technologists say the definition of “computer” is again changing. The topic came up repeatedly at a brain-inspired computing panel I moderated at the U.S. Capitol last week. The panelists—neuroscientists, computer scientists, engineers, and academics—agreed: We have reached a profound moment of convergence and acceleration in computing technology, one that will reverberate in the way we talk about computers, and specifically with regard to the word “computer,” from now on.

“It’s like the move from simple adding machines to automated computing,” said James Brase, the deputy associate director for data science at Lawrence Livermore National Laboratory. “Because we’re making an architectural change, not just a technology change. The new kinds of capabilities—it won’t be a linear scale—this will be a major leap.”

The architectural change he’s talking about has to do with efforts to build a computer that can act—and, crucially, learn—the way a human brain does. Which means focusing on capabilities like pattern recognition and juiced-up processing power—building machines that can perceive their surroundings by using sensors, as well as distill meaning from deep oceans of data. “We are at a time where what a computer means will be redefined. These words change. A ‘computer,’ to my grandchildren, will be definitely different,” said Vijay Narayanan, a professor of computer science at Pennsylvania State University.•

Tags: , ,

I would think there’s life of some sort out there somewhere beyond the little rock we call Earth. It’s never visited us here, but perhaps someday, in one way or another, we’ll make contact. At that point, how do we proceed?

In a Guardian article, Roger Pielke Jr. fears we’ll have to wing it, having done so little thinking about the possibility. An excerpt:

Earlier this week Stephen Hawking, theoretical physicist and modern rock star scientist, along with Russian billionaire Yuri Milner announced that they would be launching a new project to boost efforts to look for intelligent life outside our solar system. Milner, who is funding the effort, said that he had been “thinking about this since I was a child, reading Carl Sagan’s book.”

Upon hearing of the new project, called Breakthrough Listen, I was reminded, of all things, of a recent prison break. Last month two convicted murderers escaped from a New York prison. They had spent months carefully planning and executing their escape, which involved cutting and digging their way through walls, pipes and concrete. Remarkably, however, the pair gave little thought to what they would do if they actually succeeded in their plans. The consequence of the lack of planning was a short effort to flee from authorities followed by the death of one prisoner and re-capture of the other by authorities.

The search for extra-terrestrial life shares some similarities. We are investing considerable attention and resources into the search, but little into thinking about the consequences of success. As Carl Sagan imagined, it is as if we expect to fail, which would be a relief. Even Milner says, “It’s quite likely that we won’t find anything.” But what if we do succeed? What then? …

In fact, it seems unlikely that any policy makers in national or international settings have a clearly thought–through plan for responding to the discovery of extraterrestrial life, whether that be microbes on another body in our solar system or beady-eyed aliens looking to invade. The conversation is only silly if we assume that efforts to detect alien life will never succeed.•

Tags:

Julian Assange, the alleged Bill Cosby of Wikileaks, can be a preposterous blowhard, but that doesn’t mean he hasn’t been a useful part of the discussion about surveillance. At Spiegel, Michael Sontheimer has a new longform Q&A with Assange, in which they discuss what might be called Wikileaks 2.0, as well as the “digital colonization of the world” by Silicon Valley powerhouses, this era’s analog of America’s twentieth-century cultural exportation of Hollywood and hamburgers.

An excerpt:

Spiegel:

You met Eric Schmidt, the CEO of Google. Do you think he is a dangerous man?

Julian Assange:

If you ask “Does Google collect more information than the National Security Agency?” the answer is “no,” because NSA also collects information from Google. The same applies to Facebook and other Silicon Valley-based companies. They still collect a lot of information and they are using a new economic model which academics call “surveillance capitalism.” General information about individuals is worth little, but when you group together a billion individuals, it becomes strategic like an oil or gas pipeline.

Spiegel:

Secret services are perceived as potential criminals but the big IT corporations are perceived at least in an ambiguous way. Apple produces beautiful computers. Google is a useful search engine.

Julian Assange:

Until the 1980s, computers were big machines designed for the military or scientists, but then the personal computers were developed and companies had to start rebranding them as machines that were helpful for individual human beings. Organizations like Google, whose business model is “voluntary” mass surveillance, appear to be giving it away for free. Free e-mail, free search, etc. Therefore it seems that they’re not a corporation, because corporations don’t do things for free. It falsely seems like they are part of civil society.

Spiegel:

And they shape the thinking of billions of users?

Julian Assange:

They are also exporting a specific mindset of culture. You can use the old term of “cultural imperialism” or call it the “Disneylandization” of the Internet. Maybe “digital colonization” is the best terminology.

Spiegel:

What does this “colonization” look like?

Julian Assange:

These corporations establish new societal rules about what activities are permitted and what information can be transmitted. Right down to how much nipple you can show. Down to really basic matters, which are normally a function of public debate and parliaments making laws. Once something becomes sufficiently controversial, it’s banned by these organizations. Or, even if it is not so controversial, but it affects the interests that they’re close to, then it’s banned or partially banned or just not promoted.

Spiegel:

So in the long run, cultural diversity is endangered?

Julian Assange:

The long-term effect is a tendency towards conformity, because controversy is eliminated. An American mindset is being fostered and spread to the rest of the world because they find this mindset to be uncontroversial among themselves. That is literally a type of digital colonialism; non-US cultures are being colonized by a mindset of what is tolerable to the staff and investors of a few Silicon Valley companies. The cultural standard of what is a taboo and what is not becomes a US standard, where US exceptionalism is uncontroversial.•

Tags: ,

Apart from E.L. Doctorow, no one was able to conjure the late Harry Houdini, not even his widow.

But she certainly tried. A famed debunker of spiritualists, Houdini made a pact with his wife, Bess, that if the dead could speak to the living, he would deliver to her a special coded message from the beyond. Nobody but the two knew what the special message was. When a poorly received punch to the abdomen in 1926 made it impossible for the entertainer to escape death, his widow annually attempted to contact him through a seance. No words were reportedly ever exchanged. The following are a couple of Brooklyn Daily Eagle articles about the wife’s attempts to continue the marital conversation.

_________________________

From April 24, 1936:

From February 12, 1943:

Tags: ,

If Brad Darrach hadn’t also profiled Bobby Fischer, the autonomous robot named Shakey would have been the most famous malfunctioning machine he ever wrote about. 

John Markoff’s Harper’s piece, which I posted about earlier, made mention of Shakey, the so-called “first electronic person,” which struggled to take baby steps on its own during the 1960s at the Stanford Research Institute. The machine’s intelligence was glacial, and much to the chagrin of its creators, it did not show rapid progress in the years that followed, as Moore’s Law forgot to do its magic.

Although Darrach’s 1970 Life piece mostly focuses on the Palo Alto area, it ventured to the other coast to allegedly record this extravagantly wrong prediction from MIT genius Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.” The thing is, Minsky immediately and vehemently denied the quote, and since the veracity of other parts of the piece was also questioned, I believe his disavowal.

The Life article (which misspelled the robot’s name as “Shaky”), for its many flaws and ethical lapses, did sagely acknowledge the potential for a post-work world and the advent of superintelligence and the challenges those developments might bring. The opening:

It looked at first glance like a Good Humor wagon sadly in need of a spring paint job. But instead of a tinkly little bell on top of its box-shaped body there was this big mechanical whangdoodle that came rearing up, full of lenses and cables, like a junk sculpture gargoyle.

“Meet Shaky,” said the young scientist who was showing me through the Stanford Research Institute. “The first electronic person.”

I looked for a twinkle in the scientist’s eye. There wasn’t any. Sober as an equation, he sat down at an input terminal and typed out a terse instruction which was fed into Shaky’s “brain,” a computer set up in a nearby room: PUSH THE BLOCK OFF THE PLATFORM.

Something inside Shaky began to hum. A large glass prism shaped like a thick slice of pie and set in the middle of what passed for his face spun faster and faster till it dissolved into a glare; then his superstructure made a slow 360-degree turn and his face leaned forward and seemed to be staring at the floor. As the hum rose to a whir, Shaky rolled slowly out of the room, rotated his superstructure again and turned left down the corridor at about four miles an hour, still staring at the floor.

“Guides himself by watching the baseboards,” the scientist explained as he hurried to keep up. At every open door Shaky stopped, turned his head, inspected the room, turned away and idled on to the next open door. In the fourth room he saw what he was looking for: a platform one foot high and eight feet long with a large wooden block sitting on it. He went in, then stopped short in the middle of the room and stared for about five seconds at the platform. I stared at it too.

“He’ll never make it,” I found myself thinking. “His wheels are too small.” All at once I got gooseflesh. “Shaky,” I realized, “is thinking the same thing I am thinking!”

Shaky was also thinking faster. He rotated his head slowly till his eye came to rest on a wide shallow ramp that was lying on the floor on the other side of the room. Whirring briskly, he crossed to the ramp, semicircled it and then pushed it straight across the floor till the high end of the ramp hit the platform. Rolling back a few feet, he cased the situation again and discovered that only one corner of the ramp was touching the platform. Rolling quickly to the far side of the ramp, he nudged it till the gap closed. Then he swung around, charged up the slope, located the block and gently pushed it off the platform.

Compared to the glamorous electronic elves who trundle across television screens, Shaky may not seem like much. No death-ray eyes, no secret transistorized lust for nubile lab technicians. But in fact he is a historic achievement. The task I saw him perform would tax the talents of a lively 4-year-old child, and the men who over the last two years have headed up the Shaky project—Charles Rosen, Nils Nilsson and Bert Raphael—say he is capable of far more sophisticated routines. Armed with the right devices and programmed in advance with basic instructions, Shaky could travel about the moon for months at a time and, without a single beep of direction from the earth, could gather rocks, drill cores, make surveys and photographs and even decide to lay plank bridges over crevices he had made up his mind to cross.

The center of all this intricate activity is Shaky’s “brain,” a remarkably programmed computer with a capacity of more than 1 million “bits” of information. In defiance of the soothing conventional view that the computer is just a glorified abacus that cannot possibly challenge the human monopoly of reason, Shaky’s brain demonstrates that machines can think. Variously defined, thinking includes processes such as “exercising the powers of judgment” and “reflecting for the purpose of reaching a conclusion.” In some of these respects—among them powers of recall and mathematical agility—Shaky’s brain can think better than the human mind.

Marvin Minsky of MIT’s Project Mac, a 42-year-old polymath who has made major contributions to Artificial Intelligence, recently told me with quiet certitude, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”

I had to smile at my instant credulity—the nervous sort of smile that comes when you realize you’ve been taken in by a clever piece of science fiction. When I checked Minsky’s prophecy with other people working on Artificial Intelligence, however, many of them said that Minsky’s timetable might be somewhat wishful—“give us 15 years,” was a common remark—but all agreed that there would be such a machine and that it could precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in science, education and the arts. At the same time a number of computer scientists fear that the godsend may become a Golem. “Man’s limited mind,” says Minsky, “may not be able to control such immense mentalities.”•

Tags: , , ,

Cloned sheep once gave humans nightmares, but the science has quietly insinuated itself into the world of polo, thanks to star Adolfo Cambiaso, who impetuously saved cells from his best stallion, Aiken Cura, as the horse was being prepared for euthanasia, its leg ruined. There are now dozens of cloned versions of champion polo ponies, some of whom are competing on the field of play. 

In a really smart Vanity Fair article, Haley Cohen explores how cloning science, which dates back to sea-urchin experiments in 1885, came to the sport of mounts and mallets. Oddly, it involves Imelda Marcos. And, yes, there is discussion about using the same methods to clone humans. An excerpt:

As the pair made their way toward Cambiaso’s stabling area, the exhausted Aiken Cura’s front left leg suddenly gave out. When Cambiaso felt the horse begin to limp beneath him, he leapt out of his saddle and threw his blue-and-white helmet to the ground in anguish.

“Save this one whatever it takes!” he pleaded, covering his face with his gloves. But the leg had to be amputated below the knee, and eventually Cambiaso—whose team won the Palermo Open that year and would go on to win the tournament another five times—was forced to euthanize his beloved Cura.

Before he said his final good-bye, however, he had a curious request: he asked a veterinarian to make a small puncture in the stallion’s neck, put the resulting skin sample into a deep freeze, and store it in a Buenos Aires laboratory. He remembers, “I just thought maybe, someday, I could do something with the cells.”

His hope was not in vain. With the saved skin sample, Cambiaso was able to use cloning technology to bring Aiken Cura back to life. These days, a four-year-old, identical replica of Cambiaso’s star stallion—called Aiken Cura E01—cavorts around a flower-rimmed field in the Argentinean province of Córdoba, where he has begun to breed and train for competition.

Now 40 years old, Cambiaso is ruggedly handsome, with long brown hair, covetable bone structure, and permanent stubble. But in spite of his athleticism, good looks, and wealth, he is surprisingly shy. Walking across the Palermo polo field, where he’s come to watch his oldest daughter play, he speaks in short spurts, as if he would rather not be talking to a stranger. Staring into the distance, he says, “Today, seeing these clones is more normal for me. But seeing Cura alive again after so many years was really strange. It’s still strange. Thank goodness I saved his cells.”•

 

Tags:

Harper’s has published an excerpt from John Markoff’s forthcoming book, Machines of Loving Grace, one that concerns the parallel efforts of technologists who wish to use computing power to augment human intelligence and those who hope to create genuinely intelligent machines that have no particular stake in the condition of carbon-based life.

A passage:

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”•

Tags:

The Internet of Things has the potential for more good and more harm than the regular Internet because it carries quantification, and chaos, into the physical world. The largest experiment in anarchy in history will be unloosed in the 3D world, inside our homes and cars and bodies, and sensors will, for better or worse, measure everything. That would be enough of a challenge, but there’s also the specter of hackers and viruses.

A small piece from the new Economist report about IoT security concerns:

Modern cars are becoming like computers with wheels. Diabetics wear computerised insulin pumps that can instantly relay their vital signs to their doctors. Smart thermostats learn their owners’ habits, and warm and chill houses accordingly. And all are connected to the internet, to the benefit of humanity.

But the original internet brought disbenefits, too, as people used it to spread viruses, worms and malware of all sorts. Suppose, sceptics now worry, cars were taken over and crashed deliberately, diabetic patients were murdered by having their pumps disabled remotely, or people were burgled by thieves who knew, from the pattern of their energy use, when they had left their houses empty. An insecure internet of things might bring dystopia.  

Networking opportunities

All this may sound improbably apocalyptic. But hackers and security researchers have already shown it is possible.•

An uncommonly thoughtful technology entrepreneur, Vivek Wadhwa doesn’t focus solely on the benefits of disruption but also on its costs. He believes we’re headed for a jobless future and has debated the point with Marc Andreessen, who thinks such worries are so much needless hand-wringing.

Here’s the most important distinction: If time proves Wadhwa wrong, his due diligence in the matter will not have hurt anyone. But if Andreessen is incorrect, his carefree manner will seem particularly ugly.

No one need suggest we inhibit progress, but we had better have political solutions ready should entrenched technological unemployment become the new normal. Somehow we’ll have to work our way through the dissonance of a largely free-market economy meeting a highly automated one.

In a new Washington Post piece on the topic, Wadhwa considers some solutions, including the Carlos Slim idea of a three-day workweek and the oft-suggested universal basic income. The opening:

“There are more net jobs in the world today than ever before, after hundreds of years of technological innovation and hundreds of years of people predicting the death of work.  The logic on this topic is crystal clear.  Because of that, the contrary view is necessarily religious in nature, and, as we all know, there’s no point in arguing about religion.”

These are the words of tech mogul Marc Andreessen, in an e-mail exchange with me on the effect of advancing technologies on employment. Andreessen steadfastly believes that the same exponential curve that is enabling creation of an era of abundance will create new jobs faster and more broadly than before, and calls my assertions that we are heading into a jobless future a luddite fallacy.

I wish he were right, but he isn’t. And it isn’t a religious debate; it’s a matter of public policy and preparedness. With the technology advances that are presently on the horizon, not only low-skilled jobs are at risk; so are the jobs of knowledge workers. Too much is happening too fast. It will shake up entire industries and eliminate professions. Some new jobs will surely be created, but they will be few. And we won’t be able to retrain the people who lose their jobs, because, as I said to Andreessen, you can train an Andreessen to drive a cab, but you can’t retrain a laid-off cab driver to become an Andreessen.  The jobs that will be created will require very specialized skills and higher levels of education — which most people don’t have.

I am optimistic about the future and know that technology will provide society with many benefits. I also realize that millions will face permanent unemployment.•

Tags:
