My default position on whistleblowers is that I don’t want them imprisoned, since it’s a more dangerous world without them. I feel that way about Edward Snowden, as sloppy as he might have been at times in his mission. I still wonder, however, if much will truly come of his daring.

In his New York Times Op-Ed piece, Snowden declares the world has said “no” to surveillance, but I’m not so sure. Spying has been with us for as long as there’s been information, and governments have always been involved, using the Cold War or the War on Terror or any other conflict as an excuse. And I truly wonder how any legislation is going to keep up with technology. That one isn’t really a fair race. Perhaps even more difficult to control than government will be huge corporations, which don’t even have to break the law–just offer us a Faustian bargain–to get at what’s inside our brains. The Internet of Things will make it very easy for everyone to be tracked at all times. 

At Reddit, Mattathias Schwartz of the New Yorker conducted a predictably smart Ask Me Anything about Snowden. A few exchanges follow.

_________________________

Question:

Why are whistleblowers treated like criminals?

Mattathias Schwartz:

Well, if whistleblowers come from within the US intelligence community (FBI, CIA, NSA, DEA, DIA, etc.) they usually are criminals, under current federal laws against the release of classified information. A lot of recent protections that apply to whistleblowers do not apply to those who work within the intelligence community. The Washington Post did a good story on this, and how it applies to Snowden’s case, here …http://www.washingtonpost.com/blogs/fact-checker/wp/2014/03/12/edward-snowdens-claim-that-as-a-contractor-he-had-no-proper-channels-for-protection-as-a-whistleblower/ … and my colleague Jane Mayer wrote about Thomas Drake, an NSA employee who attempted to blow the whistle, here …http://www.newyorker.com/magazine/2011/05/23/the-secret-sharer.

_____________________

Question:

Can you speculate why there seems to be such apathy (after an initial and momentary outrage at the water cooler) on the part of the American public regarding the erosion of our civil liberties? A case in point would be the NSA spying and data-collection efforts, where change has essentially squeaked by with little continued pressure from constituents since the revelations.

Mattathias Schwartz:

I think it’s too soon to say whether there’s been a sea change or not. It’s all still in play. To see where it winds up, you’d have to track what happens with the PCLOB’s ongoing inquiry into Executive Order 12333, and you’d have to see whether the Supreme Court decides to weigh in on Fourth Amendment / NSA / surveillance issues. It already started to with United States v. Jones and if I had to guess, I would say that there will be more action to come on that front.

_____________________

Question:

Do you worry about the future of well-funded investigative journalism now that we’re living in the era of free internet news?

Mattathias Schwartz:

Yes! There is a lot to worry about here. Part of me thinks that Big Tech should be funding this stuff–they certainly make plenty of money on journalism that they don’t pay for!–but that’s a troubled notion if you look at how much money Big Tech spends to influence policy in Washington. Reporting is expensive and the number of institutions willing to put up the money to do it is shrinking–I am lucky to be working with the New Yorker, which is one of them. I can foresee three models under which this kind of work can continue. There’s the opera model, which depends on patronage and a small, influential, highbrow audience. There’s the model that the Swiss watch companies found after the invention of the quartz watch, shifting away from mass-market utility and towards luxury, which isn’t so different from the opera model, actually. And then you’ve got the Snowden model, where a private individual takes it upon themselves to speak out in those places where they feel that investigative journalists, and politicians, have failed to do so. There will always be people who want this information, and over time, supply will keep pace with demand, especially when you can cram so much supply onto a USB stick.

_________________________

Question:

I thought the Snowden op-ed in the NYT today was a little strange, but can’t tell if it’s my own cynicism. Seemed like a politician’s intervention–lots of spin. What did you think?

Mattathias Schwartz:

Hi K, Yeah it is “spin” in the sense that he is obviously trying to influence the Beltway conversation, but I enjoyed the writing and I was glad to see ES speaking in his own voice finally, as opposed to speaking through his collaborators, or through documents. Looking at it from the outside, my sense is that he wants to come home. And I liked the bit about a “post-terror generation.”

_________________________

Question:

Do you think he’ll ever be able to come home without spending a long time in federal prison?

Mattathias Schwartz:

It’s hard to predict the future but I wouldn’t be surprised if there were a trial on some limited set of charges, which would give Snowden a public platform at the risk of a limited jail term, if he were found guilty. Snowden has already said that he’s willing to come home if he can be guaranteed a fair trial. And I can’t imagine that the US government would want a guy who knows so much staying in Russia for the long term.•

Lowlife Culture (Williamsburg)

Seeking any and all Con men/women, scam artists, pickpockets, thieves, and anyone else embedded in lowlife culture to tell their story. No story will be considered too troubled, no atmosphere too strange. Genuine characters of all kinds welcome.

 

John Sculley is, similar to the late Steve Jobs, a creative person (Architecture at Brown, Art at RISD) with a mind for business, so it’s sad he’s usually depicted so reductively in the popular narrative of Apple. Even Steve Wozniak has pushed back at this crude portrait. The Newton was a great idea, and things might have been different had it not been roughly a decade ahead of the technology.

Two passages follow from a 1993 People article by Craig Horowitz at the end of Sculley’s run atop the company.

___________________________

If Sculley is running on pure adrenaline these days, who can blame him? At a time when most personal-computer makers—including the behemoth IBM—are struggling for survival, Apple posted record revenues last year (more than $7 billion). The PowerBook, its notebook computer introduced in 1992, quickly became the best-selling computer on the market and a must-have accessory for the trendy. But for Sculley, selling computers is only the beginning. A Renaissance man who once had his sights set on a career in architecture and design, Sculley wants to change the world. Or at least America.

“Our resources are no longer coal and iron ore and things that come out of the ground,” he says in his sparsely furnished office at Apple headquarters in Silicon Valley. “The strategic resources are things that come out of people’s minds.” Sculley has campaigned vigorously for fundamental changes in education, job training and the economy to meet the high-tech future. He is also fighting for the construction of a technology infrastructure—a nationwide “data superhighway”—that would transmit vast quantities of information quickly and help create jobs.

It is this vision of the future that transformed Sculley, a lifelong Republican, into an avid Clinton supporter. George Bush “is a very nice person, but he obviously had no interest in anything we [in the high-tech industry] were talking about,” says Sculley. “Remember, this was the President who was amazed by the scanner in a supermarket.” Sculley, who had met the Clintons while Bill was still a Governor, helped mobilize the business community for the Democratic ticket and has become a valued adviser to both Bill and Hillary. “John was instrumental in the development of the President’s overall economic plan,” says White House Chief of Staff Thomas McLarty. The Clintons, Sculley says, “are the kind of people I’m attracted to. They’re builders.”

___________________________

By 1982, Sculley was one of three hand-picked contenders to replace Kendall as chairman of PepsiCo when he retired. It was then that Sculley was unexpectedly offered the top spot at Apple. He knew little about computers, but was attractive for his management skills. Though Apple was only five years old—having moved at warp speed from a company located in the garage of its founders, Steve Wozniak and Steve Jobs, to the FORTUNE 500—the opportunity captured Sculley’s imagination. So did the now famous pitch made by Jobs, then 28, a vegetarian and a college dropout. “Do you want to spend the rest of your life selling sugar water?” Jobs asked Sculley. “Or do you want a chance to change the world?”

Choosing the latter required an adjustment of epic proportions for Sculley, who had to go from pinstripes to corduroy and from formal meetings in the august PepsiCo boardroom to brainstorming in Apple’s funky cubicles. Perhaps most significant, he went from being a respected leader in his industry to an object of suspicion that he was just another empty suit. Sculley quickly proved otherwise when, less than two years into his tenure at Apple, the bottom fell out of the computer market. Sculley had to make tough decisions—decisions Steve Jobs couldn’t live with. “Everyone says I took his company away from him,” Sculley says, “but I told him he could have it back. I didn’t like the direction it was going. It was ultimately the board of directors who made the decision.” Jobs, who quit and started a computer company called Next, refuses to comment.

To get Apple back on track, Sculley, who is known for such openhanded policies as on-site childcare, profit sharing and employee sabbaticals, cut staff, closed factories and reorganized the company. When a second crisis occurred in 1990, he reduced his own $2.2 million salary by a third and took over the reins as technical chief. In that role, he has now bet the future of the company on a new group of products called the Newton, scheduled for introduction later this year. The small, hand-held devices perform a wide variety of office functions—as typewriters, calculators, calendars, faxes, modems, telephones and radios. “If it works out well, it will be great,” he says. “If it doesn’t work out, I guess I’ll be looking for a new job.” With Sculley’s history, no one is betting against him.•

 


At Nature, a quartet of researchers write of their concerns as Artificial Intelligence matures, worrying about our robot brethren contributing to warmongering and income inequality. In “Take a Stand on AI Weapons,” Berkeley computer-science professor Stuart Russell focuses on the former, questioning the wisdom of lethal autonomous weapons systems. An excerpt:

The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).

Technologies have reached a point at which the deployment of such systems is — practically if not legally — feasible within years, not decades. The stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms.  

Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. …

In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.•


Predicting things will fall apart is easy, but predicting when is hard. Chris Hedges, author of Wages of Rebellion: The Moral Imperative of Revolt, thinks we’ve already entered the collapse phase.

In the U.S., we have myriad problems that increasingly seem unfixable from the inside: gerrymandering, Citizens United, corporatocracy, institutionalized racism, income inequality. You know, gerrymandering might be the most frustrating of them all, since you can’t remedy the rest with entrenched leadership.

Elias Isquith of Salon just interviewed Hedges, who asserts that a revolutionary movement in America would have global stakes. “When we go down, the whole planet is going to go with us,” the author says, making clear what he believes would happen if our society runs aground.

The Q&A’s opening exchange:

Question:

Do you think we are in a revolutionary era now? Or is it more something on the horizon?

Chris Hedges:

It’s with us already, but with this caveat: it is what Gramsci calls interregnum, this period where the ideas that buttress the old ruling elite no longer hold sway, but we haven’t articulated something to take its place.

That’s what that essay I quote by Alexander Berkman, “The Invisible Revolution,” talks about. He likens it to a pot that’s beginning to boil. So it’s already taking place, although it’s subterranean. And the facade of power — both the physical facade of power and the ideological facade of power — appears to remain intact. But it has less and less credibility.

There are all sorts of neutral indicators that show that. Low voter turnout, the fact that Congress has an approval rating of 7 percent, that polls continually reflect a kind of pessimism about where we are going, that many of the major systems that have been set in place — especially in terms of internal security — have no popularity at all.

All of these are indicators that something is seriously wrong, that the government is no longer responding to the most basic concerns, needs, and rights of the citizenry. That is [true for the] left and right. But what’s going to take its place, that has not been articulated. Yes, we are in a revolutionary moment; but maybe a better way to describe it is as a revolutionary process.•


During the nascent days of aviation, scads of hobbyists tried their hand at building flying machines, hoping to become the first to solve flight. It was inexpensive enough that the lone inventor could dream. In fact, a pair of bicycle manufacturers named Wilbur and Orville Wright managed the feat. Later, tens of thousands of small businesspeople attempted to create the first successful commercial-airplane company. 

When flight began to point to the stratosphere, however, the costs were too dear for the individual, and the race would have to be run among governments. That’s really only changed this century, as technologists with the wealth of small nations have used money gained in other industries to enter the Space Race. Perhaps 3-D manufacturing will eventually make it possible for smaller-scale operations to compete.

In 1930, one hopeful was unbowed by a lack of funds, scientific facts, and, it would seem, basic common sense. Robert J. McLaughlin of New York craved “wealth, health, and glorious adventure,” so he planned to fly to the moon and live there. A report of his ambitions follows from the May 26, 1930 Brooklyn Daily Eagle.


Ray Kurzweil, that brilliant guy, has been correct in many of his technological predictions and very wrong in others. An example of the latter: 2009 came and went and computers hadn’t disappeared because information was being written directly onto our retinae by special glasses. 

The technologist now prognosticates that in fifteen years, our brains will be connected to the cloud, able to call upon any part of its vast (and growing) trove of information. From Anthony Cuthbertson at International Business Times:

Artificial intelligence pioneer Ray Kurzweil has predicted that within 15 years technology will exist that will allow human brains to be connected directly to the internet.

Speaking at the Exponential Finance conference in New York on Wednesday (3 June), Kurzweil hypothesised that nanobots made from DNA strands could be used to transform humans into hybrids.

“Our thinking then will be a hybrid of biological and non-biological thinking,” Kurzweil said. “We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human – we transcend our limitations.”

“We’ll be able to extend (our limitations) and think in the cloud. We’re going to put gateways to the cloud in our brains.”

Connecting brains to the internet or a cloud computer network will allow for advanced thinking, Kurzweil predicts, and by the late 2030s human thought could be predominantly non-biological.•


I’m in favor of sensible gun-control regulations, but soon enough controlling guns will be an antique idea. My initial reaction to 3-D printers was that they’ll be used to make untraceable firearms (or, worse, biological weapons), especially when it’s easy for a 3-D printer to make other 3-D printers, which will make other 3-D printers. A scary parallel to our concerns about government using technology to intrude further into our lives is that many dangerous things in our decentralized world are moving beyond its reach.

The 3-D printer is still at a crude stage, but Andy Greenberg has written an article for Wired about using a Ghost Gunner machine to make an untraceable AR-15 in his office. The opening:

THIS IS MY ghost gun. To quote the rifleman’s creed, there are many like it, but this one is mine. It’s called a “ghost gun”—a term popularized by gun control advocates but increasingly adopted by gun lovers too—because it’s an untraceable semiautomatic rifle with no serial number, existing beyond law enforcement’s knowledge and control. And if I feel a strangely personal connection to this lethal, libertarian weapon, it’s because I made it myself, in a back room of Wired’s downtown San Francisco office on a cloudy afternoon.

I did this mostly alone. I have virtually no technical understanding of firearms and a Cro-Magnon man’s mastery of power tools. Still, I made a fully metal, functional, and accurate AR-15. To be specific, I made the rifle’s lower receiver; that’s the body of the gun, the only part that US law defines and regulates as a “firearm.” All I needed for my entirely legal DIY gunsmithing project was about six hours, a 12-year-old’s understanding of computer software, an $80 chunk of aluminum, and a nearly featureless black 1-cubic-foot desktop milling machine called the Ghost Gunner.

The Ghost Gunner is a $1,500 computer-numerical-controlled (CNC) mill sold by Defense Distributed, the gun access advocacy group that gained notoriety in 2012 and 2013 when it began creating 3-D-printed gun parts and the Liberator, the world’s first fully 3-D-printed pistol. While the political controversy surrounding the notion of a lethal plastic weapon that anyone can download and print has waxed and waned, Defense Distributed’s DIY gun-making has advanced from plastic to metal.•


In a Slate piece, David Auerbach asks whether Rand Paul or some other Republican could siphon votes from left-leaning technologists concerned about civil liberties. That could happen, although the GOP would need to reposition itself significantly for such a candidate to triumph in the Republican Primary.

I will take issue with Auerbach’s depiction of former Democratic candidate Howard Dean, the clear antecedent to the Paul brand. The author writes this of the former Vermont governor: “…he ended up impacting the Democratic agenda (anti-war, expanded health care, gay rights) for years.” Dean had an impact on the American electoral process for sure, realizing before anyone that grassroots organizing and social media could be wedded to buoy a largely unknown candidate, but I don’t believe he influenced the Democratic agenda at all. Only the Iraq War becoming a boondoggle made Democrats turn against it; universal health care was on the docket long before 2004; and neither of the main 2008 contenders supported gay marriage (Hillary Clinton opposed it and Barack Obama was purposely vague). The tide turned on that issue for the party only because voters and polls moved leftward.

Auerbach’s opening:

Every time Kentucky Sen. Rand Paul assails mass government surveillance on the floor of the Senate, it is surprising to see who emerges to praise the erratically unorthodox Republican, from Edward Snowden to Glenn Greenwald to true-blue progressive reporter Marcy Wheeler. This support comes with caveats, of course. But the lefty applause for Paul also arrives at a moment of a distinctly lacking enthusiasm for Barack Obama and Hillary Clinton. More importantly, these plaudits align with many of the concerns of a quiet but influential contingent of liberal-leaning techies who might one day become Rand Democrats—or Democrats willing to support some other, future right-wing firebrand with lefty-compatible ideas about civil liberties—much in the way disaffected blue-collar workers became Reagan Democrats. Think of them as the New Randroids—and definitely not because they admire Ayn Rand, which they don’t. Progressives and loyal Democrats would be wise not to ignore them.

Rand Paul’s father, Ron Paul, has long enjoyed a libertarian following within what I call the tech laity—the vast number of tech workers who don’t work for buzzy startups or dominate the tech press. This group of engineers and other research and development workers, far less white and somewhat less male than what’s portrayed in the media, clusters around industry hubs in the Bay Area, Seattle, Boston, North Carolina’s Research Triangle, Austin, Los Angeles, and so on. The libertarians among them, who hate the Federal Reserve and have funded the presidential ambitions of Paul père, make up a tiny minority of the whole. What’s been surprising is how hard Rand Paul has worked to broaden that small following in the hopes of reaching disaffected Republicans and Democrats. He has about as much chance of winning the nomination as Howard Dean, Paul’s closest antecedent, did in 2004. But Dean came closer than most people remember, and he ended up impacting the Democratic agenda (anti-war, expanded health care, gay rights) for years. Paul hopes to do the same—and perhaps make another run at the presidency in 2020 or 2024.•



There’s no guarantee that this week’s DARPA Robotics Challenge finals will be as much a watershed for the field as the 2004 Grand Challenge was for driverless cars, but the competition will probably lead to information being shared and the discipline advancing. That’s a great thing, though while robotics will increase wealth in the aggregate, it will tax us to reposition society around it economically and to redefine the role of humans. Members from some of the competing teams just did an Ask Me Anything at Reddit. A few exchanges follow.

____________________________

Question:

Ethics: Do you feel your contributions are leading to a society where most machines will replace humans, causing permanent unemployment? What is the future role of man in the Age of AI?

Jerry Pratt – Team IHMC:

Humans and technology go hand in hand. Without humans, there is no technology, and humans would have a hard time surviving without their technology. We are already cyborgs in some sense. We wear clothing, live in houses, drive planes, trains, and automobiles. A large number of people have ocular prosthetics nearly permanently attached inside their eyelids (contact lenses). We communicate long distances and enhance our memories. As technology gets more advanced, we’ll continue to become one with it. With AI this means that someday we will have SIRI or OK-Google style interfaces that are really powerful and can significantly enhance our capabilities. Should we fear this? Not sure. Just make sure you can always unplug to get away from it all…

____________________________

Question:

What are the main pros and cons you see of robots integrated into our society in the future?

Todd Danko – Team TROOPER:

Pros – Robots participating in dull, dirty and dangerous jobs. Cons – Not really a con, but a big challenge is building trust between humans and robots.

Russ Tedrake – Team MIT:

Robots are already doing amazing things in medicine, manufacturing, … the new buy-in from industry means we’re going to see so much more of this in the next few years. Like every new technology (e.g. the personal computer), it means our society is going to have to adapt. But it will be overwhelmingly good for society as a whole.

____________________________

Question:

After the DARPA Urban Challenge, most of the auto companies started to work on driverless cars. Do you think the same thing will happen with humanoid robots (not auto companies obviously, but any company that can handle humanoid robots)?

Scott Kuindersma – Team MIT:

The increased investment we’ve seen from industry over the past year suggests that is likely to be the case, though it might not be in a humanoid form factor.

Russ Tedrake – Team MIT:

I think working on humanoids forces us to solve some really important science questions… it’s hard for the right reasons. But the technology will appear first in other applications (manufacturing, logistics, medicine, …)

____________________________

Question:

Have you seen Age of Ultron? If yes, did you question your line of work as Tony Stark did when he realized he created a monster that could end humanity? (All in jest of course. You guys are awesome.)

Russ Tedrake – Team MIT:

I actually think the political and ethical questions for robots are super important. But we’re still far away from a robot takeover. There are very basic things that humans do well which robots cannot…

____________________________

Question:

Would you say icons such as Elon Musk and Hawking are edging on paranoia then with the whole AI/robot threat?

Scott Kuindersma – Team MIT:

I think if we accept that 1) there is nothing special about the hardware that implements biological intelligence and 2) that we will continue to make progress toward AI, then their concerns are legitimate. However, the problem is definitely not imminent.•


Some people don’t have enough to eat and some have so much that they read magazines about it. The extreme fetishization of food is one of the damnedest developments of modern life. I know the market goes where it will, but just imagine if we put all that effort into making sure everyone was fed rather than into a few devouring the most creative menus possible.

One way to spread the goodness would be with a plant-based meat replacement that’s cheaper than beef and more nutritious. Such an innovation would also diminish animal suffering, human health problems and environmental damage. In a Grub Street piece, Daniel Fromson takes us inside the Silicon Valley labs of Pat Brown, who wants to make a meatless burger that’s better than the cow kind for the same reason that Elon Musk aimed to manufacture an EV superior to cars with internal combustion engines–to save humanity. An excerpt:

What if taste isn’t a true obstacle to change? According to Pat Brown, it doesn’t need to be. Brown is a biochemist who resigned last year from his “dream job” at Stanford’s School of Medicine — tenure, funding, his own genomics and cancer-treatments lab — and who has called raising animals for food “a completely shitty, useless industry in every possible way.” “I am an ideologue, if you want to call it that,” he once said. Others have called him a “deranged visionary,” an “incurable optimist,” and, in the case of the Nobel-winning oncologist Harold Varmus, former director of the National Cancer Institute and the National Institutes of Health, a “prophet.” Brown is also a vegan, albeit one who disdains what he sees as the irrational “vegan fundamentalism” epitomized by PETA-style activists. “The way to win is the awesome power of the free market,” Brown says. Meat, he adds, “is like the horse-and-buggy industry at the turn of the century: It’s obviously doomed, and it’s just a question of who takes it down and how soon.” You might be able to guess who he says will help him do that: “Our target market is not vegetarians. It’s not vegans. It’s not fringy health nuts. It’s not food-fad faddists. It’s mainstream, mass-market, uncompromising, meat-loving carnivores.” Citing a U.N. calculation that 30 percent of the planet’s land is used for animal agriculture, he hopes his plan will “change the way Earth looks from space.” “The way that we’re going to monitor our progress,” he says, “is by looking at Google Earth, basically.”

You may have heard of “cultured meat” made of lab-grown cells, like the $325,000 patty paid for by Google’s Sergey Brin — a strategy Brown sees as off-putting, not to mention technically and economically unviable. And you may have heard of start-ups, like Beyond Meat, that have tried to invent animal-cell-free “plant-based meat,” often made from soy, that re-creates the taste and texture of the real thing — a target, Brown and others agree, that they have failed to hit. You may not have heard of Brown’s own start-up, which is trying to do the same thing, because he has spent four years working mostly in secret, tweaking the user experience like his iPhone-making counterparts in Cupertino. But what he has done, he says, is spectacular: He has cracked meat’s molecular code. Which means that by sometime next year, he intends to sell what he calls a “shock and awe” plant-based burger that bleeds like beef, chars like it, and tastes like it (and eventually, critical to its long-term prospects, costs less).

“It’s going to be absolutely, flat-out delicious,” Brown says. “People have low expectations because they think what they’ve experienced before represents what’s possible.” Brown has high expectations. His start-up is named Impossible Foods.

America’s highest-tech hamburger prototypes are built in Redwood City, the Silicon Valley home of Oracle and Evernote, in what looks like a test kitchen hijacked by chemists.•


In a Frontiers opinion piece, University of Melbourne research fellow Jean-Loup Rault wonders if pet ownership will become morally unacceptable and environmentally unsustainable in the next 35 years, as animal-rights sensibilities shift and population grows, with digital and virtual pets filling the void. “Funerals are held for AIBO robotic dogs in Japan nowadays,” writes the doctor, demonstrating the emotions we can attach to AI.

I’m guessing Freeman Dyson would argue that this would be just an intermediate step, and children will eventually play with specially designed biological creatures that don’t use resources but create them. I don’t know if either will happen in the next 35 years or at all, though I would assume household robots will take on a more prominent role.

The opening:

Over half the people in Western societies share their daily life with pets, which makes it the norm rather than the exception. Our shared history with domestic animals goes back tens of thousands of years. However, technological advances in the last decades – computer, internet, social media – revolutionized our means of communication, and particularly our social lives. A legitimate but tacit question is whether this technological evolution will also change human–animal relationships, and concurrently, the place of pets in human societies. Pet ownership in its current form is likely unsustainable in a growing, urbanized population. Digital technologies have quickly revolutionized human communication and social relationships, and logically could tackle human–animal relationships as well. The question is whether these new technologies actually represent the future of pet ownership, helping tackle its sustainability while solving animal welfare issues.

To consider whether new technologies could substitute for animal use, one should first consider the reasons for keeping animals, and particularly pets. Domestication started some 18,000–32,000 years ago with dogs. However, today’s pets cover a wide range of species from mammals, birds, and fish to the more ‘exotic’ reptiles, arachnids, and even insects. One of the many definitions of a pet is “a domesticated animal kept for pleasure rather than utility” (Merriam-Webster Dictionary), although non-domesticated species are increasingly popular. The benefits or function that humans derive from pet ownership are still debated. It may be a cultural habit: “I had a pet growing up, so it is normal to have one,” even though for some people the only interaction with their pet is restricted to providing food and water and no other forms of social interaction, hence only partly fulfilling our ‘duty of care.’ Pet ownership could be a sign of status: dog ownership can be interpreted as an economic indicator, highly correlated with rises in countries’ income. Pets may be used to compensate for lack of social relationships, as pet owners report feeling less lonely, although there is evidence that pets facilitate human social interactions. A widespread theory holds that pet ownership brings health benefits. Alternative explanations are, for instance, that pet ownership may improve reproductive fitness given that people more easily approach someone walking their dog. Nevertheless, some scientists remain dubious, citing inconsistent findings on the benefits of pet ownership (1). Regardless, the human–animal relationship is a strong and emotionally powerful bond: pets are often considered part of the family, and dogs and children activate common brain regions in mothers (2), drawing on the hormone oxytocin (3). Historically, the human–animal relationship has already been a changing concept, from animals kept for their consumptive value to ‘reservoirs of human need’ such as love and care (4). However, we are possibly witnessing the dawn of a new era, the digital revolution, with likely effects on pet ownership, similar to the industrial revolution, which replaced animal power with petrol and electrical engines.•


Some people search and find the wrong thing.

Such was the case with the followers of the technologically friendly cult Heaven’s Gate, which stunned the world when 39 members committed mass suicide in 1997 at the behest of the group’s leader, Marshall Applewhite, who had founded the pseudo-religion 22 years earlier in Los Angeles. The guru believed the Hale-Bopp Comet would be tailed by a UFO which would take them to heaven if they killed themselves at just the right moment, and somehow a diverse group of basically intelligent people heeded his call.

That year, People magazine provided profiles of Applewhite and some of his acolytes. An excerpt:

Marshall Herff Applewhite 65, music teacher turned cult leader

Missouri prosecutor Tim Braun never forgot the car-theft case that came his way in 1974, when he was a novice St. Louis County public defender. “Very seldom do we see a statement that ‘a force from beyond the earth has made me keep this car,’ ” he says. The defendant: Marshall Herff Applewhite. The sentence: four months in jail.

His early life offers few hints of what led Applewhite—son of a Presbyterian preacher and his wife—to abandon his career as a music professor for a life chasing alien spacecraft. Married with two children, he seemed the devoted family man. But his marriage broke up in the mid-’60s, and he moved to Houston, where he ran a small Catholic college’s music department and often sang with the Houston Grand Opera.

A sharp dresser whose taste in cars ran to convertibles, and in liquor to vodka gimlets, he became a fixture of Houston’s arts scene—and, less overtly, its gay community. “Everybody knew Herff,” says Houston gay activist and radio host Ray Hill. But in 1970, Applewhite left the college, apparently after allegations of an affair with a male student.

Soon afterward, Houston artist Hayes Parker recalls, Applewhite claimed to have had a vision during a walk on the beach in Galveston, Texas. “He said he suddenly had knowledge about the world,” recalls Parker. Around that time he met nurse Bonnie Nettles, with whom he formed an instant bond that became the basis of a 25-year cult odyssey. They wandered the country, gathering followers and attracting so much curiosity that by the mid-’70s he had been interviewed by The New York Times. “Some people are like lemmings who rush in a pack into the sea,” Applewhite said of other alternative lifestyles. “Some people will try anything.”

Cheryl Butcher 42, computer trainer
Butcher was a shy, bright, self-taught computer expert who spent half her life in Applewhite’s orbit. Growing up in Springfield, Mo., she was “the perfect daughter,” says her father, Jasper, a retired federal corrections officer. “She was a good student. She did charity work, candy striper stuff.” But according to Virginia Norton, her mother, she was also “a loner. She watched a lot of TV and read. Making friends was hard for her.” That is, until she joined the cult in 1976. “She wrote me a letter once,” says Norton, “that said, ‘Mother, be happy that I’m happy.’ Another time she ended a letter with ‘Look higher.’ ”

David Van Sinderen 48, environmentalist
“When I was 4, he saved me from drowning,” says publicist Sylvia Abbate of her big brother David. The son of a former telephone company CEO, David became an environmentalist. ” ‘Don’t be hurt, I’m not doing this to you,’ ” Abbate says he told his family after he joined the cult in 1976. ” ‘It’s something I have to do for me.’ ” Visiting his sister in ’87, he puzzled her with his backseat driving, then apologized, explaining that cult members drove with a partner so they would have an extra set of eyes. Says Abbate: “That’s the kind of care they had for one another.”

Alan Bowers 45, oysterman
Bowers had spent eight years with the cult in the ’70s before returning to Fairfield, Conn., in the early ’80s to work as a commercial oysterman. In 1988 his life derailed when his wife divorced him and his brother Barry drowned in a boating accident. Bowers, who had three children, moved to Jupiter, Fla., near his stepsisters Susan and Joy Ventulett. “He came down here to make a new start,” says Susan, but he could never quite get it together. Then in 1994, Bowers, while working for a moving company, ran into someone he knew from Applewhite’s legions at a McDonald’s in New Mexico. “He felt it might have been destiny,” says Joy. “He was a little vulnerable. He was searching for peace.”

Margaret Bull 54, farm girl
Peggy Bull, among the cult’s first adherents in the mid-’70s, grew up on a farm outside little Ellensburg, Wash. Though shy, she was in the high school pep club and a member of the Wranglerettes, a riding drill team. Later “she belonged to all the intellectual-type groups,” says Brenda McIntosh, a roommate at the University of Washington, where Bull earned her B.A. in 1966. “It was sometimes hard to talk to her because she was so smart.” Recalls English professor Roger Sale: “She was open and ready intellectually.” Her father, Jack, died less than three weeks before Bull’s suicide, says Margaret’s childhood friend Iris Rominger, who assumed that Bull had left the cult. “I guess it’s kind of a blessing.”•


The next robots will be more agile, the cords cut. They’ll be nimble enough to move around a warehouse, and manufacturing may be reshored to the United States. Of course, that won’t add up to as many jobs as you might think, because while the machines will be safe enough to collaborate with humans, they won’t require many of them in the short run and will need hardly any in the longer term.

An excerpt from James Hagerty’s Wall Street Journal article about the nouveau machines:

Today, industrial robots are most common in auto plants—which have long been the biggest users of robot technology—and they do jobs that don’t take much delicacy: heavy lifting, welding, applying glue and painting. People still do most of the final assembly of cars, especially when it involves small parts or wiring that needs to be guided into place.

Now robots are taking on some jobs that require more agility. At a Renault SA plant in Cleon, France, robots made by Universal Robots AS of Denmark drive screws into engines, especially those that go into places people find hard to get at. The robots employ a reach of more than 50 inches and six rotating joints to do the work. They also verify that parts are properly fastened and check to make sure the correct part is being used. …

The Renault effort demonstrates a couple of trends that are drastically changing how robots are made. For one, they’re getting much lighter. The Renault units weigh only about 64 pounds, so “we can easily remove them and reinstall them in another place,” says Dominique Graille, a manager at Renault, which is using 15 robots from Universal now and plans to double that by year-end.

Researchers hope robots will become so easy to set up and move around that they can reduce the need for companies to make heavy investments in tools and structures that are bolted to the floor. That would allow manufacturers to make shorter runs of niche or custom products without having to spend lots of time and money reconfiguring factories. “We’re getting away from the [structures and machinery] that can only be used for one thing on the factory floor and [instead] using robots that can be easily repurposed,” says Henrik Christensen, director of robotics at Georgia Institute of Technology.•


In relation to the recent Paul Ehrlich post, here’s the opening of an excellent 1990 New York Times article by John Tierney about a wager the Malthusian made about the population bomb with his fierce academic rival, the cornucopian economist Julian L. Simon:

In 1980 an ecologist and an economist chose a refreshingly unacademic way to resolve their differences. They bet $1,000. Specifically, the bet was over the future price of five metals, but at stake was much more — a view of the planet’s ultimate limits, a vision of humanity’s destiny. It was a bet between the Cassandra and the Dr. Pangloss of our era.

They lead two intellectual schools — sometimes called the Malthusians and the Cornucopians, sometimes simply the doomsters and the boomsters — that use the latest in computer-generated graphs and foundation-generated funds to debate whether the world is getting better or going to the dogs. The argument has generally been as fruitless as it is old, since the two sides never seem to be looking at the same part of the world at the same time. Dr. Pangloss sees farm silos brimming with record harvests; Cassandra sees topsoil eroding and pesticide seeping into ground water. Dr. Pangloss sees people living longer; Cassandra sees rain forests being decimated. But in 1980 these opponents managed to agree on one way to chart and test the global future. They promised to abide by the results exactly 10 years later — in October 1990 — and to pay up out of their own pockets.

The bettors, who have never met in all the years they have been excoriating each other, are both 58-year-old professors who grew up in the Newark suburbs. The ecologist, Paul R. Ehrlich, has been one of the world’s better-known scientists since publishing The Population Bomb in 1968. More than three million copies were sold, and he became perhaps the only author ever interviewed for an hour on The Tonight Show. When he is not teaching at Stanford University or studying butterflies in the Rockies, Ehrlich can generally be found on a plane on his way to give a lecture, collect an award or appear in an occasional spot on the Today show. This summer he won a five-year MacArthur Foundation grant for $345,000, and in September he went to Stockholm to share half of the $240,000 Crafoord Prize, the ecologist’s version of the Nobel. His many personal successes haven’t changed his position in the debate over humanity’s fate. He is the pessimist.

The economist, Julian L. Simon of the University of Maryland, often speaks of himself as an outcast, which isn’t quite true. His books carry jacket blurbs from Nobel laureate economists, and his views have helped shape policy in Washington for the past decade. But Simon has certainly never enjoyed Ehrlich’s academic success or popular appeal. On the first Earth Day in 1970, while Ehrlich was in the national news helping to launch the environmental movement, Simon sat in a college auditorium listening as a zoologist, to great applause, denounced him as a reactionary whose work “lacks scholarship or substance.” Simon took revenge, first by throwing a drink in his critic’s face at a faculty party and then by becoming the scourge of the environmental movement. When he unveiled his happy vision of beneficent technology and human progress in Science magazine in 1980, it attracted one of the largest batches of angry letters in the journal’s history.

In some ways, Simon goes beyond Dr. Pangloss, the tutor in Candide who insists that “All is for the best in this best of possible worlds.” Simon believes that today’s world is merely the best so far. Tomorrow’s will be better still, because it will have more people producing more bright ideas. He argues that population growth constitutes not a crisis but, in the long run, a boon that will ultimately mean a cleaner environment, a healthier humanity and more abundant supplies of food and raw materials for everyone. And this progress can go on indefinitely because — “incredible as it may seem at first,” he wrote in his 1980 article — the planet’s resources are actually not finite.•


Algorithms might be able to run a corporation, but what about an entire country? With our luck in America, it would probably be the Dubya 2163X.

In an Esquire interview John Hendrickson conducted with Zoltan Istvan, Transhumanist Party Presidential candidate, the technologically progressive contender comments on the potential future intersection of AI and politics, as well as on moral machines and the existential threat of superintelligence. The opening:

Question:

Can a robot be president? Can that happen?

Zoltan Istvan:

I have advocated for the use of artificial intelligence to potentially, one day, replace the president of the United States, as well as other politicians. And the reason is that you might actually have an entity that would be truly unselfish, truly not influenced by any type of lobbyist. Now, of course, I’m not [talking about] trying to have a robot today, especially if I’m running for the U.S. presidency. But in the future–maybe 30 years into the future–it’s very possible you could have an artificial intelligence system that can run the country better than a human being.

Question:

Why is that?

Zoltan Istvan:

Because human beings are naturally selfish. Human beings are naturally after their own interests. We are geared towards pursuing our own desires, but oftentimes those desires stand in contrast to the benefit of society at large, or against the benefit of the greater good. Whereas, if you have a machine, you will be able to program that machine to, hopefully, benefit the greatest good, and really go after that. Regardless of any personal interest that the machine might have. I think it’s based on having a more altruistic living entity that would be able to make decisions, rather than a human.

Question:

But what happens if people democratically pick a bad robot?

Zoltan Istvan:

So, this is the danger of even thinking this way. Because it’s possible that you could get a robot that might become selfish during its term as president. Or it could be hacked, you know? The hacking could be the number one worry that everyone would have with an artificial intelligence leading the country. But, it could also do something crazy, like malfunction, and maybe we wouldn’t even know if it’s necessarily malfunctioning. This happens all the time in people. But the problem is, that far into the future, it wouldn’t be just one entity that’s closed off into some sort of computer that would be walking around. At that stage, an artificial intelligence that is leading the nation would be totally interconnected with all other machines. That presents another situation, because, potentially, it could just take over everything.

That said, though, let’s say we had an on-and-off switch.•


I’ve mentioned this story before, but when I was a small child, I was taking a bus trip with my parents from the Port Authority early one morning, and we saw Truman Capote seated on the benches, wearing a big straw hat, wasted out of his mind. He was trying to get a homeless woman to talk to him. “Come over here, dear,” he kept urging her. She had no interest.

Here’s a half-hour portrait of Capote at the height of his career, as In Cold Blood was published.

https://www.youtube.com/watch?v=7278BPpa-jw

https://www.youtube.com/watch?v=_DqLhbb7nPw

The opening of a 1994 New Scientist interview with sociological salesman Alvin Toffler, which, among other things, reflects on his incredibly popular 1970 book, Future Shock:

Question:

What led you to write Future Shock? 

Alvin Toffler:

While covering Congress, it occurred to us that big technological and social changes were occurring in the United States, but that the political system seemed totally blind to their existence. Between 1955 and 1960, the birth control pill was introduced, television became universalized [sic], commercial jet travel came into being and a whole raft of other technological events occurred. Having spent several years watching the political process, we came away feeling that 99 per cent of what politicians do is keep systems running that were laid in place by previous generations of politicians.

Our ideas came together in 1965 in an article called ‘The future as a way of life,’ which argued that change was going to accelerate and that the speed of change could induce disorientation in lots of people. We coined the phrase ‘future shock’ as an analogy to the concept of culture shock. With future shock you stay in one place but your own culture changes so rapidly that it has the same disorienting effect as going to another culture.

Question:

Were you surprised by the reaction to the book? 

Alvin Toffler:

I think that it touched a nerve. Remember we were coming out of the Sixties, countries were being torn apart, change was almost out of control for a period. It touched a nerve, it gave a language, it introduced a metaphor that people could use to describe their own experience.

Question:

Looking back to 1970 when the book came out, how would you have done it differently? 

Alvin Toffler:

The great weakness was the book wasn’t radical enough, although everybody said it was a very radical book. The reason for that is that we introduced the concept of the general crisis of industrialism. Marx had talked about the general crisis of capitalism and the argument of the left was always that capitalism would collapse upon itself and socialism would triumph. We argued that both capitalism and socialism would collapse eventually because both were the offspring of industrial civilization, and that we were on the edge of a new way of life, a new civilization. Had we understood more deeply the consequences of that idea we would not have accepted as naively as we did the forecasts of the economists. If you think that economists are arrogant now, in the Sixties they were really riding high. They claimed we would never have another recession, and the reason was that we understand how the economy works, and “all we have to do is fine-tune it,” as one economist told us. We were young and naive and we bought that notion. We should have anticipated that the revolution we were talking about would have hit the economy in a much deeper way.•

____________________________

Orson Welles narrated the 1972 documentary McGraw-Hill produced about Toffler’s bestseller. The movie is odd and paranoid and overheated and fun.

It might be bad for the morale of the human workers at Amazon for the company to openly flaunt its dreams of replacing them with robots, but you can’t worry too much about those temps. 

A few days ago, Jeff Bezos’ everything company played host to an AI challenge. The technology proved not quite ready to replace pickers, but time is on silicon’s side. From Mike Murphy at Quartz:

We humans often get injured or sick, and can’t usually work round the clock. We also sometimes have families and enjoy healthcare. Robots, on the other hand, have none of these problems.

Which is why Amazon hosted a competition over the weekend to find out if a robot could take the jobs of any of its many employees—more than 50,000 people work in its US warehouses alone—who fulfil our insatiable desires for books, toasters, cameras, and live ladybugs.

The Amazon Picking Challenge, hosted during a robotics conference in Seattle last week, tested a robot’s ability to autonomously grab items from a shelf and place them in a tub. While we have robots that can be programmed to pick things up and put them other places—Rethink Robotics’ Baxter is great at this—it’s much harder to get them to recognize millions of items of different shapes, colors, and sizes on their own.•


From the April 15, 1935 Brooklyn Daily Eagle:

In a sidebar to Mary-Ann Russon’s International Business Times report about which jobs are least and most prone to technological unemployment, umpires are listed as having a 98.3% chance of being replaced by robots. I can’t speak to umpires in other sports, but I would assume that in American baseball, sensors could already call balls and strikes as well as or better than humans.

An excerpt:

Jobs least likely to be automated

The jobs that are least likely to be automated include jobs in the science, technology, engineering and mathematics (STEM) industries, such as engineers, scientists, astronomers, architects, surgeons, psychologists, dentists, chiropractors, opticians, electricians and dietitians.

Jobs where a greater amount of personal care and in-depth attention are required were also very unlikely to be automated, such as therapists, teachers, personal trainers, choreographers, air traffic controllers, archaeologists, fashion and set designers, the clergy, lawyers, vets, the police, dancers, journalists, firefighters, tour guides, public relation specialists and most computer-related professions.

Jobs most likely to be automated

In contrast, jobs that required a lot of data to be processed or a great deal of routine in repeating the same task over and over again were very likely to be automated, such as bank tellers, loan officers, administrators, insurance underwriters in the finance industry, as well as retail roles like cashiers, retail assistants, telemarketers, sales executives and waitressing.

Also most at risk of automation are most repairman jobs, and most types of clerks that handle administration, whether the clerk be working in a hotel, a brokerage, an office, the mailroom, handling payroll or managing files.•


According to the NGO Freedom House, even though the number of dictatorships has dwindled, there remain 106 in the world. At BBC Future, Rachel Nuwer examines the personality traits and political conditions that allow such authoritarian governments to exist and wonders if we’ll ever live in a dictator-less world. An excerpt:

The causative factors that give rise to dictatorships in the first place have not changed much over the centuries. Some of the first were established in Classical Rome in times of emergency. “A single individual like Julius Caesar was given a lot of power to help society cope with a crisis, after which that power was supposed to be relinquished,” says Richard Overy, a historian at the University of Exeter. “But usually, he wasn’t so keen to relinquish it.” Many modern and recent dictatorships – those of Adolf Hitler and Benito Mussolini, for instance – were also established in times of turmoil, and future ones likely will be, too. “Over the next century, there will be acute points of crisis,” Overy says. “I don’t think we’ve seen the end of dictatorship any more than we’ve seen the end of war.”

But just as violence on a whole has declined across history, so, too, has the number of dictatorships, especially since the 1970s, as regimes across Latin America and Eastern Europe fell. There are slight undulations; the crumbling of the Soviet Union was accompanied by a steep decline in dictatorships, but now many of those countries are creeping back toward that former mode of governance. Overall, though, dictatorships are scarcer now than they were in the past. “It’s harder for people to justify dictatorships today, partly because the whole globe is in the eye of the media,” Overy says. “Getting away with things is more difficult than it used to be.”   

Consequently, days might be numbered on at least some remaining dictatorships – particularly if their oppressive rule is contributing to home-grown economic problems. “When you’re operating in an economy that’s perpetuating your collapse, your backers become nervous that you won’t be able to help them, so they start to shop around,” [NYU professor Bruce] Bueno de Mesquita says. Such situations sometimes result in military coups, he adds, which tend to push countries in a more positive direction for citizen wellbeing, at least based on past examples.

Some dictatorships, however, show no signs of cracking.•


Paul Ehrlich was not subtle, as people seldom are when throwing around the word “bomb.”

The Stanford insect biologist spent the ’60s and ’70s scaring the bejeezus out of people, predicting imminent societal collapse due to overpopulation, with hundreds of millions starving to death. In the big picture, he was right that environmental damage would prove challenging to the survival of the human species, but the devil was in the details, and his presumptions about the short-term ramifications of overpopulation were way off the mark.

Justin Fox of Bloomberg View reflects on Ehrlich’s 1968 book The Population Bomb, a Malthusian message so chillingly effective that Ehrlich did a solid hour one evening on Johnny Carson’s Tonight Show. The opinion writer finds the philippic a mixed blessing. He points out the scientist’s wrongheadedness about overcrowding while acknowledging that today’s widely held anti-Ehrlich belief that population will level off naturally could also be incorrect.

One note: Embedded below the Bloomberg excerpt is a new NYT documentary about the ominous prognostication that never came to pass. In it, a comment Ehrlich makes reveals the misanthropy that has always seemed to be lurking behind his views. It’s this: “The idea that every woman should have as many babies as she wants is, to me, exactly the same kind of idea as everybody ought to be permitted to throw as much of their garbage into their neighbor’s backyard as they want.” Wow.

From Fox:

In a just-released New York Times mini-documentary on the book and its aftermath, the now-83-year-old Stanford biologist says insufferable things like, “One of the things that people don’t understand is that timing to an ecologist is very, very different from timing to an average person.” Uh, then why did you write a book clearly aimed at average people that confidently predicted that in the 1970s hundreds of millions would die of famine? “I expressed more certainty because I was trying to bring people to get something done.” (In that vein he also co-founded the activist group Zero Population Growth, rechristened in 2002 as Population Connection.)

Still, I figured I’d give the book itself a chance. I’ve had a copy for years, and thanks to a recent book-sorting project I was able to find it in a matter of seconds this morning. Because it’s not very long, I was able to read it in an hour or two. And I have to say it surprised me.

First of all, half of Ehrlich’s prediction came true. He forecast in the book that global population, about 3.5 billion at the time, would double by 2005. He was only six years off on that — world population hit 7 billion in 2011 — which I figure counts as getting it right.

What Ehrlich famously got wrong was the planet’s carrying capacity. Sure, global population doubled. But thanks to the Green Revolution, per-acre grain yields went up much faster than that. The inflection point in global agricultural productivity, in fact, came just as Ehrlich was finishing his book.

Here’s the interesting thing, though — Ehrlich was well aware that this was a possibility.•

__________________________

“Sometime in the next 15 years, the end will come.”


Is this the life or what 

Sooo…I just masturbated at work, and then after I went to get a sandwich. Is this the life or what!

Google’s all-or-nothing approach to driverless cars was apparently brought about because road tests proved the computer-human tandem unworkable. It was a punch to the gut for Google X at the time, but it forced the emergence of a fully autonomous vehicle. From Alistair Barr at the Wall Street Journal:

Google’s self-driving car project, one of the first to emerge from Google X, also faced major challenges, [Astro] Teller said.

In the fall of 2012, the team thought it had finished because it had built a car capable of driving itself safely on highways. Google gave some of the vehicles to other Google employees to use to commute to and from work and made them promise to continue paying attention to the road, Teller said.

“The cars performed flawlessly. The people did not,” he added. While not providing details, Teller said the employees paid less attention because they assumed the car would take care of any incidents.

“It was not pretty. We stopped doing it. We realized humans cannot be a backup system for the computer,” Teller said.

The team had to design a new vehicle capable of driving itself all the way from point A to point B with no help from a human driver. Teller said this was an “existential” blow to the team at the time.•

