Stephen Hawking


In a recent Guardian essay, Stephen Hawking identified our age as the “most dangerous time for our planet.” Considering climate change, wealth inequality, the retreat of liberal democracy, and, perhaps, a re-embrace of nuclear proliferation, the physicist may very well be correct. While I thought his prescriptions to address the widening chasm between haves and have-nots were noble, they didn’t seem realistic to me. I have serious concerns that we’re headed down a horrifying path.

In a Washington Post op-ed, Vivek Wadhwa voices similar doubts about Hawking’s well-intentioned prescriptions. He suggests a better answer might be giving citizens a greater voice in shaping the technological tools that will determine our future, though I’m dubious that will make a dent, either. Technology isn’t often directed by a succession of sober-minded, rational choices. An excerpt:

Technology is the main culprit here, widening the gulf between the haves and the have-nots. As Hawking explained, automation has already decimated jobs in manufacturing and is allowing Wall Street to accrue huge rewards that the rest of us underwrite. Over the next few years, technology will take more jobs from humans. Robots will drive the taxis and trucks; drones will deliver our mail and groceries; machines will flip hamburgers and serve meals. And, if Amazon’s new cashierless stores are a success, supermarkets will replace cashiers with sensors. This is not speculation; it is imminent. (Amazon founder Jeffrey P. Bezos owns The Washington Post.)

The dissatisfaction is not particularly American. With the developing world coming online with smartphones and tablets, billions more people are becoming aware of what they don’t have. The unrest we have witnessed in the United States, Britain and, most recently, Italy will become a global phenomenon.

Hawking’s solution is to break down barriers within and between nations, to have world leaders acknowledge that they have failed and are failing the many, to share resources and to help the unemployed retrain. But this is wishful thinking. It isn’t going to happen.

Witness the outcome of the elections: We moved backward on almost every front. Our politicians will continue to divide and conquer, Silicon Valley will deny its culpability, and the very technologies, such as social media and the Internet, that were supposed to spread democracy and knowledge will instead be used to mislead, to suppress and to bring out the ugliest side of humanity.•



The twin political shocks of 2016, the bad Brexit and the worse Trump victory, have provoked Stephen Hawking to pen the Guardian editorial, “This is the Most Dangerous Time for Our Planet,” which encourages those with vast power and wealth to address the needs of those left behind in our technological age.

The physicist’s focus is noble, though I wonder about the efficacy of his prescriptions. Hawking believes we need to retrain those whose skills are no longer required. That’s easier said than done, and automation may mean there won’t be enough jobs even for those who are successfully upskilled. Not every trucker can become a self-driving-car engineer.

He further feels we need to financially support those being retrained while they’re indoctrinated into a computer-dominant era, though with the miserly, bigoted plutocrats soon entering the White House, a dismantling of existing social safety nets is far more likely than some form of Universal Basic Income.

The scientist also believes global development on a larger scale is needed in the face of mass migration, which is great but unlikely in many war-torn areas; even in more stable locales, that level of investment is improbable without a profit motive. Hawking is right to say it shouldn’t be that way, but that’s the way it is.

An excerpt:

The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.

This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.

We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.•




Stephen Hawking and Ted Kaczynski agree: Machine intelligence may be the death of us. Of course, the Unabomber could himself kill you if only he had your snail-mail address.

Found in the Afflictor inbox: an offer from a PR person for a free copy of Anti-Tech Revolution: Why and How, Kaczynski’s new book. The release describes the author as a “social theorist and ecological anarchist,” conveniently leaving a few gaps on the old résumé: serial killer, maimer, domestic terrorist, etc.

A few minutes later, I read Hawking’s inaugural speech at the new Leverhulme Centre for the Future of Intelligence at Cambridge, an institution created to deal sanely and non-violently with the potential problem of humanity being driven extinct by its own cleverness.

An excerpt from each follows.


From Kaczynski:

People would bitterly resent any system to which they belonged if they believed that when they grew old, or if they became disabled, they would be thrown on the trash-heap.

But when all people have become useless, self-prop systems will find no advantage in taking care of anyone. The techies themselves insist that machines will soon surpass humans in intelligence. When that happens, people will be superfluous and natural selection will favor systems that eliminate them—if not abruptly, then in a series of stages so that the risk of rebellion will be minimized.

Even though the technological world-system still needs large numbers of people for the present, there are now more superfluous humans than there have been in the past because technology has replaced people in many jobs and is making inroads even into occupations formerly thought to require human intelligence. Consequently, under the pressure of economic competition, the world’s dominant self-prop systems are already allowing a certain degree of callousness to creep into their treatment of superfluous individuals. In the United States and Europe, pensions and other benefits for retired, disabled, unemployed, and other unproductive persons are being substantially reduced; at least in the U.S., poverty is increasing; and these facts may well indicate the general trend of the future, though there will doubtless be ups and downs.

It’s important to understand that in order to make people superfluous, machines will not have to surpass them in general intelligence but only in certain specialized kinds of intelligence. For example, the machines will not have to create or understand art, music, or literature, they will not need the ability to carry on an intelligent, non-technical conversation (the “Turing test”), they will not have to exercise tact or understand human nature, because these skills will have no application if humans are to be eliminated anyway. To make humans superfluous, the machines will only need to outperform them in making the technical decisions that have to be made for the purpose of promoting the short-term survival and propagation of the dominant self-prop systems. So, even without going as far as the techies themselves do in assuming intelligence on the part of future machines, we still have to conclude that humans will become obsolete.•


From Hawking:

It is a great pleasure to be here today to open this new Centre.  We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity.  So it is a welcome change that people are studying instead the future of intelligence.

Intelligence is central to what it means to be human.  Everything that our civilisation has achieved, is a product of human intelligence, from learning to master fire, to learning to grow food, to understanding the cosmos. 

I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer.  It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.

Artificial intelligence research is now progressing rapidly.  Recent landmarks such as self-driving cars, or a computer winning at the game of Go, are signs of what is to come.  Enormous levels of investment are pouring into this technology.  The achievements we have seen so far will surely pale against what the coming decades will bring.

The potential benefits of creating intelligence are huge.  We cannot predict what we might achieve, when our own minds are amplified by AI.  Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one — industrialisation.  And surely we will aim to finally eradicate disease and poverty.  Every aspect of our lives will be transformed.  In short, success in creating AI, could be the biggest event in the history of our civilisation.

But it could also be the last, unless we learn how to avoid the risks.  Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.   It will bring great disruption to our economy.  And in the future, AI could develop a will of its own — a will that is in conflict with ours.

In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.  We do not yet know which. 

That is why in 2014, I and a few others called for more research to be done in this area.  I am very glad that someone was listening to me! 

The research done by this centre is crucial to the future of our civilisation and of our species.  I wish you the best of luck!•



At the beginning of 2015, the great fiction writer Ken Kalfus suggested we take a deep breath before attempting to colonize Mars and instead send an unmanned probe to Alpha Centauri. It would be a gift to our descendants, and in the meantime we could take a more sober approach to relocating humans into space.

Kalfus’ vision might be realized thanks to the largesse of Yuri Milner, one of those modern Russian entrepreneurs so awash in wealth and next-level technology that, when tiring of mansions and yachts, they can dream the biggest dreams, ones formerly possible only for states, like creating “global brains” and exploring space.

In a New York Times article, the excellent Dennis Overbye writes of the proposed mission, in which Milner is partnering with Stephen Hawking, among others. The opening:

Can you fly an iPhone to the stars?

In an attempt to leapfrog the planets and vault into the interstellar age, a bevy of scientists and other luminaries from Silicon Valley and beyond, led by Yuri Milner, a Russian philanthropist and Internet entrepreneur, announced a plan on Tuesday to send a fleet of robot spacecraft no bigger than iPhones to Alpha Centauri, the nearest star system, 4.37 light-years away.

If it all worked out — a cosmically big “if” that would occur decades and perhaps $10 billion from now — a rocket would deliver a “mother ship” carrying a thousand or so small probes to space. Once in orbit, the probes would unfold thin sails and then, propelled by powerful laser beams from Earth, set off one by one like a flock of migrating butterflies across the universe.

Within two minutes, the probes would be more than 600,000 miles from home — as far as the lasers could maintain a tight beam — and moving at a fifth of the speed of light. But it would still take 20 years for them to get to Alpha Centauri. Those that survived would zip past the star system, making measurements and beaming pictures back to Earth.

Much of this plan is probably half a lifetime away.•
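A quick sanity check on those numbers (my own back-of-envelope arithmetic, not anything from the Times article): a probe cruising at a fifth of light speed covers 4.37 light-years in 4.37 / 0.2 ≈ 22 years, which squares with the quoted two decades. A minimal sketch in Python:

```python
# Sanity-checking the Starshot figures quoted above (a sketch; my
# arithmetic, not the article's -- constants are approximate).
C_KM_S = 299_792.458          # speed of light, km/s

distance_ly = 4.37            # Alpha Centauri, per the excerpt
speed_fraction_c = 0.2        # "a fifth of the speed of light"

cruise_speed_km_s = speed_fraction_c * C_KM_S
# Distance in light-years divided by speed as a fraction of c gives years.
travel_years = distance_ly / speed_fraction_c

print(f"cruise speed ~ {cruise_speed_km_s:,.0f} km/s")
print(f"travel time  ~ {travel_years:.0f} years")   # about 22; the piece rounds to 20
```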



Here are 50 ungated pieces of wonderful journalism from 2015, alphabetized by author name, which made me consider something new or reconsider old beliefs or just delighted me. (Some selections are from gated publications that allow a number of free articles per month.) If your excellent work isn’t on the list, that’s more my fault than yours.

  • “Who Runs the Streets of New Orleans?” (David Amsden, The New York Times Magazine) As private and public sector missions increasingly overlap, here’s an engaging look at the privatization of some policing in the French Quarter.
  • “In the Beginning” (Ross Andersen, Aeon) A bold and epic essay about the elusive search for the origins of the universe.
  • Ask Me Anything (Anonymous, Reddit) A 92-year-old German woman who was born into Nazism (and participated in it) sadly absolves herself of all blame while answering questions about that horrible time.
  • “Rethinking Extinction” (Stewart Brand, Aeon) The Whole Earth Catalog founder thinks the chance of climate-change catastrophe overrated, arguing we should utilize biotech to repopulate dwindling species.
  • “Anchorman: The Legend of Don Lemon” (Taffy Brodesser-Akner, GQ) A deeply entertaining look into the perplexing facehole of Jeff Zucker’s most gormless word-sayer and, by extension, the larger cable-news zeitgeist.
  • “How Social Media Is Ruining Politics” (Nicholas Carr, Politico) A lament that our shiny new tools have provided provocative trolls far more credibility than a centralized media ever allowed for.
  • “Clans of the Cathode” (Tom Carson, The Baffler) One of our best culture critics looks at the meaning of various American sitcom families through the medium’s history.
  • “The Black Family in the Age of Mass Incarceration” (Ta-Nehisi Coates, The Atlantic) The author examines the tragedy of the African-American community being turned into a penal colony, explaining the origins of the catastrophic policy failure.
  • “Perfect Genetic Knowledge” (Dawn Field, Aeon) The essayist thinks about a future in which we’ve achieved “perfect knowledge” of whole-planet genetics.
  • “A Strangely Funny Russian Genius” (Ian Frazier, The New York Review of Books) Daniil Kharms was a very funny writer, if you appreciate slapstick that ends in a body count.
  • “Tomorrow’s Advance Man” (Tad Friend, The New Yorker) Profile of Silicon Valley strongman Marc Andreessen and his milieu, an enchanted land in which adults dream of riding unicorns.
  • “Build-a-Brain” (Michael Graziano, Aeon) The neuroscientist’s ambitious thought experiment about machine intelligence is a piece I thought about continuously throughout the year.
  • Ask Me Anything (Stephen Hawking, Reddit) Among other things, the physicist warns that the real threat of superintelligent machines isn’t malice but relentless competence.
  • “Engineering Humans for War” (Annie Jacobsen, The Atlantic) War is inhuman, it’s been said, and the Pentagon wants to make it more so by employing bleeding-edge biology and technology to create super soldiers.
  • “The Wrong Head” (Mike Jay, London Review of Books) A look at insanity in 1840s France, which demonstrates that mental illness is often expressed in terms of the era in which it’s experienced.
  • “Death Is Optional” (Daniel Kahneman and Yuval Noah Harari, Edge) Two of my favorite big thinkers discuss the road ahead, a highly automated tomorrow in which medicine, even mortality, may not be an egalitarian affair.
  • “Where the Bodies Are Buried” (Patrick Radden Keefe, The New Yorker) Ceasefires, even treaties, don’t completely conclude wars, as evidenced by this haunting revisitation of the heartbreaking IRA era.
  • “Porntopia” (Molly Lambert, Grantland) The annual Adult Video News Awards in Las Vegas, the Oscars of oral, allows the writer to look into a funhouse-mirror reflection of America.
  • “The Robots Are Coming” (John Lanchester, London Review of Books) A remarkably lucid explanation of how quickly AI may remake our lives and labor in the coming decades.
  • “Last Girl in Larchmont” (Emily Nussbaum, The New Yorker) The great TV critic provides a postmortem of Joan Rivers and her singular (and sometimes disquieting) brand of feminism.
  • “President Obama & Marilynne Robinson: A Conversation, Part 1 & Part 2” (Barack Obama and Marilynne Robinson, New York Review of Books) Two monumental Americans discuss the state of the novel and the state of the union.
  • Ask Me Anything (Elizabeth Parrish, Reddit) The CEO of BioViva announces she’s patient zero for the company’s experimental age-reversing gene therapies. Strangest thing I read all year.
  • “Why Alien Life Will Be Robotic” (Sir Martin Rees, Nautilus) The astronomer argues that ETs in our inhospitable universe have likely already transitioned into conscious machines.
  • Ask Me Anything (Anders Sandberg, Reddit) Heady conversation about existential risks, Transhumanism, economics, space travel and future technologies conducted by the Oxford researcher.
  • “Alien Rights” (Lizzie Wade, Aeon) Manifest Destiny will, sooner or later, become a space odyssey. What ethics should govern exploration of the final frontier?
  • “Peeling Back the Layers of a Born Salesman’s Life” (Michael Wilson, The New York Times) The paper’s gifted crime writer pens a posthumous profile of a protean con man, a Zelig on the make who crossed paths with Abbie Hoffman, Otto Preminger and Annie Leibovitz, among others.
  • “The Pop Star and the Prophet” (Sam York, BBC Magazine) Philosopher Jacques Attali, who predicted, back in the ’70s, the downfall of the music business, tells the writer he now foresees similar turbulence for manufacturing.


Stephen Hawking has answered some of the Reddit Ask Me Anything questions that were submitted a few weeks back. Some highlights: The physicist hopes for a world in which wealth redistribution becomes the norm when and if machines do the bulk of the labor, though he realizes that thus far that hasn’t been the inclination. He believes machines might subjugate us not because of mayhem or malevolence but because of their sheer proficiency. Hawking also thinks that superintelligence might be wonderful or terrible depending on how carefully we “direct” its development. I doubt that human psychology and individual and geopolitical competition will allow for an orderly policy of AI progress. It seems antithetical to our nature. And it’s really not our place to set standards governing people of the distant future. They’ll have to make their own wise decisions based on the challenges they know and the information they have. Below are a few exchanges from the AMA.

________________________

Question:

Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Stephen Hawking:

You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

________________________

Question:

Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? 

Stephen Hawking:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

________________________

Question:

I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Stephen Hawking:

The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

_____________________

 Question:

I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.

Stephen Hawking:

You’re right that we need to avoid the temptation to anthropomorphize and assume that AI’s will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.•

 



Stephen Hawking fears the unknown, whether it be aliens from other planets or intelligent machines on Earth. In that sense, he’s a physicist operating as a risk manager, though despite his warnings we can probably only do so much in our own time to govern what happens in the future, as questions we can’t anticipate now will arise.

The scientist uses the example of Native Americans being interrupted unexpectedly by Columbus to illustrate what may happen if extraterrestrials descended upon us. But wouldn’t a truer analogy be something completely unforeseen wreaking havoc, a thing that goes far beyond our current imaginations? And while we have more information in our interconnected world than the Natives did then, are we really any more prepared for the darkest of black swans?

Hawking speaks to these issues in a new El Pais interview conducted by Nuño Domínguez and Javier Salas. Two excerpts below.

____________________________

Question:

You recently launched a very ambitious initiative to search for intelligent life in our galaxy. A few years ago, though, you said it would be better not to contact extraterrestrial civilizations because they could even exterminate us. Have you changed your mind?

Stephen Hawking:

If aliens visit us, the outcome could be much like when Columbus landed in America, which didn’t turn out well for the Native Americans. Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they can reach. To my mathematical brain, the numbers alone make thinking about aliens perfectly rational. The real challenge is to work out what aliens might actually be like.

____________________________

Question:

Why should we fear artificial intelligence?

Stephen Hawking:

Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.

Question:

What do you think our fate as a species will be?

Stephen Hawking:

I think the survival of the human race will depend on its ability to find new homes elsewhere in the universe, because there’s an increasing risk that a disaster will destroy Earth. I therefore want to raise public awareness about the importance of space flight. I have learnt not to look too far ahead, but to concentrate on the present. I have so much more I want to do.•


Well, of course we shouldn’t engage in autonomous warfare, but what’s obvious now might not always seem so clear. What’s perfectly sensible today might seem painfully naive tomorrow.

I think humans create tools to use them, eventually. When electricity (or some other power source) is coursing through those objects, the tools almost become demanding of our attention. If you had asked the typical person 50 years ago (20 years ago?) whether they would accept a surveillance state, the answer would have been a resounding “no.” But here we are. It just crept up on us. How creepy.

I still, however, am glad that Stephen Hawking, Steve Wozniak, Elon Musk and a thousand others engaged in science and technology have petitioned for a ban on AI warfare. It can’t hurt.

From Samuel Gibbs at the Guardian:

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.•


A few days ago, Mark Zuckerberg took a break from collecting your personal information and allowing you to create content for him for free, to answer some questions in a Facebook Q&A. Like most Silicon Valley successes, he likes meditating on the big questions. Stephen Hawking and Jeff Jarvis are among the questioners.

___________________________

Stephen Hawking:

I would like to know a unified theory of gravity and the other forces. Which of the big questions in science would you like to know the answer to and why?

Mark Zuckerberg:

That’s a pretty good one!

I’m most interested in questions about people. What will enable us to live forever? How do we cure all diseases? How does the brain work? How does learning work, and how can we empower humans to learn a million times more?

I’m also curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about. I bet there is.

___________________________

Jeff Jarvis:

Mark: What do you think Facebook’s role is in news? I’m delighted to see Instant Articles and that it includes a business model to help support good journalism. What’s next?

Mark Zuckerberg:

People discover and read a lot of news content on Facebook, so we spend a lot of time making this experience as good as possible.

One of the biggest issues today is just that reading news is slow. If you’re using our mobile app and you tap on a photo, it typically loads immediately. But if you tap on a news link, since that content isn’t stored on Facebook and you have to download it from elsewhere, it can take 10+ seconds to load. People don’t want to wait that long, so a lot of people abandon news before it has loaded or just don’t even bother tapping on things in the first place, even if they wanted to read them.

That’s easy to solve, and we’re working on it with Instant Articles. When news is as fast as everything else on Facebook, people will naturally read a lot more news. That will be good for helping people be more informed about the world, and it will be good for the news ecosystem because it will deliver more traffic.

It’s important to keep in mind that Instant Articles isn’t a change we make by ourselves. We can release the format, but it will take a while for most publishers to adopt it. So when you ask about the “next thing”, it really is getting Instant Articles fully rolled out and making it the primary news experience people have.

___________________________

Ben Romberg:

Hi Mark, tell us more about the AI initiatives that Facebook are involved in.

Mark Zuckerberg:

Most of our AI research is focused on understanding the meaning of what people share.

For example, if you take a photo that has a friend in it, then we should make sure that friend sees it. If you take a photo of a dog or write a post about politics, we should understand that so we can show that post and help you connect to people who like dogs and politics.

In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc.

For vision, we’re building systems that can recognize everything that’s in an image or a video. This includes people, objects, scenes, etc. These systems need to understand the context of the images and videos as well as whatever is in them.

For listening and language, we’re focusing on translating speech to text, text between any languages, and also being able to answer any natural language question you ask.

This is a pretty basic overview. There’s a lot more we’re doing and I’m looking forward to sharing more soon.

___________________________

Jenni Moore:

Also in 10 years time what’s your view on the world where do you think we all will be from a technology perspective and social media?

Mark Zuckerberg:

In 10 years, I hope we’ve improved a lot of how the world connects. We’re doing a few big things:

First, we’re working on spreading internet access around the world through Internet.org. This is the most basic tool people need to get the benefits of the internet — jobs, education, communication, etc. Today, almost 2/3 of the world has no internet access. In the next 10 years, Internet.org has the potential to help connect hundreds of millions or billions of people who do not have access to the internet today.

As a side point, research has found that for every 10 people who gain access to the internet, about 1 person is raised out of poverty. So if we can connect the 4 billion people in the world who are unconnected, we can potentially raise 400 million people out of poverty. That’s perhaps one of the greatest things we can do in the world.

Second, we’re working on AI because we think more intelligent services will be much more useful for you to use. For example, if we had computers that could understand the meaning of the posts in News Feed and show you more things you’re interested in, that would be pretty amazing. Similarly, if we could build computers that could understand what’s in an image and could tell a blind person who otherwise couldn’t see that image, that would be pretty amazing as well. This is all within our reach and I hope we can deliver it in the next 10 years.

Third, we’re working on VR because I think it’s the next major computing and communication platform after phones. In the future we’ll probably still carry phones in our pockets, but I think we’ll also have glasses on our faces that can help us out throughout the day and give us the ability to share our experiences with those we love in completely immersive and new ways that aren’t possible today.

Those are just three of the things we’re working on for the next 10 years. I’m pretty excited about the future.•


NYU psychologist Gary Marcus is one of the talking heads interviewed for this CBS Sunday Morning report about the future of robots and co-bots and such. He speaks to the mismeasure of the Turing Test, the current mediocrity of human-computer communications and the potential perils of Strong AI. As for his comment that whichever company dominates AI will win the Internet: I really doubt any one company will be dominant across most or even many categories. Quite a few will own a piece, and there’ll be no overall blowout victory, though there are vast riches to be had in even small margins. View here.


Sadly, Ray Kurzweil is going to die sometime this century, as are you and I. We’re not going to experience immortality of the flesh or have our consciousnesses downloaded into a mainframe. Those amazing options he thinks are near will be enjoyed, perhaps, by people in the future, not us. But I agree with Kurzweil that even if AI becomes an existential threat, that’s not necessarily a deal breaker. Without advanced AI and the exponential growth of other technologies, our species is doomed sooner rather than later, so let’s go forth boldly if cautiously. From Kurzweil in Time:

“Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it sur­passes human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-­defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of ‘the AI.’ Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually every­one’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—­another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to ­assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major ad­vances in medical treatments reaching clinical practice and thus far none of the anticipated problems.”


Computer pioneer Clive Sinclair has been predicting since the 1980s that self-designing intelligent machines will definitely be the doom of us, but he’s not letting it ruin his day. Che sera sera, you carbon-based beings. As you were. From Leo Kelion at the BBC:

“His ZX Spectrum computers were in large part responsible for creating a generation of programmers back in the 1980s, when the machines and their clones became best-sellers in the UK, Russia, and elsewhere.

At the time, he forecast that software run on silicon was destined to end ‘the long monopoly’ of carbon-based organisms being the most intelligent life on Earth.

So it seemed worth asking him what he made of Prof Stephen Hawking’s recent warning that artificial intelligence could spell the end of the human race.

‘Once you start to make machines that are rivalling and surpassing humans with intelligence it’s going to be very difficult for us to survive – I agree with him entirely,’ Sir Clive remarks.

‘I don’t necessarily think it’s a bad thing. It’s just an inevitability.’

So, should the human race start taking precautions?

‘I don’t think there’s much they can do,’ he responds. ‘But it’s not imminent and I can’t go round worrying about it.’

It marks a somewhat more relaxed view than his 1984 prediction that it would be ‘decades, not centuries’ in which computers ‘capable of their own design’ would rise.

‘In principle, it could be stopped,’ he warned at the time. ‘There will be those who try, but it will happen nonetheless. The lid of Pandora’s box is starting to open.'”


If we play our cards right, humans might be able to survive in this universe for 100 billion years, but we’re not playing our hand very well in some key ways. Human-made climate change, of course, is a gigantic near-term danger. Some see AI and technology as another existential threat, which of course it is in the long run, though the rub is we’ll need advanced technologies of all kinds to last into the deep future. A Financial Times piece by Sally Davies reports on Stephen Hawking’s warnings about technological catastrophe, something he seems more alarmed by as time passes:

“The astrophysicist Stephen Hawking has warned that artificial intelligence ‘could outsmart us all’ and is calling for humans to establish colonies on other planets to avoid ultimately a ‘near-certainty’ of technological catastrophe.

His dire predictions join recent warnings by several Silicon Valley tycoons about artificial intelligence even as many have piled more money into it.

Prof Hawking, who has motor neurone disease and uses a system designed by Intel to speak, said artificial intelligence could become ‘a real danger in the not-too-distant future’ if it became capable of designing improvements to itself.

Genetic engineering will allow us to increase the complexity of our DNA and ‘improve the human race,’ he told the Financial Times. But he added it would be a slow process and would take about 18 years before human beings saw any of the benefits.

‘By contrast, according to Moore’s Law, computers double their speed and memory capacity every 18 months. The risk is that computers develop intelligence and take over. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,’ he said.”
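To put the contrast Hawking draws in concrete terms (my illustration, not his): a doubling every 18 months, compounded over the roughly 18 years he allots to genetic engineering, is 2^12, about a 4,000-fold gain in speed and memory. A minimal sketch in Python:

```python
# Compounding the 18-month doubling Hawking cites over the ~18 years
# he says genetic improvements would take (illustrative arithmetic only).
doubling_period_years = 1.5
horizon_years = 18.0

doublings = horizon_years / doubling_period_years   # 12 doublings
growth_factor = 2 ** doublings                      # 2**12 = 4096
print(f"{doublings:.0f} doublings in {horizon_years:.0f} years -> {growth_factor:,.0f}x")
```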

 


The opening of Dwight Garner’s lively New York Times Book Review piece about the latest volume, a career summation of sorts, by Edward O. Wilson, a biologist who has watched ants have sex:

“The best natural scientists, when they aren’t busy filling us with awe, are busy reminding us how small and pointless we are. Stephen Hawking has called humankind ‘just an advanced breed of monkeys on a minor planet of a very average star.’ The biologist and naturalist Edward O. Wilson, in his new book, which is modestly titled The Meaning of Human Existence, puts our pygmy planet in a different context.

‘Let me offer a metaphor,’ he says. ‘Earth relates to the universe as the second segment of the left antenna of an aphid sitting on a flower petal in a garden in Teaneck, N.J., for a few hours this afternoon.’ The Jersey aspect of that put-down really drives in the nail.

Mr. Wilson’s slim new book is a valedictory work. The author, now 85 and retired from Harvard for nearly two decades, chews over issues that have long concentrated his mind: the environment; the biological basis of our behavior; the necessity of science and humanities finding common cause; the way religion poisons almost everything; and the things we can learn from ants, about which Mr. Wilson is the world’s leading expert.

Mr. Wilson remains very clever on ants. Among the questions he is most asked, he says, is: ‘What can we learn of moral value from the ants?’ His response is pretty direct: ‘Nothing. Nothing at all can be learned from ants that our species should even consider imitating.’

He explains that while female ants do all the work, the pitiful males are merely ‘robot flying sexual missiles’ with huge genitalia. (This is not worth imitating?) During battle, they eat their injured. ‘Where we send our young men to war,’ Mr. Wilson writes, ‘ants send their old ladies.’ Ants: moral idiots.

The sections about ants remind you what a lively writer Mr. Wilson can be. This two time winner of the Pulitzer Prize in nonfiction stands above the crowd of biology writers the way John le Carré stands above spy writers. He’s wise, learned, wicked, vivid, oracular.”


Love Stephen Hawking though I do, I was disappointed when he joined the “Philosophy Is Dead” chorus. I can’t tell you how many otherwise intelligent people I’ve heard refer to the discipline as “bullshit,” and that’s hugely perplexing to me. With the challenges we’re facing over the next several decades (answering technology-driven questions about ethics and the very meaning of humanness, for instance), I can’t think of a more vital time for philosophers. From “Physicists Should Stop Saying Silly Things about Philosophy,” a post by Sean Carroll in which he bats away several complaints about the study of ideas:

Roughly speaking, physicists tend to have three different kinds of lazy critiques of philosophy: one that is totally dopey, one that is frustratingly annoying, and one that is deeply depressing.

  • ‘Philosophy tries to understand the universe by pure thought, without collecting experimental data.’

This is the totally dopey criticism. Yes, most philosophers do not actually go out and collect data (although there are exceptions). But it makes no sense to jump right from there to the accusation that philosophy completely ignores the empirical information we have collected about the world. When science (or common-sense observation) reveals something interesting and important about the world, philosophers obviously take it into account. (Aside: of course there are bad philosophers, who do all sorts of stupid things, just as there are bad practitioners of every field. Let’s concentrate on the good ones, of whom there are plenty.)

Philosophers do, indeed, tend to think a lot. This is not a bad thing. All of scientific practice involves some degree of ‘pure thought.’ Philosophers are, by their nature, more interested in foundational questions where the latest wrinkle in the data is of less importance than it would be to a model-building phenomenologist. But at its best, the practice of philosophy of physics is continuous with the practice of physics itself. Many of the best philosophers of physics were trained as physicists, and eventually realized that the problems they cared most about weren’t valued in physics departments, so they switched to philosophy. But those problems — the basic nature of the ultimate architecture of reality at its deepest levels — are just physics problems, really. And some amount of rigorous thought is necessary to make any progress on them. Shutting up and calculating isn’t good enough.”


Stephen Hawking thinks Artificial Intelligence might be the worst thing ever, unless, of course, it’s the best. (Perhaps it could be both?) Hawking certainly wouldn’t be alive without it. A lot of us wouldn’t be. From the physicist’s cautionary tale in the Independent:

“Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

Looking further ahead, there are no fundamental limits to what can be achieved.”

 


Andrei Linde, Stanford physicist by way of Russia, and his “chaotic inflation” theory of the universe are featured in an early chapter of Jim Holt’s terrific 2012 book, Why Does the World Exist? In this clip, Linde relays how his central idea for explaining how it all came to be was rebuffed, somewhat, by Stephen Hawking in an unusual circumstance.


Stephen Hawking’s 2008 NASA address encouraging space colonization.


From Michael Venables’ new Ars Technica piece about Stephen Hawking, a passage about the possibility of intelligent life beyond Earth:

“Stephen Hawking: We think that life develops spontaneously on Earth, so it must be possible for life to develop on suitable planets elsewhere in the universe. But we don’t know the probability that a planet develops life. If it is very low, we may be the only intelligent life in the galaxy. Another frightening possibility is intelligent life is not only common, but that it destroys itself when it reaches a stage of advanced technology.

Evidence that intelligent life is very short-lived is that we don’t seem to have been visited by extra terrestrials. I’m discounting claims that UFOs contain aliens. Why would they appear only to cranks and weirdos? Do I believe that there is some government conspiracy to conceal the evidence and keep for themselves the advanced technology the aliens have? If that were the case, they aren’t making much use of it. Further evidence that there isn’t any intelligent life within a few hundred light years comes from the fact that SETI, the Search for Extra Terrestrial Life, hasn’t picked up their television quiz shows. It is true that we advertise our presence by our broadcast. But given that we haven’t been visited for four billion years, it isn’t likely that aliens will come any time soon.”


As Stephen Hawking’s motor skills further deteriorate, plans are afoot to tap directly into his head with cutting-edge technology. From the Telegraph:

“Hawking, 70, has been working with scientists at Stanford University who are developing the iBrain – a tool which picks up brain waves and communicates them via a computer.

The scientist, who has motor neurone disease and lost the power of speech nearly 30 years ago, currently uses a computer to communicate but is losing the ability as the condition worsens.

But he has been working with Philip Low, a professor at Stanford and inventor of the iBrain, a brain scanner that measures electrical activity.

‘We’d like to find a way to bypass his body, pretty much hack his brain,’ said Prof Low.

Researchers will unveil their latest results at a conference in Cambridge next month, and may demonstrate the technology on Hawking.”

 


From “A Little Device Trying to Read Your Thoughts,” David Ewing Duncan’s New York Times article about Stephen Hawking adopting the iBrain:

“Already surrounded by machines that allow him, painstakingly, to communicate, the physicist Stephen Hawking last summer donned what looked like a rakish black headband that held a feather-light device the size of a small matchbox.

Called the iBrain, this simple-looking contraption is part of an experiment that aims to allow Dr. Hawking — long paralyzed by amyotrophic lateral sclerosis, or Lou Gehrig’s disease — to communicate by merely thinking.

The iBrain is part of a new generation of portable neural devices and algorithms intended to monitor and diagnose conditions like sleep apnea, depression and autism. Invented by a team led by Philip Low, a 32-year-old neuroscientist who is chief executive of NeuroVigil, a company based in San Diego, the iBrain is gaining attention as a possible alternative to expensive sleep labs that use rubber and plastic caps riddled with dozens of electrodes and usually require a patient to stay overnight.

‘The iBrain can collect data in real time in a person’s own bed, or when they’re watching TV, or doing just about anything,’ Dr. Low said.”

••••••••••

Main title music by Philip Glass for Errol Morris’ 1991 Hawking film:


"It wouldn't be Stephen's voice any more" (Image by Errol Morris.)

From “The Man Who Saves Stephen Hawking’s Voice,” a New Scientist Q&A conducted by Catherine de Lange with the physicist’s personal technician, Sam Blackburn, who is soon leaving his post:

Stephen’s voice is very distinctive, but you say there might be a problem retaining it?
I guess the most interesting thing in my office is a little grey box, which contains the only copy we have of Stephen’s hardware voice synthesiser. The card inside dates back to the 1980s and this particular one contains Stephen’s voice. There’s a processor on it which has a unique program that turns text into speech that sounds like Stephen’s, and we have only two of these cards. The company that made them went bankrupt and nobody knows how it works any more. I am trying to reverse engineer it, which is quite tricky.

Can’t you update it with a new synthesiser?
No. It has to sound exactly the same. The voice is one of the unique things that defines Stephen in my opinion. He could easily change to a voice that was clearer, perhaps more soothing to listen to – less robotic sounding – but it wouldn’t be Stephen’s voice any more.”

••••••••••

“Which came first, the chicken or the egg?”:


A 1988 panel discussion about the origins of our universe and more, with an amazing lineup: Carl Sagan, Arthur C. Clarke and Stephen Hawking.


If.
