Ray Kurzweil


It would be great if all of us could grow smarter, but smart isn’t everything. Being wise and ethical is important, too.

PayPal co-founders Peter Thiel and Elon Musk have had access to elite educations, started successful businesses and amassed vast fortunes, but in this time of Trump they don’t seem particularly enlightened. Thiel ardently backed the bigoted, unqualified sociopath’s rise to the White House, while Musk’s situational ethics in dealing with the new abnormal are particularly amoral.

At SXSW, Ray Kurzweil said he believes technology has already made us much smarter and will continue to improve us exponentially, with machines reaching human-level intelligence by 2029. While his views of the future are too aggressive, Kurzweil’s view of today seems oddly rose-colored. If we’re so much brighter, why do we have an unintelligent reality TV host in the White House? Why is there ever-deepening wealth inequality? Why are we ravaged by an opioid epidemic?

If we’re smarter now–a big if–and it’s divorced from basic morality and decency, are we any better off?

From Dyani Sabin’s Inverse piece about Kurzweil’s appearance in Austin:

The future isn’t going to look like a science fiction story with a few super intelligent A.I.s that attack us.

“That’s not realistic. We don’t have one or two A.I.s in the world. Today we have billions,” he says. And unlike Musk who imagines the rise of the A.I. as something that threatens human existence, Kurzweil says that doesn’t hold with how we interact with A.I.s today.

“What’s actually happening is they are powering all of us. They’re making us smarter. They may not yet be inside our bodies but by the 2030s we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”

This isn’t just a pipe dream to Kurzweil, who’s had reasonable luck predicting where the future is going to go. “There are people with computers in their brains today — Parkinson’s patients,” he points out. “That’s how these things start.” Following the path of steps from the technology we have now, to what will happen twenty years from now, Kurzweil says, “in the 2030’s there will be something you can take that will go inside your brain and help your memory.” And that’s just the beginning.

Uploading our brains into the cloud will allow humanity to waste less time on lower-level types of mental tasks, Kurzweil says. He’s very interested in the idea of uploading the neocortex because it’s responsible for things like art, music, and humor. By allowing our brains to connect more on that level, by melding with artificial intelligence, we will expand our ability to do these things and be better people. “Ultimately it will affect everything,” he says. “We’re going to be able to meet the physical needs of all humans. We’re going to expand our minds and exemplify these artistic qualities that we value.”•



I think Ray Kurzweil is brilliant, though I have many disagreements with him, especially over what I feel is the increasingly frantic timeline of his outré predictions. The futurist likes to tout his amazing record of accuracy as a prognosticator, but there have been jaw-dropping clunkers and there’ll likely be more. Additionally, his belief that ingesting thousands of dollars’ worth of supplements daily will enable him to survive until eternal life is possible–he thinks that day is very soon, of course–seems likewise foolhardy.

Two things I agree with Kurzweil about: 1) The world seems worse largely because our tools allow us to gather better information about injustice, and 2) sooner or later, we’ll increase human intelligence through bioengineering, even if the specter of such currently freaks people out.

From Todd Bishop at Geekwire:

On the effect of the modern information era: People think the world’s getting worse, and we see that on the left and the right, and we see that in other countries. People think the world is getting worse. … That’s the perception. What’s actually happening is our information about what’s wrong in the world is getting better. A century ago, there would be a battle that wiped out the next village, you’d never even hear about it. Now there’s an incident halfway around the globe and we not only hear about it, we experience it.

Why machines won’t displace humans: We’re going to merge with them, we’re going to make ourselves smarter. We’re already doing that. These mobile devices make us smarter. We’re routinely doing things we couldn’t possibly do without these brain extenders.•


Ray Kurzweil thinks humans who can survive to 2030 will become immortal, but I’m willing, regrettably, to bet the over.

I don’t doubt there can be radical life extension if Homo sapiens persevere long enough, but the answers may be a lot more complicated than medical science riding a wave of Moore’s Law. Computing power, nanotechnology and genetic code will all likely be key to such a breakthrough, but time, that precious thing, is sadly not on our side.

An excerpt from David Hochman’s very good Playboy Interview with Google’s Director of Engineering:

Ray Kurzweil:

The point is health care is now an information technology subject to the same laws of acceleration and progress we see with other technologies. We’ll soon have the ability to rejuvenate all the body’s tissues and organs and develop drugs targeted specifically at the underlying metabolic process of a disease rather than taking a hit-or-miss approach. But nanotechnology is where we really move beyond biology.

Playboy:

Tiny robots fighting disease in our veins?

Ray Kurzweil:

Yes. By the 2020s we’ll start using nanobots to complete the job of the immune system. Our immune system is great, but it evolved thousands of years ago when conditions were different. It was not in the interest of the human species for individuals to live very long, so people typically died in their 20s. The life expectancy was 19. Your immune system, for example, does a poor job on cancer. It thinks cancer is you. It doesn’t treat cancer as an enemy. It also doesn’t work well on retroviruses. It doesn’t work well on things that tend to affect us later in life, because it didn’t select for longevity.

We can finish the job nature started with a nonbiological T cell. T cells are, in fact, nanobots—natural ones. They’re the size of a blood cell and are quite intelligent. I actually watched one of my T cells attack bacteria on a microscope slide. We could have one programmed to deal with all pathogens and could download new software from the internet if a new type of enemy such as a new biological virus emerged.

As they gain traction in the 2030s, nanobots in the bloodstream will destroy pathogens, remove debris, rid our bodies of clots, clogs and tumors, correct DNA errors and actually reverse the aging process. One researcher has already cured type 1 diabetes in rats with a blood-cell-size device.

Playboy:

So if we can hang on for 15 more years, we can basically live forever?

Ray Kurzweil:

I believe we will reach a point around 2029 when medical technologies will add one additional year every year to your life expectancy. By that I don’t mean life expectancy based on your birthdate but rather your remaining life expectancy.•



Critical though I sometimes am of the aggressive timelines of Ray Kurzweil’s predictions, I acknowledge finding him endlessly interesting. The Singularitarian-in-Chief sat down with Neil deGrasse Tyson in Manhattan for a public conversation about all things future. Kurzweil thinks tomorrow’s nanotechnologies will be broadly accessible to all classes as smartphones are today, which is probably true, but his argument that this availability will limit wealth inequality doesn’t seem to follow. Smartphones, after all, have not been equalizers.

From Jose Pagliery and Hope King at CNN Money:

CNNMoney asked Kurzweil: What happens to inequality in this future? Will brain superpowers and health be limited to the rich?

“Yeah, like cell phones,” Kurzweil responded. “Only the rich have access to these technologies — at a point in time when they don’t work.”

Industry perfects products for mass consumption, Kurzweil noted. And the tech will inevitably get cheaper. As computer makers keep doubling the number of chips on a circuit board, the “price performance of information technology” doubles every year, he said.

“Nanobots will be available to everyone,” Kurzweil said. “These technologies are ultimately democratized because they keep getting less and less expensive.”

And even if Kurzweil thinks AI will probably replace many of today’s workers, he’s optimistic about future jobs for humans. But when Tyson pressed him to name specific jobs, Kurzweil was stumped. After all, no one in 1910 could predict today’s computer chip designers and website developers.



In an excellent Five Books interview, writer Calum Chace suggests a quintet of titles on the topic of Artificial Intelligence, four of which I’ve read. In recommending The Singularity Is Near, he defends the author, Ray Kurzweil, against charges of techno-quackery, though the futurist’s predictions have grown more desperate and fantastic as he’s aged. It’s not that what he predicts can’t ever be done, but his timelines seem to me way too aggressive.

Nick Bostrom’s Superintelligence, another choice, is a very academic work, though an important one. It’s interesting that Bostrom thinks advanced AI is a greater existential threat to humans than even climate change. (I hope I’ve understood the philosopher correctly in that interpretation.) The next book is Martin Ford’s Rise of the Robots, which I enjoyed, but I prefer Chace’s fourth choice, Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, which covers the same terrain of technological unemployment with, I think, greater rigor and insight. The final suggestion is one I haven’t read, Greg Egan’s sci-fi novel Permutation City, which concerns mind uploading and wealth inequality.

An excerpt about Kurzweil:

Question:

Let’s talk more about some of these themes as we go through the books you’ve chosen. The first one on your list is The Singularity is Near, by Ray Kurzweil. He thinks things are moving along pretty quickly, and that a superintelligence might be here soon. 

Calum Chace:

He does. He’s fantastically optimistic. He thinks that in 2029 we will have AGI. And he’s thought that for a long time, he’s been saying it for years. He then thinks we’ll have an intelligence explosion and achieve uploading by 2045. I’ve never been entirely clear what he thinks will happen in the 16 years in between. He probably does have quite detailed ideas, but I don’t think he’s put them to paper. Kurzweil is important because he, more than anybody else, has made people think about these things. He has amazing ideas in his books—like many of the ideas in everybody’s books they’re not completely original to him—but he has been clearly and loudly propounding the idea that we will have AGI soon and that it will create something like utopia. I came across him in 1999 when I read his book, Are We Spiritual Machines? The book I’m suggesting here is The Singularity is Near, published in 2005. The reason why I point people to it is that it’s very rigorous. A lot of people think Kurzweil is a snake-oil salesman or somebody selling a religious dream. I don’t agree. I don’t agree with everything he says and he is very controversial. But his book is very rigorous in setting out a lot of the objections to his ideas and then tackling them. He’s brave, in a way, in tackling everything head-on, he has answers for everything. 

Question:

Can you tell me a bit more about what ‘the singularity’ is and why it’s near?

Calum Chace:

The singularity is borrowed from the world of physics and math where it means an event at which the normal rules break down. The classic example is a black hole. There’s a bit of radiation leakage but basically, if you cross it, you can’t get back out and the laws of physics break down. Applied to human affairs, the singularity is the idea that we will achieve some technological breakthrough. The usual one is AGI. The machine becomes as smart as humans and continues to improve and quickly becomes hundreds, thousands, millions of times smarter than the smartest human. That’s the intelligence explosion. When you have an entity of that level of genius around, things that were previously impossible become possible. We get to an event horizon beyond which the normal rules no longer apply.

I’ve also started using it to refer to a prior event, which is the ‘economic singularity.’ There’s been a lot of talk, in the last few months, about the possibility of technological unemployment. Again, it’s something we don’t know for sure will happen, and we certainly don’t know when. But it may be that AIs—and to some extent their peripherals, robots—will become better at doing any job than a human. Better, and cheaper. When that happens, many or perhaps most of us can no longer work, through no fault of our own. We will need a new type of economy.  It’s really very early days in terms of working out what that means and how to get there. That’s another event that’s like a singularity — in that it’s really hard to see how things will operate at the other side.•



In the big picture, I’m with the FT‘s Tim Harford on the issue of “economic singularity,” meaning that while I believe things may change more quickly going forward, I don’t believe scarcity will be solved immediately or soon thereafter. Not today and not tomorrow, unless we’re defining that latter term very broadly. 3D printers will be a tremendous boon in terms of material goods (though they will also bring new dangers), but the world isn’t on the verge of unbridled material wealth. That just doesn’t happen overnight.

In his latest column, Harford is skeptical that the moment of unthinkably great production has (almost) arrived, as prophesied by the messiahs of machine utopia: Robin Hanson, whom I mostly know from a rather kooky, sci-fi TED Talk; and Ray Kurzweil, a brilliant inventor who has reinvented himself as a sage of increasingly outré near-term predictions.

Harford’s opening:

Are we nearing a dramatic moment in economic history? Before humans developed agriculture, the world population — and thus the world economy — doubled in size roughly every 250,000 years. After acquiring the power of agriculture, the world economy doubled in size roughly every 900 years. After the industrial revolution, growth accelerated again, and since the second world war the world economy has been doubling in size roughly every 15 years. These numbers have been collated by Robin Hanson, an economist at George Mason University in Virginia; they are based on educated guesses by various economic historians.

If another step change of a similar scale were to happen, the world economy would double in size between now and Christmas. That is hard to imagine but, before the industrial revolution happened, it too would have been hard to imagine. And a small band of believers, not short on imagination, look forward to an economic “singularity”. Hanson is one of them, and the computer scientist Ray Kurzweil, author of The Singularity Is Near, is perhaps the most famous.

The singularity would be a point at which, rather than humans developing new technologies, the new technologies developed themselves. They would do so at a rate far beyond our comprehension. After the singularity, our civilisation would be in the hands of cyborgs, or brains uploaded into the cloud, or genetically enhanced superbeings, or something else able to make itself smarter at a tremendous rate. The future economy might consist of rapid interactions between artificial intelligences. The idea that it might double in size every few weeks no longer seems quite so unimaginable.

But it is one thing to imagine such a future. It is another thing to have confidence that it is approaching.•
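The “by Christmas” line is easy to sanity-check. Below is a rough back-of-the-envelope sketch, assuming only the Hanson doubling-time figures quoted above; the size of the hypothetical next speedup is my own extrapolation from the two previous transitions, not a calculation from Harford’s column.

```python
# Rough sanity check of the doubling-time figures cited above (Hanson's estimates).
# The size of the "next transition" speedup is an extrapolation for illustration only.

doubling_times_years = {
    "forager era": 250_000,
    "agricultural era": 900,
    "industrial/postwar era": 15,
}

eras = list(doubling_times_years.items())
for (prev_name, prev_t), (next_name, next_t) in zip(eras, eras[1:]):
    print(f"{prev_name} -> {next_name}: doubling sped up ~{prev_t / next_t:.0f}x")
# forager -> agricultural: ~278x; agricultural -> industrial/postwar: ~60x

# A comparable jump (roughly 60x to 280x) applied to the current 15-year doubling
# gives a new doubling time of a few weeks to a few months, hence "by Christmas."
for speedup in (60, 280):
    print(f"{speedup}x speedup: economy doubles every ~{15 / speedup * 12:.1f} months")
```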


Ray Kurzweil, who will never die, is a brilliant and amusing inventor and thinker, but I believe he’s wrong in predicting that in 20 years or so we’re going to have nanobots introduced into our systems that allow us to directly plug our brains into the Internet. In what appears to be a Singularitarian circle jerk, some other futurists, including his associate Peter Diamandis, are very excited by his pronouncement, though let’s remember that Kurzweil has sometimes been wildly off in his prognostications. Remember when computers disappeared in 2009 because information was written directly onto our retinae by eyeglasses and contact lenses? Neither do I.

Such developments aren’t theoretically impossible, but so aggressive a timeline, with so little attention to the downsides, is puzzling. From Diamandis at Singularity Hub:

The implications of a connected neocortex are quite literally unfathomable. As such, any list I can come up with will pale in comparison to reality…but here are a few thoughts to get the ball rolling.

Brain-to-Brain Communication

This will deliver a new level of human intimacy, where you can truly know what your lover, friend or child is feeling. Intimacy far beyond what we experience today by mere human conversation. Forget email, texting, phone calls, and so on — you’ll be able to send your thoughts to someone simply by thinking them.

Google on the Brain

You’ll have the ability to “know” anything you desire, at the moment you want to know it. You’ll have access to the world’s information at the tip of your neurons. You’ll be able to calculate complex math equations in seconds. You’ll be able to navigate the streets of any city, intuitively. You’ll be able to hop into a fighter jet and fly it perfectly. You’ll be able to speak and translate any language effortlessly.

Scalable Intelligence

Just imagine that you’re in a bind and you need to solve a problem (quickly). In this future world, you’ll be able to scale up the computational power of your brain on demand, 10x or 1,000x…in much the same way that algorithms today can spool up 1,000 processor cores on Amazon Web Service servers.•


I love Ray Kurzweil, but unfortunately, he’s not going to become immortal as he expects he will, and it’s unlikely he’ll be right in his prediction that nanobots introduced into our brains will be doing the thinking for us by the 2030s. Most of what Kurzweil says is theoretically possible, especially if we’re talking about human life surviving for a significant span, but his timeframe for execution of radical advances seems increasingly frantic to me. From Andrew Griffin at the Independent:

In the near future, humans’ brains will be helped out by nanobot implants that will make us into “hybrids,” one of the world’s leading thinkers has claimed.

Ray Kurzweil, an inventor and director of engineering at Google, said that in the 2030s the implants will help us connect to the cloud, allowing us to pull information from the internet. Information will also be able to be sent up over those networks, letting us back up our own brains.

“We’re going to gradually merge and enhance ourselves,” he said, reported CNN. “In my view, that’s the nature of being human — we transcend our limitations.”

As the cloud that our brains access improves, our thinking would get better and better, Kurzweil said. So while initially we would be a “hybrid of biological and non-biological thinking”, as we move into the 2040s, most of our thinking will be non-biological.•


Ray Kurzweil, that brilliant guy, has been correct in many of his technological predictions and very wrong in others. An example of the latter: 2009 came and went and computers hadn’t disappeared because information was being written directly onto our retinae by special glasses. 

The technologist now prognosticates that in fifteen years our brains will be connected to the cloud, able to call upon any part of its vast (and growing) trove of information. From Anthony Cuthbertson at International Business Times:

Artificial intelligence pioneer Ray Kurzweil has predicted that within 15 years technology will exist that will allow human brains to be connected directly to the internet.

Speaking at the Exponential Finance conference in New York on Wednesday (3 June), Kurzweil hypothesised that nanobots made from DNA strands could be used to transform humans into hybrids.

“Our thinking then will be a hybrid of biological and non-biological thinking,” Kurzweil said. “We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human – we transcend our limitations.”

“We’ll be able to extend (our limitations) and think in the cloud. We’re going to put gateways to the cloud in our brains.”

Connecting brains to the internet or a cloud computer network will allow for advanced thinking, Kurzweil predicts, and by the late 2030s human thought could be predominantly non-biological.


It’s probably a fair bet that most people believe computers are already more intelligent than us. But even computationally it’s possible our smartphones will be smarter than us in five to ten years. Even if it hasn’t happened by then, it will happen. Something that was impossible a few decades ago, that would have cost billions if it had been possible, will soon be available at a reasonable price, prepared to sit in your pocket or palm.

As Ted Greenwald of the WSJ recently reminded us, smart machines don’t have to make us dumb. From automobiles to digital watches, we’ve always ceded certain chores to technology, but these new machines won’t be anything like the ones we know. They will be by far the greatest tools we’ve ever created. What will that mean, positive or negative? I’m wholeheartedly in favor of them, even think they’re necessary, but that doesn’t mean great gifts aren’t attended by great challenges.

From Vivek Wadhwa at the Washington Post:

Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain.  He also predicted that Moore’s Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years — until 2025 — giving way then to new paradigms of technological change.

Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling.  Within seven years — about when the iPhone 11 is likely to be released — the smartphones in our pockets will be as computationally intelligent as we are. It doesn’t stop there, though.  These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race. Already, our computers have a big advantage over us: they are connected via the Internet and share information with each other billions of times faster than we can. It is hard to even imagine what becomes possible with these advances and what the implications are.•


Sentient computers aren’t theoretically impossible, but no one–no one–can say when they’ll be a reality, not with any real confidence. In a Genetic Literacy Project piece about sexbots coming to (and in) your bedroom, David Warmflash stresses this very point. An excerpt:

Now, when we really imagine androids, most of us think of the super-intelligent human-looking beings that science fiction has dreamed up, such as Star Trek’s Data. To get there the field of AI needs to advance significantly. It is common these days for futurists to predict how much time it will be until humans create certain technologies imagined by science fiction. The predictions are made by calculating the present rate of technological progress in phenomena, such as computing power and speed. However, since they cannot really know anything about the obstacles that programmers and engineers will face along the way, the predictions are often wrong. In 2000, the popular futurist, transhumanist author Ray Kurzweil predicted this:

By 2009, computers will disappear. Visual information will be written directly onto our retinas by devices in our eyeglasses and contact lenses. In addition to high resolution virtual monitors appearing to hover in space, these intimate displays will provide full-immersion visual virtual reality. We will have ubiquitous very high bandwidth wireless connection to the Internet at all times. “Going to a Website” will mean entering a virtual reality environment–at least for the visual and auditory senses–where we will meet other real people. There will be simulated people as well, but these virtual personalities will not be up to human standards, at least not by 2009. The minuscule electronics powering these developments will be invisibly embedded in our glasses and clothing. Thus we won’t be searching for our misplaced mobile phones, Palms, notebooks, and other gadgets.

Becoming an android: Human mind uploading

While many of those predictions certainly could come true in the years to come, clearly they were too optimistic in 2000. When it comes to predicting when sentient computers will appear, things get even harder. Conventional computer programming is advancing at warp speed and works very well for a wide range of applications, from interpreting medical imaging data to controlling spacecraft, but the programmer needs to understand the system that the programming is designed to control. That simply does not work when the goal is to build a mind that learns, develops, and eventually thinks for itself. For this reason, AI scientists are using strategies inspired by evolutionary biology and neuroscience.

At the time of the Wright brothers, nobody could predict how long it would take before the first human moon landing. Similarly, today, we don’t know how long it will take for sentient machines to appear. All we can say is that, at some point, a sentient, artificial mind will probably be created.•


I wish Ray Kurzweil would live forever, but I fear he won’t make it.

Like almost everyone reading this (and writing it!), the brilliant inventor and futurist will likely die sometime in the twenty-first century. Kurzweil hopes to defy the odds–defy death itself–by taking a regimen of supplements which cost thousands of dollars a day, hoping he will remain alive and healthy until technology can make him immortal in one fashion or another. A passage from a new profile of the Googler by Caroline Daniel of the Financial Times:

Though the 67-year-old Kurzweil looks fresh-faced (he uses antioxidant skin cream daily), he is ageing, even if his “biological age comes out in the late forties. It hasn’t moved that much.”

But this is peanuts compared with Kurzweil’s ultimate goal: to live for ever. That means staying healthy enough to get to what he dubs “Bridge Two, when the biotechnology revolution will reprogramme our inherited biology”, and “Bridge Three”: molecular nanotechnology enabling us to rebuild our bodies.

Radical life extension has been on Kurzweil’s mind for decades. Today such sci-fi heroics to save mankind from death are being embraced by Silicon Valley’s tech elite. Billionaires such as Peter Thiel, PayPal co-founder, call death “the great enemy”; death is no longer seen as inevitable but as the latest evil to be “disrupted”. Google, too, has created a separate venture, Calico, to combat ageing. “I had a discussion two years ago with the head of Google Ventures about longevity. It resulted in Calico. I’m an adviser.

“I think every death is tragic. We’ve learnt to accept it, the cycle of life and all that, but humans have an opportunity to transcend beyond natural limitations. Life expectancy was 19 a thousand years ago. It was 37 in 1800. Everyone believes in life extension. Somebody comes out with a cure for disease, it’s celebrated. It’s not, ‘Oh, gee, that’s going to forestall death.’ ”

A scientist in Newsweek magazine in 2009 mocked Kurzweil, saying his was “the most public mid-life crisis” ever. “These are ad hominem attacks. There’s what I call ‘death-ist’ philosophy of people who celebrate death,” he responds.

Kurzweil claims the fundamental mistake his critics make is in believing progress is linear. This is his key thesis: “The reality of information technology is it progresses exponentially . . . 30 steps linearly gets you to 30. One, two, three, four, step 30 you’re at 30. With exponential growth, it’s one, two, four, eight. Step 30, you’re at a billion.”

If medical progress might once have been a hit and miss affair, he argues that we are now starting to understand “the software of life.”•
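Kurzweil’s “step 30, you’re at a billion” line is simple compounding, though it involves a round-up: thirty steps of “one, two, four, eight” actually land at 2^29, about 537 million, while thirty full doublings give 2^30, about 1.07 billion. A minimal sketch of the comparison he’s drawing:

```python
# Linear versus exponential progress over 30 steps, per Kurzweil's example above.
steps = 30

linear = [n for n in range(1, steps + 1)]                   # 1, 2, 3, ... 30
exponential = [2 ** (n - 1) for n in range(1, steps + 1)]   # 1, 2, 4, 8, ...

print(linear[-1])        # 30
print(exponential[-1])   # 536870912  (~0.5 billion after 30 such steps)
print(2 ** steps)        # 1073741824 (~1.07 billion after 30 full doublings)
```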


Sadly, Ray Kurzweil is going to die sometime this century, as are you and I. We’re not going to experience immortality of the flesh or have our consciousnesses downloaded into a mainframe. Those amazing options he thinks are near will be enjoyed, perhaps, by people in the future, not us. But I agree with Kurzweil on this: while AI may become an existential threat, that’s not necessarily a deal breaker. Without advanced AI and the exponential growth of other technologies, our species is doomed sooner rather than later, so let’s go forth boldly if cautiously. From Kurzweil in Time:

“Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of ‘the AI.’ Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.”


Google’s translated text isn’t perfect, but it’s far better than I could do. Of course, those algorithms aren’t conscious of their accomplishment, while I’m aware of my shortcoming. Erasing that distinction isn’t a bridge too far, but it’s going to take a long time to cross. EconTalk host Russ Roberts did an excellent podcast this week with cognitive scientist Gary Marcus about the future of AI. A couple of excerpts follow.

________________________________

Russ Roberts:

Now, to be fair to AI and those who work on it, I think, I don’t know who, someone made the observation but it’s a thoughtful observation that any time we make progress–well, let me back up. People say, ‘Well, computers can do this now, but they’ll never be able to do xyz.’ Then, when they learn to do xyz, they say, ‘Well, of course. That’s just an easy problem. But they’ll never be able to do what you’ve just said’–say–‘understand the question.’ So, we’ve made a lot of progress, right, in a certain dimension. Google Translate is one example. Siri is another example. Waze is a really remarkable, direction-generating GPS (Global Positioning System) thing for helping you drive. They seem sort of smart. But as you point out, they are very narrowly smart. And they are not really smart. They are idiot savants. But one view says the glass is half full; we’ve made a lot of progress. And we should be optimistic about where we’ll head in the future. Is it just a matter of time?

Gary Marcus:

Um, I think it probably is a matter of time. It’s a question of whether are we talking decades or centuries. Kurzweil has talked about having AI in about 15 years from now. A true artificial intelligence. And that’s not going to happen. It might happen in the century. It might happen somewhere in between. I don’t think that it’s in principle an impossible problem. I don’t think that anybody in the AI community would argue that we are never going to get there. I think there have been some philosophers who have made that argument, but I don’t think that the philosophers have made that argument in a compelling way. I do think eventually we will have machines that have the flexibility of human intelligence. Going back to something else that you said, I don’t think it’s actually the case that goalposts are shifting as much as you might think. So, it is true that there is this old thing that whatever used to be called AI is now just called engineering, once we can do it.

________________________________

Russ Roberts:

Given all of that, why are people so obsessed right now–this week, almost, it feels like–with the threat of super AI, or real AI, or whatever you want to call it, the Musk, Hawking, Bostrom worries? We haven’t made any progress–much. We’re not anywhere close to understanding how the brain actually works. We are not close to creating a machine that can think, that can learn, that can improve itself–which is what everybody’s worried about or excited about, depending on their perspective, and we’ll talk about that in a minute. But, why do you think there’s this sudden uptick, spike in focusing on the potential and threat of it right now?

Gary Marcus:

Well, I don’t have a full explanation for why people are worried now. I actually think we should be worried. I don’t understand exactly why there was such a shift in the public view. So, I wanted to write about this for The New Yorker a couple of years ago, and my editor thought, ‘Don’t write this. You have this reputation as this sober scientist who understands where things are. This is going to sound like Science Fiction. It will not be good for your reputation.’ And I said, ‘Well, I think it’s really important and I’d like to write about it anyway.’ We had some back and forth, and I was able to write some about it–not as much as I wanted. And now, yeah, everybody is talking about it. I don’t know if it’s because Bostrom’s book is coming out or because people, there’s been a bunch of hyping, AI stories make AI seem closer than it is, so it’s more salient to people. I’m not actually sure what the explanation is. All that said, here’s why I think we should still be worried about it. If you talk to people in the field I think they’ll actually agree with me that nothing too exciting is going to happen in the next decade. There will be progress and so forth and we’re all looking forward to the progress. But nobody thinks that 10 years from now we’re going to have a machine like HAL in 2001. However, nobody really knows downstream how to control the machines. So, the more autonomy that machines have, the more dangerous they are. So, if I have an Angry Birds App on my phone, I’m not hooked up to the Internet, the worst that’s going to happen if there’s some coding error maybe the phone crashes. Not a big deal. But if I hook up a program to the stock market, it might lose me a couple hundred million dollars very quickly–if I had enough invested in the market, which I don’t. But some company did in fact lose a hundred million dollars in a few minutes a couple of years ago, because a program with a bug that is hooked up and empowered can do a lot of harm. I mean, in that case it’s only economic harm; and [?] maybe the company went out of business–I forget. But nobody died. But then you raise things another level: If machines can control the trains–which they can–and so forth, then machines that either deliberately or unintentionally or maybe we don’t even want to talk about intentions: if they cause damage, can cause real damage. And I think it’s a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don’t even have a theory about how to regulate that. Now, anybody can build any kind of computer program they want. There’s very little regulation. There’s some, but very little regulation. It’s kind of, in little ways, like the Wild West. And nobody has a theory about what would be better. So, what worries me is that there is at least potential risk. I’m not sure it’s as bad as like, Hawking, said. Hawking seemed to think like it’s like night follows day: They are going to get smarter than us; they’re not going to have any room for us; bye-bye humanity. And I don’t think it’s as simple as that. The world being machines eventually that are smarter than us, I take that for granted. But they may not care about us, they might not wish to do us harm–you know, computers have gotten smarter and smarter but they haven’t shown any interest in our property, for example, our health, or whatever. So far, computers have been indifferent to us.•


Speaking of human laborers being squeezed: Open Source with Christopher Lydon has an episode called “The End of Work,” with two guests, futurist Ray Kurzweil and MIT economist Andrew McAfee. A few notes.

  • McAfee sees the Technological Revolution as doing for gray matter what the Industrial Revolution did for muscle fiber, but on the way to a world of wealth without toil–a Digital Athens–the bad news is the strong chance of greater income inequality and decreased opportunities for many. Kodak employed 150,000; Instagram a small fraction of that. With the new technologies, destruction (of jobs) outpaces creation. Consumers win, but Labor loses.
  • Kurzweil is more hopeful in the shorter term than McAfee. He says we have more jobs, and more gratifying ones, today than 100 years ago, and they pay better. We accomplish more. Technology will improve us, make us smarter, to meet the demands of a world without drudgery. It won’t be us versus the machines, but the two working together. The majority of jobs always go away; most of today’s jobs didn’t exist so long ago. New industries will be invented to provide work. He doesn’t acknowledge a painful period of adjustment in distribution before abundance can reach all.


Steve Martin and Richard Feynman had a similar idea: Let’s get small. As we can now put the Encyclopedia Britannica on the head of a pin, we’ll eventually place nanobots inside of people to regulate health and cure illnesses, though I will guess it’ll take substantially longer than the boldest projections. From Diane Ackerman’s new book The Human Age, via Delancey Place:

“The nanotechnology world is a wonderland of surfaces unimaginably small, full of weird properties, and invisible to the naked eye, where we’re nonetheless reinventing industry and manufacturing in giddy new ways. Nano can be simply, affordably lifesaving during natural disasters. The 2012 spate of floods in Thailand inspired scientists to whisk silver nanoparticles into a solar-powered water filtration system that can be mounted on a small boat to purify water for drinking from the turbid river it floats on.

In the Namibian desert, inspired by water-condensing bumps on the backs of local beetles, a new breed of water bottle harvests water from the air and refills itself. The bottles will hit the market in 2014, for use by both marathon runners and people in third-world countries where fresh water may be scarce. South African scientists have created water-purifying tea bags. Nano can be as humdrum as the titanium dioxide particles that thicken and whiten Betty Crocker frosting and Jell-O pudding. It can be creepy: pets genetically engineered with firefly or jellyfish protein so that they glow in the dark (fluorescent green cats, mice, fish, monkeys, and dogs have already been created). It can be omnipresent and practical: the army’s newly invented self-cleaning clothes. It can be unexpected, as microchips embedded in Indian snake charmers’ cobras so that they can be identified if they stray into the New Delhi crowds. Or it can dazzle and fill us with hope, as in medicine, where it promises nano-windfalls. …

The futurist Ray Kurzweil predicts that ‘by the 2030s we’ll be putting millions of nanobots inside our bodies to augment our immune system, to basically wipe out disease. One scientist cured Type I diabetes in rats with a blood-cell-size device already.’”


Here’s another video that’s popped up again after being unavailable for a spell. It’s narrated 1977 footage of innovations designed to aid the deaf and blind. At the 3:40 mark, there’s excellent footage of the Kurzweil Reading Machine and its inventor.


Marvin Minsky, visionary of robotic arms, thinking computers and major motion pictures, is interviewed by Ray Kurzweil. The topic, unsurprisingly: “Is the Singularity Near?”


Ray Kurzweil believes humans will be immortal one day, and that it will be sooner than you or I might imagine. And he doesn’t worry about brain implants or the like altering our identity, since identity is already naturally fluid. From some of his thoughts about a path to forever in a new Wall Street Journal article by Alexandra Wolfe:

“He thinks that humans will one day be able to live indefinitely, but first we must cross three ‘bridges.’

The first of these is staying healthy much longer. To that end, the smooth-skinned and youthful Mr. Kurzweil consumes 120 vitamins and supplements every day, takes nutrients intravenously (so that his body can absorb them better), drinks green tea and exercises regularly. That regimen keeps his ‘real age’ in the 40s, he says.

The second bridge is reprogramming our biology, which began with the Human Genome Project and includes, he says, the regeneration of tissue through stem-cell therapies and the 3-D printing of new organs.

We will cross the third and final bridge, he says, when we embed nanobots in our brains that will affect our intelligence and ability to experience virtual environments. Nanobots in our bodies will act as an extension of our immune system, he says, to identify and destroy pathogens our own biological cells can’t.

Mr. Kurzweil projects that the 2030s will be a ‘golden era,’ a time of revolution in how medicine is practiced. He compares the human body to a car. ‘Isn’t there a natural limit to how long an automobile lasts?’ he asks. ‘However, if you take care of it and if anything goes wrong, you fix it and maybe replace it, it can go on forever.’ He sees no reason that technology can’t do the same with human parts. The body is constantly changing already, he says, with cells replacing themselves every few days to months.

His vision of the future raises the question of what it means to be human. Yet he believes that adding technology to our bodies doesn’t change our essence. ‘The philosophical issue of identity is, Am I the same person as I was six months ago?’ he says. ‘There’s a continuity of identity.'”


In a Popular Science piece, Erik Sofge offers a smackdown of the Singularity, thinking it less science than fiction. An excerpt:

“The most urgent feature of the Singularity is its near-term certainty. [Vernor] Vinge believed it would appear by 2023, or 2030 at the absolute latest. Ray Kurzweil, an accomplished futurist, author (his 2006 book The Singularity is Near popularized the theory) and recent Google hire, has pegged 2029 as the year when computers will match and exceed human intelligence. Depending on which luminary you agree with, that gives humans somewhere between 9 and 16 good years, before a pantheon of machine deities gets to decide what to do with us.

If you’re wondering why the human race is handling the news of its impending irrelevance with such quiet composure, it’s possible that the species is simply in denial. Maybe we’re unwilling to accept the hard truths preached by Vinge, Kurzweil and other bright minds.

Just as possible, though, is another form of denial. Maybe no one in power cares about the Singularity, because they recognize it as science fiction. It’s a theory that was proposed by a SF writer. Its ramifications are couched in the imagery and language of SF. To believe in the Singularity, you have to believe in one of the greatest myths ever told by SF—that robots are smart, and always on the verge of becoming smarter than us.

More than 60 years of AI research indicates otherwise.”


John Adams wasn’t thinking specifically of technology when he said the following, but he might as well have been: “I must study politics and war, that my sons may have the liberty to study mathematics and philosophy, geography, natural history, and naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry and porcelain.” 

Eventually, and not too far off in the future, Amazon won’t need any human pickers to pull items from its inventory shelves, 3-D printers will allow ideas to spring fully grown from our heads, and no one will have much use for a human taxi driver. It will all be AI. That’s great, though it does pose some new problems. Chiefly, how do we reconcile what’s largely a free-market economy with one that’s “post-jobs” to a certain degree, at least if we’re talking about the type of traditional work our society is built on? We may become wealthier as a people, but how does that wealth reach the people? That could make for a messy transition, and to some extent it already has. Another question: what do we do with ourselves if toil is a thing of the past and the other challenges we thought were our own are assumed by silicon?

From Ray Kurzweil’s site, an exchange between a reader and the futurist about molecular assemblers, which may take building out of our hands, freeing them perhaps for painting and statuary but more likely for some yet unknown tasks:

Question:

Suppose molecular assemblers are indeed proven to be feasible on a large scale and we are given an infinite abundance to produce as much as we want — limited only by the amount of matter in our vicinity — with minimal effort.

If this scenario comes to fruition, how will humans be able to cope with the lack of challenges in their lives? It seems like with assemblers there will be very little incentive to do anything.

Since everything could be obtained effortlessly through assemblers, there appears to be little purpose to hold a job, since all possessions could be obtained for free.

Ray Kurzweil:

Future molecular assemblers will make physical things, but not create new knowledge.

We are doubling knowledge about every year and that will remain a challenge requiring increasing levels of intelligence.•


A passage from Carole Cadwalladr’s new Guardian profile of futurist and Google employee Ray Kurzweil, who is often, though not always, right when making his bold predictions about technology:

“Bill Gates calls him ‘the best person I know at predicting the future of artificial intelligence.’ He’s received 19 honorary doctorates, and he’s been widely recognised as a genius. But he’s the sort of genius, it turns out, who’s not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.

But then, he has other things on his mind. The future, for starters. And what it will look like. He’s been making predictions about the future for years, ever since he realised that one of the key things about inventing successful new products was inventing them at the right moment, and ‘so, as an engineer, I collected a lot of data.’ In 1990, he predicted that a computer would defeat a world chess champion by 1998. In 1997, IBM’s Deep Blue defeated Garry Kasparov. He predicted the explosion of the world wide web at a time it was only being used by a few academics and he predicted dozens and dozens of other things that have largely come true, or that will soon, such as that by the year 2000, robotic leg prostheses would allow paraplegics to walk (the US military is currently trialling an ‘Iron Man’ suit) and ‘cybernetic chauffeurs’ would be able to drive cars (which Google has more or less cracked).

His critics point out that not all his predictions have exactly panned out (no US company has reached a market capitalisation of more than $1 trillion; ‘bioengineered treatments’ have yet to cure cancer). But in any case, the predictions aren’t the meat of his work, just a byproduct. They’re based on his belief that technology progresses exponentially (as is also the case in Moore’s law, which sees computers’ performance doubling every two years). But then you just have to dig out an old mobile phone to understand that. The problem, he says, is that humans don’t think about the future that way. ‘Our intuition is linear.’

When Kurzweil first started talking about the ‘singularity,’ a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – will be passed in 2029. The difference is that when he began saying it, the fax machine hadn’t been invented. But now, well… it’s another story.”


Computer dating, run on hulking IBM mainframes, stretches back at least to the 1960s (listen here to a 50-year-old radio report about it). But when futurist Ray Kurzweil talks about computer dating, he doesn’t think of the machine as a middleman but as a ladies’ man (or lady or some other variation on the theme). It’s disquieting to a lot of us, but is it just around the bend? The opening of Ben Child’s Guardian article about Kurzweil’s recent review of Spike Jonze’s Her:

“It might just be music to the ears of lovelorn geeks prepared to wait another 15 years to meet the love of their lives: a prominent futurologist has claimed that AI girlfriends (and presumably boyfriends) like the one played by Scarlett Johansson in the Oscar-nominated film Her could become a reality by 2029.

Ray Kurzweil, an inventor and Google’s director of engineering, makes the claim in a review of Spike Jonze’s much-praised sci-fi romance. In a post on his website, Kurzweil delivered a generally positive verdict on the film, which stars Joaquin Phoenix as a man called Theodore who falls in love with his operating system, Samantha, before moving on to its technological implications.”


The opening of Ray Kurzweil’s compelling review of the Oscar-nominated Her, a near-future film he sees as nearer than most do:

“Her, written, directed and produced by Spike Jonze, presents a nuanced love story between a man and his operating system.

Although there are caveats I could (and will) mention about the details of the OS and how the lovers interact, the movie compellingly presents the core idea that a software program (an AI) can — will — be believably human and lovable.

This is a breakthrough concept in cinematic futurism in the way that The Matrix presented a realistic vision that virtual reality will ultimately be as real as, well, real reality.

Jonze started his feature-motion-picture career directing Being John Malkovich, which also presents a realistic vision of a future technology — one that is now close at hand: being able to experience reality through the eyes and ears of someone else.

With emerging eye-mounted displays that project images onto the wearer’s retinas and also look out at the world, we will indeed soon be able to do exactly that. When we send nanobots into the brain — a circa-2030s scenario by my timeline — we will be able to do this with all of the senses, and even intercept other people’s emotional responses.”


Ray Kurzweil, always looking forward, believing that then is actually now, discusses how the computer, which used to be all the way across campus and is now in our pockets, will soon be within us, like a pacemaker.

