Two thoughts about the intersection of human and artificial intelligence:
- If we survive other existential risks long enough, we’ll eventually face the one posed by superintelligence. Or perhaps not. That development isn’t happening today or tomorrow, and by the time it does, machine learning might be embedded within us. Maybe a newly engineered version of ourselves is the next step. We won’t be the same, no, but we’re not meant to be. Once evolution stops, so do we.
- The problem of understanding the human brain will someday be solved. That will be a medical boon in many ways, but there’s some question as to whether this giant leap for humankind is necessary to create intelligent, conscious machines. The Wright brothers didn’t need to simulate the flapping wings of birds in creating the Flyer. Maybe we can put the “ghost” in the machine before we even fully understand it? I would think the brain work will be done first, given the earnest way it’s being pursued by governments and private entities, but I wonder whether it has to be.
From Ariana Eunjung Cha’s Washington Post piece about Paul Allen’s dual brain projects:
Although today’s computers are great at storing knowledge, retrieving it and finding patterns, they are often still stumped by a simple question: “Why?”
So while Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana — despite their maddening quirks — do a pretty good job of reminding you what’s on your calendar, you’d probably fire them in less than a week if you put them up against a real person.
That will almost certainly change in the coming years as billions of dollars in Silicon Valley investments lead to the development of more sophisticated algorithms and upgrades in memory storage and processing power.
The most exciting — and disconcerting — developments in the field may be in predictive analytics, which aims to make an informed guess about the future. Although it’s currently mostly being used in retail to figure out who is more likely to buy, say, a certain sweater, there are also test programs that attempt to figure out who might be more likely to get a certain disease or even commit a crime.
Google, which acquired AI company DeepMind in 2014 for an estimated $400 million, has been secretive about its plans in the field, but the company has said its goal is to “solve intelligence.” One of its first real-world applications could be to help self-driving cars become better aware of their environments. Facebook chief executive Mark Zuckerberg says his social network, which has opened three different AI labs, plans to build machines “that are better than humans at our primary senses: vision, listening, etc.”
All of this may one day be possible. But is it a good idea?•