Blake Masters’ blog has notes from, and ideas about, Peter Thiel’s recent Stanford address, “The Future of Legal Technology.” Below is an exchange from the audience Q&A which shows, among other things, that we can sometimes mistake error for genius (two short illustrative sketches follow the quote):
“Question:
What is your take on building machines that work just like the human brain?
Peter Thiel:
If you could model the human brain perfectly, you could probably build a machine version of it. There are all sorts of questions about whether this is possible.
The alternative path, especially in the short term, is smart but not AI-smart computers, like chess computers. We didn’t model the human brain to create these systems. They crunch moves. They play differently and better than humans. But they don’t use the same processes. So most AI that we’ll see, at least at first, is likely to be soft AI that’s decidedly non-human.
Question:
But chess computers aren’t even soft AI, right? They’re all programmed. If we just had enough time to crunch the moves and look at the code, we’d know what’s going on, right? So their moves are perfectly predictable.
Peter Thiel:
Theoretically, chess computers are predictable. In practice, they aren’t. Arguably it’s the same with humans. We’re all made of atoms. Per quantum mechanics and physics, all our behavior is theoretically predictable. That doesn’t mean you could ever really do it.
Question:
There’s the anecdote about Kasparov resigning after Deep Blue made a bizarre move, one he fatalistically interpreted as a sign that the computer had worked dozens of moves ahead. In reality, the move was caused by a bug.
Peter Thiel:
Well… I know Kasparov pretty well. There are a lot of things that he’d say happened there…” (Thanks Browser.)
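
On “they crunch moves”: here is a minimal sketch, in Python, of the brute game-tree search at the heart of chess programs. To keep it self-contained it plays the toy game of Nim (take 1-3 stones; taking the last stone wins) rather than chess, and a real engine adds alpha-beta pruning, evaluation heuristics, and heavy engineering; nothing here is Deep Blue’s actual code.

```python
# A bare-bones illustration of "crunching moves": exhaustive game-tree
# search over single-pile Nim. Players alternate taking 1-3 stones;
# whoever takes the last stone wins. No brain modeling anywhere,
# just enumeration of move sequences.

def can_force_win(stones: int) -> bool:
    """True if the player to move can force a win with perfect play."""
    # A position is winning iff some move leads to a losing position
    # for the opponent. With no stones left, the mover has already lost.
    return any(not can_force_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, else None."""
    for take in (1, 2, 3):
        if take <= stones and not can_force_win(stones - take):
            return take
    return None

if __name__ == "__main__":
    for n in range(1, 10):
        print(f"{n} stones -> take {best_move(n)}")
    # Losing positions turn out to be the multiples of 4, which the
    # search discovers by brute force, not by knowing the theory.
```

The search plays “differently and better” than human intuition for the same reason a chess engine does: it settles positions by enumeration, not by anything resembling thought.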
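And on “theoretically predictable, in practice not”: the arithmetic is easy to check. Using standard ballpark figures (roughly 35 legal moves per chess position, games of about 80 half-moves), the full game tree holds on the order of 10^124 positions, in the neighborhood of Shannon’s classic 10^120 estimate and vastly more than the ~10^80 atoms in the observable universe, so “enough time to crunch the moves” never arrives.

```python
# Back-of-envelope check on why chess is predictable only in theory.
# The branching factor and game length are common rough estimates,
# not measured values.

import math

branching_factor = 35   # ~legal moves per position (ballpark)
plies = 80              # ~half-moves in a typical game (ballpark)

positions = branching_factor ** plies
print(f"game tree ~ 10^{math.log10(positions):.0f} positions")  # ~10^124

atoms_in_universe = 10 ** 80
print(f"ratio to atoms in the universe ~ 10^"
      f"{math.log10(positions / atoms_in_universe):.0f}")       # ~10^44
```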