William Herkewitz


Data is opportunity, and computational models fed with biological information may even be able to forecast some evolutionary trajectories. From William Herkewitz in Popular Mechanics:

“With the flu, every year is a rapid arms race. Countless numbers of viruses are pushed to mutate and spread over and over again, transforming until they can finally evade the human antibodies that have built up to fight off their predecessors. This shifting landscape poses a particular problem for the people in charge of reformulating the annual flu vaccine. How do you prepare for the onslaught of a virus that doesn’t exist yet?

Two computational biologists have just unveiled the first computer model that forecasts the yearly changes in the worldwide populations of flu viruses. As they report today in Nature, their model will have immediate impact in the development of flu vaccines—and proves that in some cases, projecting evolutionary change may not be beyond our reach.

‘We don’t actually predict new mutations in the flu virus,’ says Marta Luksza, one of the scientists at Columbia University. ‘Our model only considers the rise and fall of families of closely related viruses.’ Still, the computer model has proven that it can predict with 93 percent accuracy which families will harbor the most widespread viruses in the upcoming year.”
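The excerpt doesn't reproduce the paper's actual fitness model, but the core idea, tracking the rise and fall of families of closely related viruses rather than individual mutations, can be sketched as a toy frequency projection. Everything below (the clade names, the two-season history, the exponential-growth assumption) is a hypothetical illustration, not the model described in the Nature paper.

```python
# Toy sketch of clade-frequency forecasting (illustrative only, not the
# fitness model from the Nature paper). Assumes each family of related
# viruses grows or shrinks exponentially at a rate estimated from the two
# most recent seasons.
import math

# Hypothetical observed frequencies of three flu clades over two past seasons.
past = {
    "clade_A": (0.20, 0.35),
    "clade_B": (0.50, 0.40),
    "clade_C": (0.30, 0.25),
}

def forecast_next_season(freqs):
    """Project each clade one season ahead from its recent log-growth rate."""
    projected = {}
    for clade, (prev, curr) in freqs.items():
        growth = math.log(curr / prev)      # estimated log-growth per season
        projected[clade] = curr * math.exp(growth)
    total = sum(projected.values())         # renormalize to a frequency distribution
    return {clade: p / total for clade, p in projected.items()}

prediction = forecast_next_season(past)
print(prediction)                           # clade_A rises to roughly 0.54
print("Predicted dominant clade:", max(prediction, key=prediction.get))
```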


Douglas Hofstadter, cognitive scientist and author of Gödel, Escher, Bach, explains why Watson and Siri aren’t true AI and why the field lost its way decades ago, in a Q&A conducted by William Herkewitz at Popular Mechanics, which has a terribly designed website. The opening:

Question:

You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Douglas Hofstadter:

Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level—things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking—that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.
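Hofstadter's description of Watson as "a text search algorithm connected to a database" can be made concrete with a deliberately crude sketch: rank stored passages by how many words they share with the question and return the best match. This is a caricature for illustration only, not Watson's actual pipeline, but it shows how an answer can be retrieved without any representation of what either text means.

```python
# Toy illustration of retrieval without comprehension (a caricature, not
# Watson's actual architecture): rank stored passages purely by how many
# words they share with the question, with no model of meaning.
import string

corpus = [
    "The Eiffel Tower was completed in 1889 in Paris.",
    "Mount Everest is the highest mountain above sea level.",
    "Jupiter is the largest planet in the Solar System.",
]

def tokens(text):
    """Lowercase and strip punctuation; no linguistic analysis of any kind."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def best_passage(question, passages):
    """Return the stored passage with the largest word overlap with the question."""
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

print(best_passage("What is the largest planet?", corpus))
# -> "Jupiter is the largest planet in the Solar System."
```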

Question:

Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

Douglas Hofstadter:

I can’t really predict that. But what I can say is that I’ve monitored Google Translate—which uses a similar approach—for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.
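The "words triggering other words" point can likewise be illustrated with a toy word-for-word substitution. This is emphatically not how Google Translate works, but it shows what gets lost when translation operates on words rather than ideas; the mini-lexicon below is invented for the demo.

```python
# Caricature of "words triggering other words" (not Google Translate's actual
# system): a word-for-word English-to-French lookup with no notion of the
# idea being expressed. The mini-lexicon is invented for this demo.
lexicon = {
    "it": "il", "is": "est", "raining": "pleut",
    "cats": "chats", "and": "et", "dogs": "chiens",
}

def word_by_word(sentence):
    """Substitute each word independently; unknown words pass through unchanged."""
    return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

print(word_by_word("It is raining cats and dogs"))
# -> "il est pleut chats et chiens": ungrammatical, and the idiom is lost,
#    because no idea was ever represented, only word-level substitutions.
```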

Question:

So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

Douglas Hofstadter:

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say, though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically.
