Watson and Jeopardy and Why I’m Only Marginally Impressed
Well, Watson beat its human competitors on Jeopardy, but I think the challenge Ken Jennings gave to IBM today sums up the one-trick pony aspect of Watson: Let's put it on Dancing with the Stars to see what it can really do.
Watson could not see or hear, and it could not write, let alone dance. Watson was built precisely to win at Jeopardy, just as Deep Blue was built to win at chess. Deep Blue could not play Jeopardy, and although Watson could probably name chess moves and players, it cannot play chess. Watson is an interesting advance, but it is not a universal one. Some of Watson's wrong answers were indeed very poor choices.

Like the researchers working on DARPA's Urban Challenge (and its earlier desert version), IBM's team made little tweaks to algorithms and sensors to optimize the device for the terrain or knowledge required to meet the challenge. That is a very different approach than human learning. For the most part, these machines have very limited ability to learn on their own, and when they do learn, the domain is very constrained. Watson could have its performance improved and be even more impressive, but even more impressive at Jeopardy, and not much else.

Watson may well become the grandfather of a great answer machine in the sky, providing facts and figures faster than Google, but it still won't be able to tell me the best source of knowledge about something, or help me figure out what I really want to ask it. Those skills are entirely different from Watson's current algorithms, as are voice understanding and penmanship.
NPR ran a good interview today: hear today's All Things Considered conversation with Ken Jennings and Watson's principal investigator, IBM's David Ferrucci.
Daniel W. Rasmus
Daniel W. Rasmus, Founder and Principal Analyst of Serious Insights, is an internationally recognized speaker on the future of work and education. He is the author of several books, including Listening to the Future and Management by Design.