What We Won’t Learn from IBM’s Watson Playing Jeopardy!


The game show Jeopardy! will host an unusual array of contestants on February 14-16, 2011. Two of the show’s superstars, Ken Jennings and Brad Rutter, will be pitted against Watson, an artificial intelligence (AI) created by IBM engineers.

Watson will not teach the world anything new about generalized artificial intelligence. In the moment, as Watson answers questions, it will look impressive, but it retains the fragility of all AI systems: it can only do what it was programmed to do.

Watson, like many proofs-of-concept, has some very specific goals. It is designed to understand Jeopardy!’s answer-as-question format and respond with the correct tidbit of knowledge as rapidly as a human. Watson will have to interpret the rather obscure language used in the show to beat its human competitors. The IBM press release states that “The Jeopardy! format provides the ultimate challenge because the game’s clues involve analyzing subtle meaning, irony, riddles, and other complexities in which humans excel and computers traditionally do not.”

My skepticism about Watson comes from the necessity, and perhaps propensity, of engineers to break problems into tractable segments. Rather than solving for human language understanding, Watson was specifically coded to focus on questions and answers. Move outside its anticipated interface and it can’t adapt.

Watson follows in the footsteps of the Microelectronics and Computer Technology Corporation’s (MCC) CYC project, which was launched in 1984 to meet another challenge, that of the Japanese Fifth Generation Project (see “‘Fifth Generation’ Became Japan’s Lost Generation,” NY Times, June 4, 1992). CYC continues on as Cycorp (www.cyc.com), but it has not yet achieved its earliest goal: reading a newspaper and interpreting the text correctly enough to converse with a person about what it read. Watson may move the needle on natural language interpretation, but it is a one-way journey. It may better understand our questions, but engineering constraints leave it limited to the terseness of the Jeopardy! question format for responses.

Watson may well prove a wonder at finding specific information to answer questions, but it is not going to be able to provide insight or interpretation. If the Jeopardy! team comes up with a clue like “This separates the interior of a cell from its external environment,” Watson, and its human opponents, are likely to respond with “cell membrane.” The humans, however, might be able to riff on the concept of a cell membrane. They would be able to talk about things like cell adhesion, ion conductivity and cell signaling. A Trekker might reference Star Trek episode #69, in which Lee Meriwether plays an alien projection that disrupts every cell in a human body. Watson would stand mutely by, waiting for another prompt.

Watson may teach us something even more important than how to understand punny language: how humans behave when a black box makes them look bad. IBM’s Deep Blue beat Garry Kasparov at chess (see “Kasparov vs. Deep Blue” on Chess Corner), Kasparov accused IBM of cheating, and Deep Blue was dismantled without a rematch. It will be interesting to watch Ken and Brad at moments when Watson outdraws them. Watson may win Jeopardy!, but it isn’t going to be able to share anecdotes from its life during the interview segment, and it won’t take any pride or joy in its accomplishments, or express frustration when too slow on the draw, or just off on an answer. And aren’t those the real signs we look for when we judge intelligence?

 

What Quora Adds to the Conversation About Questions

Regardless of Watson’s success on Jeopardy!, it will remain locked into its formatted paradigm. The advent of Quora demonstrates the need for non-algorithmic responses to human queries. Quora uses a Q&A model to facilitate the exploration of issues by its subscribers. What Quora illustrates is that in many cases there is no right or wrong answer to a question; questions are tied to opinion, especially in emergent fields and in areas such as ethics, politics, aesthetics and psychology. Watson may well be poised to be the best retriever of fact we can currently create, but much of our longing for answers goes beyond the simple retrieval of fact or anecdote. In many cases our questions are about wrestling with ideas, about struggling with a concept alongside our peers, about finding meaning, not fact.

Watson does not move that kind of questioning forward. Quora demonstrates a major technology question we must grapple with: what is the balance between automation and facilitation? As we continue to face a jobless recovery, technology can make things worse by replacing people with automation, or it can help expand the economy by connecting people, facilitating the exchange of ideas and opening up new opportunities for innovation. The two are not mutually exclusive, but the dichotomy needs to be recognized before it can be appropriately addressed by those making choices about our economic future.

Daniel W. Rasmus

Daniel W. Rasmus, Founder and Principal Analyst of Serious Insights, is an internationally recognized speaker on the future of work and education. He is the author of several books, including Listening to the Future and Management by Design.
