Short version: Watson decimates the Champs. Here's how. [Thanks, once again, to Rashad8821 at YouTube.]
Watson keeps beating them to the buzzer so consistently that you have to wonder whether the text of the question somehow reaches Watson faster than it does the human contestants. Interestingly, however, with a full 30 seconds at its disposal, it gets the final question completely wrong -- when did Toronto become a US city? [For an explanation of how Watson might have been confused by the clue, read this post at IBM's Smarter Planet blog.]
4 Comments:
The first link (by Greg Lindsey) says that Watson is consistent with the buzzer, while the human participants are not. Even microseconds might matter: apparently, if you buzz too early, you are locked out for a split second. So maybe Watson's buzzer should have had some randomness built into it (to mimic a typical human participant, or a champ).
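To see why consistency at the buzzer matters so much, here is a minimal simulation of the dynamic described above. All timing values (reaction times, jitter, lockout penalty) are made-up illustrative assumptions, not measurements from the show:

```python
import random

# Hypothetical lockout penalty, in seconds, for buzzing before the signal.
LOCKOUT_PENALTY = 0.25

def human_buzz(intent_time: float, jitter_sd: float = 0.05) -> float:
    """A human aims for intent_time after the signal, but lands with
    Gaussian jitter. Buzzing early (attempt < 0) triggers the lockout."""
    attempt = intent_time + random.gauss(0.0, jitter_sd)
    if attempt < 0.0:
        # Buzzed before the light: locked out for a split second.
        return LOCKOUT_PENALTY + abs(attempt)
    return attempt

def machine_buzz(reaction_time: float = 0.01) -> float:
    """A machine buzzes a fixed, tiny delay after the signal -- never early."""
    return reaction_time

def simulate(trials: int = 10_000) -> float:
    """Fraction of trials in which the machine wins the buzz."""
    wins = sum(machine_buzz() < human_buzz(intent_time=0.02)
               for _ in range(trials))
    return wins / trials
```

With these (assumed) numbers, the human either buzzes a hair late or gets locked out for trying to anticipate the signal, so the perfectly consistent machine wins the large majority of buzzes -- which is the point Sridhar's comment is making.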
Sridhar
Watson gets a text file with the question. That seems to give him an edge at the buzzer over the others who either read the question or listen to it.
@Sridhar, @Ankur: The buzzer stuff is only an interesting aside -- it makes sense only in the context of a game-show format like Jeopardy's. Within that context, sending the question to Watson in text form appears to be a major factor. I just don't know exactly when the question gets sent to Watson -- as soon as Alex starts reading it, or only when he has finished? The difference could be a couple of seconds, which is huge on Watson's time scale.
But this discussion about being first to buzz should not take anything away from the machine's amazing capabilities in parsing natural language. I'm sure you noticed the graphic, shown throughout the show, with Watson's guesses -- which were right, like, 80+ percent of the time! Isn't that awesome?
What I am looking forward to is an analysis of the kinds of mistakes (of which there were quite a few on the third day) Watson made...
Abi, actually I have been planning to write about this and will do soon.
Some IBM people came to UIUC recently and I asked them the same basic question I asked on your blog: is Watson optimized to play Jeopardy? In other words, would it also be able to, say, hold a conversation with a human? My worry was that this performance could also be achieved if it was drilled, as in a coaching class, on many, many Jeopardy-type questions, independently of having any real natural-language capability. (I know this is an extreme example, but an example nonetheless.)
I got confusing answers from them -- perhaps the specific people I spoke to didn't know enough. They gave me a white paper which apparently explains quite a bit. I hope to read it soon and will post a review on my Buzz. For the moment, I like Watson, but I am wary of all the PR.