Watson, game 2

Not much new to report after the second game of the Watson Jeopardy Challenge. I've added a few updates to yesterday's post on Watson. The result was as expected, though Watson struggled a lot more in this game than in the prior round, declining to answer many questions due to low confidence and making a few mistakes. In a few cases it was saved by not buzzing fast enough: even though it had over 50% confidence, it would have answered slightly wrong.

Some quick updates from yesterday, which you will also find in the comments:

  • Toronto's 2nd busiest airport, the small Island airport, has the official but rarely used name of Billy Bishop. Bishop was one of the top flying aces of WWI, not WWII. The reasoning behind Watson's answer is still not clear, but that it made mistakes like this is not surprising. That it made so few is what is surprising.
  • You can buzz in as soon as Trebek stops speaking. If you buzz early, you can’t buzz again for 0.2 seconds. Watson gets an electronic signal when it is time to buzz, and then physically presses the button. The humans get a light, but they don’t bother looking at it, they try timing when Trebek will finish. I think this is a serious advantage for Watson.
  • This IBM Blog Post gives the details on the technical interface between Watson and the game.
  • Watson may have seemed confident with its large bet of $17,973, but in fact the size of the bet was dictated by the scores:
    • Had Jennings bet his whole purse (and gotten it right), he would have ended up with $41,200.
    • If Watson had lost its bet of $17,973, it would have ended up with $41,201 and a bare victory.
    • Both got it right, and Jennings bet low, so it ended up being $77,147 to $24,000.
    • Jennings' low bet was wise, as it assured him of 2nd place and a $300K purse instead of $200K. Knowing he could not beat Watson unless Watson bet stupidly, he did the right thing.
    • Jennings could still have bet more and kept 2nd place, but there was no value in it; the 2nd-place purse is always $300K.
    • If Watson had wanted to second-guess, it might have realized Jennings would do this and bet accordingly, but that's not something you can do more than once.
    • As you might expect, the team put a great deal of thought into the betting algorithm, as betting is one thing a computer can sometimes do perfectly; the wager arithmetic is sketched after this list. I've often seen Jeopardy players lose from bad betting.
  • It still sure seemed like a program sponsored by IBM. But I think it would have been nice if the PI of DeepQA had been allowed up on stage for the handshake.
  • I do wish they had programmed a bit of a sense of humour into Watson. Fake, but fun.
  • Amusingly, Watson got a category about computer keyboards and didn't understand it.
  • Unlike the human players, who will hit the buzzer before they have formed the answer in their minds, in the hope that they know it, Watson does not buzz unless it has computed a high-confidence answer.
  • Watson would have bombed on visual or audio clues. The show has a rule allowing those to be removed from the game for a disabled player, and that rule was applied here!
  • A few of the questions had some interesting ironies based on what was going on. I wonder if that was deliberate or not. To be fair, I would think the question-writers would not be told what contest they were writing for.
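To make the wager arithmetic from the betting item above concrete, here is a minimal sketch in Python. It only illustrates the reasoning described in the post; it is not IBM's actual betting code, and Watson's pre-Final total is inferred by working backwards from the post's figures.

```python
# Sketch of the "bare victory" wager: bet as much as possible while still
# finishing $1 ahead even if Watson is wrong and Jennings bets his whole
# purse and is right. Scores are inferred from the figures in the post.

def max_safe_wager(leader_total, challenger_best_case):
    """Largest Final Jeopardy wager the leader can make and still finish
    at least $1 ahead of the challenger's best possible total, even if
    the leader's own answer is wrong."""
    return leader_total - challenger_best_case - 1

watson_before_final = 77_147 - 17_973  # 59,174, working back from the final score
jennings_best_case = 41_200            # from the post: Jennings betting everything and being right

print(max_safe_wager(watson_before_final, jennings_best_case))  # -> 17973
```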

Watson, come here, I want you

The computer science world is abuzz, along with the game show world, over the showdown between IBM's "Watson" question-answering system and the best human players ever to play the game Jeopardy. The first game has been shown, with a crushing victory by Watson (in spite of a tie after the first half of the game).

Tomorrow's outcome is not in doubt. IBM would not have declared itself ready for the contest without being confident it would win, and it wouldn't be putting out all this advertising about the contest if it had lost. What's interesting is how they did it and what else they will be able to do with it.

Dealing with a general question has long been one of the hard problems in AI research. Watson isn't quite there yet, but it has managed a great deal by combining algorithmic parsing and understanding with machine learning based on prior Jeopardy games. That's a must, because Jeopardy "answers" (clues) are often written in obfuscated styles, with puns and many idioms, exactly the sorts of things most natural language systems have had a very hard time with.

Watson's problem is almost all in understanding the question. Looking up obscure facts is not nearly so hard if you have a copy of Wikipedia and other databases on hand, particularly ones parsed with other state-of-the-art natural language systems, which is what I presume they have. In fact, one would predict that Watson would do best on the hardest $2,000 questions, because these are usually hard due to obscure knowledge, not because the question is harder to understand. I expect that an evaluation of its results may show that its performance on hard questions is not much worse than on easy ones. (The main thing that makes easy questions easier would be the large number of articles in its database confirming the answer, presumably boosting its confidence in its answer.) However, my intuition may be wrong here, in that most of Watson's problems came on the high-value questions.

Its confidence is important. If it does not feel confident, it doesn't buzz in. And it has a serious advantage at buzzing in, since you can't buzz in right away in this game, and if you're an encyclopedia like the two human champions and Watson, buzzing in is a large part of the game. In fact, a fairer game, which Watson might not do as well at, would involve randomly choosing which of the players who buzz in within the first few tenths of a second gets to answer the question, eliminating any reaction-time advantage. Watson gets the questions as text, which is also a bit unfair, unless it is given them one word at a time at human reading speed. It could do OCR on the screen, but chances are it would read faster than the humans. Its confidence numbers and results are extremely impressive. One reason it doesn't buzz in is that even with 3,000 cores it takes 2-6 seconds to answer a question.

Indeed, a totally fair contest would have no buzz-timing competition at all, and would simply allow all players who buzz in to answer, gaining or losing points based on their answers. (Answers would need to be given in parallel.)
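Here is a small, hypothetical sketch of the first variant (random selection among early buzzers). The window length and buzz times are my own illustrative assumptions, not figures from the broadcast.

```python
import random

# Hypothetical sketch of the "fair buzz" rule described above: every buzz
# that arrives within an opening window is pooled, and the responder is
# chosen at random, removing the pure reaction-time advantage.

FAIR_WINDOW_S = 0.3  # "the first few tenths of a second" (assumed value)

def pick_responder(buzz_times):
    """buzz_times maps player name -> seconds after the buzzers open."""
    if not buzz_times:
        return None
    pool = [player for player, t in buzz_times.items() if t <= FAIR_WINDOW_S]
    if pool:
        return random.choice(pool)              # everyone in the window has an equal chance
    return min(buzz_times, key=buzz_times.get)  # nobody in the window: earliest buzz wins

# Watson reacts fastest, but all three are inside the window,
# so each has a 1/3 chance of getting the clue.
print(pick_responder({"Watson": 0.01, "Jennings": 0.15, "Rutter": 0.22}))
```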

Watson's coders know by now that they probably should have coded it to take in the wrong answers given by other contestants. In one instance it repeated a wrong answer, and in another case it said "What is Leg?" after Jennings had incorrectly answered "What is missing an arm?" in a question about an Olympic athlete. The host declared Watson right, but the judges reversed that ruling, saying the answer would have been correct from a human following up on the wrong answer, but was wrong without that context. This was edited out. Also edited out were 4 crashes by Watson that made the game take 4 hours instead of 30 minutes.

It did not happen in what has aired so far, but in the trials another error I saw Watson make was declining a request to be more specific about an answer. Watson was programmed to give minimalist answers, which the host will often accept as correct, so why take a risk? If the host doesn't think you said enough, he asks for a more specific answer. Watson sometimes said, "I can be no more specific." From a pure gameplay standpoint, that's like saying, "I admit I am wrong." For points, one should say the best longer phrase containing the one-word answer, because it just might be right, though it has a larger chance of looking really stupid (see below for thoughts on that).

The shows also contain total love-fest pieces about IBM, which make me amazed that IBM is not listed as a sponsor for the shows, other than perhaps in the name "The IBM Challenge." I am sure Jeopardy is getting great ratings (just having their two champs back would do that on its own, but this will be even more), but I have to wonder if any other money is flowing.

Being an idiot savant

Watson doesn't really understand the Jeopardy clues, at least not as a human does. Like so many AI breakthroughs, this result comes from figuring out a way to attack the problem that is different from the method humans use. As a result, Watson sometimes puts out answers that are nonsense "idiot" answers from a human perspective. They cut back a lot on this by only having it answer when it has 50% confidence or higher, and in fact for most of its answers it has very impressive confidence numbers. But sometimes it gives such an answer. To the consternation of the Watson team, it did this on the Final Jeopardy clue, where it answered "Toronto" in the category "U.S. Cities."
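A minimal sketch of that confidence gate, as described above: volunteer the top candidate only when it clears the 50% threshold. The candidate answers and scores are invented for the example; they are not Watson's actual output.

```python
# Hypothetical illustration of the 50% confidence gate described above.
# The candidates and scores are made up for the example.

CONFIDENCE_THRESHOLD = 0.5

def choose_answer(candidates):
    """Return the top-scoring candidate if it clears the threshold, else None."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= CONFIDENCE_THRESHOLD else None

print(choose_answer({"Chicago": 0.32, "Toronto": 0.14}))  # None: stays silent
print(choose_answer({"Chicago": 0.81, "Toronto": 0.06}))  # "Chicago"
```

(In Final Jeopardy a response must be written regardless of confidence, which is presumably how a low-confidence guess like "Toronto" made it onto the screen.)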