Sounds nice, but
I think I will stick with mine:
(Shameless self-plug alert!) :-P
If you don't want your own Watson question-and-answer machine after watching the supercomputer whup the human race on Jeopardy! last week, you must be a lawyer. Only lawyers think they already have all the answers. But if you grew up watching Robbie the Robot in Lost in Space, HAL in 2001: A Space Odyssey, the unnamed but …
"Could it help answer those unanswerable 'wife' questions "
You don't need a computer to answer those questions.
1. Which pair of shoes looks best with this dress?
Doesn't matter. If she gives you a choice between 'a' and 'b' and you pick 'a', she goes with 'b'. If you pick 'b', she goes with 'a'.
2. & 3. Say 'no', unless you're an idiot.
"Some Watson algorithms are written in C or C++, particularly where the speed of the processing is important. But Gondek says that most of the hundreds of algorithms that do question analysis, passage scoring, and confidence estimation are written in Java. So maybe you want to use a RHEL-JBoss stack for your Watson."
What! Are they serious? No simple assembly SIMD used, only crap compiler output. Perhaps they should sit down and teach Watson to parse all the world's assembly for speed, correctness and sane, fast output, then have it write the compiler code routines in SIMD for a given CPU. Now that I'd like to see.
Have people like the x264 assembly guys teach it their not-so-secret-sauce yasm macros too, to make it a lot simpler: https://github.com/DarkShikari/x264-devel
Make Watson analyse and rewrite GCC to produce faster SIMD output based on this training, for giggles, and get a better compiler and faster binaries as a side effect for all users.
Watson's NLP is impressive, along with its massive memory and parallel speed, but the technology to identify trivia, or anything else for that matter, has been around for some time.
This is because questions and answers are explicitly linked as dependent and independent variables; the independent variables can be dynamically classified into an order that permits rapid identification of the dependent variable.
The idea was developed by Dr. Rypka at Lovelace to identify microbes, and has been published online for several years. The engine mathematics and an application example using flags are published online here:
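As a rough illustration of that idea (with invented flag data and attribute names, not Dr. Rypka's actual engine mathematics), here is a minimal Python sketch that orders the independent variables by separating power so the dependent variable is identified in as few tests as possible:

```python
import math
from collections import Counter

# Invented example data: each flag (the dependent variable) is described
# by binary attributes (the independent variables).
FLAGS = {
    "Japan":  {"red": 1, "blue": 0, "stripes": 0, "stars": 0},
    "France": {"red": 1, "blue": 1, "stripes": 1, "stars": 0},
    "USA":    {"red": 1, "blue": 1, "stripes": 1, "stars": 1},
    "Greece": {"red": 0, "blue": 1, "stripes": 1, "stars": 0},
}

def separating_power(attr, candidates):
    # An attribute that splits the remaining candidates most evenly
    # eliminates the most possibilities per test (entropy, in bits).
    counts = Counter(FLAGS[c][attr] for c in candidates)
    total = len(candidates)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def identify(observed):
    # Dynamically re-rank the independent variables at each step and
    # test the most separating one first, narrowing the candidate set.
    candidates = set(FLAGS)
    attrs = list(next(iter(FLAGS.values())))
    while len(candidates) > 1:
        best = max(attrs, key=lambda a: separating_power(a, candidates))
        candidates = {c for c in candidates if FLAGS[c][best] == observed[best]}
    return candidates.pop()

print(identify({"red": 1, "blue": 1, "stripes": 1, "stars": 0}))  # France
```

With four distinct flags this needs at most a handful of tests, because each question is chosen to split the surviving candidates as evenly as possible.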
Um, I actually DID grow up watching Robbie the Robot on Lost in Space ... in his guest appearance there. The robot that was there every week wasn't Robbie from the movie Forbidden Planet; it was a totally new model known as a B-9. As Watson would have known ...
B9 was Best. Kid's. Friend. Ever. Can you say, "Danger, Will Robinson!" ???
Which is $0.32 per second. But I want more processing power than that. A lot more. So, OK, how about 1M times as much? That'll be $320,000 per second. (Yes, I know, they don't have that much to buy ... shame ... but we can all dream ;)
Anyway, back in dream land, $320,000 is ... umm ... still a bit pricey. OK, I'll have to wait a few years. Unless ... how about we all have a whip-round and see if we can buy 1 second?! ... Oh, and the first question for our supercomputer isn't 42 ... It'll be:
"Start the Technological Singularity now please!" :)
Then we can all sit back and watch the fireworks! ... popcorn is extra ;)
I would be very impressed if they could beat the human brain even with a machine that needed more than 20GW of power! After that goal, it's simply a case of finding ways to optimise and miniaturise the design. (My point is, power usage is far less impressive than actually achieving the goal of beyond-human levels of intelligence.)
I have been able to Google the answer to any trivia question in a matter of seconds for more than a decade.
What is the point of building a computer system to answer trivia questions in a particular format and who even cares? And why does it require so damn much compute power anyway?
Seriously, this whole Watson thing has been a confusing non-event for me.
Google can retrieve documents containing the terms in the query--it does not give you an answer, but rather some suggestions of documents where the answer might be found. Keep in mind that IR algorithms generally just assume a unigram model for speed.
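To make the unigram point concrete, here's a toy sketch (corpus, scores, and function names all invented for illustration) of bag-of-words retrieval: each query term is matched independently, with no notion of word order or syntax, so what comes back is a ranked document, not an answer:

```python
from collections import Counter

# Toy corpus, invented for illustration.
docs = {
    "doc1": "watson is an ibm question answering system built on power7 servers",
    "doc2": "jeopardy is a television quiz show with answers given first",
    "doc3": "lawyers review long documents to find relevant precedents",
}

def unigram_score(query, doc):
    # Unigram (bag-of-words) scoring: each query term is counted
    # independently; "answers given first" and "first given answers"
    # would score identically.
    counts = Counter(doc.split())
    return sum(counts[t] for t in query.lower().split())

def retrieve(query):
    # Return the best-scoring document; extracting the actual answer
    # from it is left to the human reader.
    return max(docs, key=lambda d: unigram_score(query, docs[d]))

print(retrieve("watson question answering system"))  # doc1
```

This is roughly why it's fast, and also why it can only ever hand you "documents where the answer might be found".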
Watson, on the other hand, does NLP--something that no IR system does. It analyzes the clue and provides a concise answer to the question (or rather question to the answer). Practically speaking, this could be really useful in the legal profession, as lawyers have to look through lots of long documents to find the information they're looking for (though maybe they don't care since they're billing by the hour). This could be useful in medicine, as well, where the system could help diagnose a disease (a manifestation of Dr. House in computer form, for those familiar with the show).
This kind of high-speed, high-precision expert system sounds much like the McKenzie's Friend legal software used as a minor plot device in Ken MacLeod's "The Stone Canal". Although the McKenzie advised in real time during the case, rather than being used to help build arguments and find precedents beforehand.
Still, it would be very interesting to see where this sort of technology goes.
Yes, of course I realize that Google cannot literally play Jeopardy. My point is that by entering some keywords and doing 2 seconds of scanning through search result excerpts I can achieve the same result. So, great, IBM has managed to build a supercomputer that saves me those 2 seconds. And I'm supposed to be amazed?
I question the usefulness of the NLP they're doing. I doubt it has the linguistic precision required to render useful legal information (any more so than a search engine), and as for medical diagnoses, those are better left to expert systems that ask a series of diagnostic questions. NLP doesn't enter that equation.
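For contrast, the classical expert system described here can be sketched as a hand-written question tree (all conditions and diagnosis labels invented for illustration, and obviously not medical advice):

```python
# Each internal node is a yes/no diagnostic question; the answer routes
# to the next question or to a leaf diagnosis. No NLP is involved at
# any point -- the knowledge is encoded by hand in the tree itself.
TREE = {
    "fever?": {True: "cough?", False: "rash?"},
    "cough?": {True: "flu-like illness", False: "possible infection"},
    "rash?":  {True: "possible allergy", False: "no diagnosis"},
}

def diagnose(answers):
    # Walk from the root, asking one question at a time, until we
    # reach a label that is not itself a question.
    node = "fever?"
    while node in TREE:
        node = TREE[node][answers[node]]
    return node

print(diagnose({"fever?": True, "cough?": True}))  # flu-like illness
```

The appeal of this style is exactly the commenter's point: the questions are unambiguous, so no linguistic precision is required at all.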
It is only able to provide answers to questions when the answer is
1 - Simple
2 - Already known
So it looks like borrowing the money to buy a computer, then asking the computer to pay for itself and then take over the world, is still not an option.