
Finally, sharp artificial intelligence! Watch out for our jobs! NLP, cooking (not joking), and medical analytics are all a piece of cake for an IBM supercomputer. Meet Watson!

Watson is a computer system like no other ever built. It analyzes natural-language questions and content well enough, and fast enough, to compete and win against champion players on the popular American quiz show Jeopardy!

IBM's Watson Supercomputer Destroys Humans in Jeopardy | Engadget


“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.” 

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?
Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.
For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.
Watson went on a tear, winning four of six games. It displayed remarkable facility with cultural trivia (“This action flick starring Roy Scheider in a high-tech police helicopter was also briefly a TV series” — “What is ‘Blue Thunder’?”), science (“The greyhound originated more than 5,000 years ago in this African country, where it was used to hunt gazelles” — “What is Egypt?”) and sophisticated wordplay (“Classic candy bar that’s a female Supreme Court justice” — “What is Baby Ruth Ginsburg?”).
By the end of the day, the seven human contestants were impressed, and even slightly unnerved, by Watson. Several made references to Skynet, the computer system in the “Terminator” movies that achieves consciousness and decides humanity should be destroyed. “My husband and I talked about what my role in this was,” Samantha Boardman, a graduate student, told me jokingly. “Was I the thing that was going to help the A.I. become aware of itself?” She had distinguished herself with her swift responses to the “Rhyme Time” puzzles in one of her games, winning nearly all of them before Watson could figure out the clues, but it didn’t help. The computer still beat her three times. In one game, she finished with no money.
“He plays to win,” Boardman said, shaking her head. “He’s really not messing around!” Like most of the contestants, she had started calling Watson “he.”
David Ferrucci, I.B.M.'s senior manager for its Semantic Analysis and Integration department, heads the Watson project.
The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. Using this method, you could put hundreds of articles and books and movie reviews discussing Sherlock Holmes into the computer, and it would calculate that the words “deerstalker hat” and “Professor Moriarty” and “opium” are frequently correlated with one another, but not with, say, the Super Bowl. So at that point you could present the computer with a question that didn’t mention Sherlock Holmes by name, but if the machine detected certain associated words, it could conclude that Holmes was the probable subject — and it could also identify hundreds of other concepts and words that weren’t present but that were likely to be related to Holmes, like “Baker Street” and “chemistry.”
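The co-occurrence idea behind this shift can be sketched in a few lines. This is a toy illustration with an invented four-document mini-corpus, not the technique IBM ran at scale:

```python
from collections import Counter

# Hypothetical mini-corpus: each "document" is a short text that may or
# may not mention Sherlock Holmes.
docs = [
    "sherlock holmes wore a deerstalker hat on baker street",
    "professor moriarty was the nemesis of sherlock holmes",
    "holmes experimented with chemistry and opium in baker street",
    "the super bowl halftime show drew a record television audience",
]

def association_counts(docs, subject):
    """Count how often each word co-occurs in a document with `subject`."""
    counts = Counter()
    for doc in docs:
        words = set(doc.split())
        if subject in words:
            counts.update(words - {subject})
    return counts

assoc = association_counts(docs, "holmes")
# "moriarty" and "baker" turn out to be associated with "holmes";
# "bowl" never co-occurs with it at all.
```

With enough documents, the same counting lets the machine infer the probable subject even when "Holmes" is never named, because the associated words themselves point to him.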
FERRUCCI’S MAIN breakthrough was not the design of any single, brilliant new technique for analyzing language. Indeed, many of the statistical techniques Watson employs were already well known by computer scientists. One important thing that makes Watson so different is its enormous speed and memory. Taking advantage of I.B.M.’s supercomputing heft, Ferrucci’s team input millions of documents into Watson to build up its knowledge base — including, he says, “books, reference material, any sort of dictionary, thesauri, folksonomies, taxonomies, encyclopedias, any kind of reference material you can imagine getting your hands on or licensing. Novels, bibles, plays.”
Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one. In essence, Watson thinks in probabilities. It produces not one single “right” answer, but an enormous number of possibilities, then ranks them by assessing how likely each one is to answer the question.
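The ensemble-and-rank idea can be sketched as follows. The three mini-scorers here are invented stand-ins, not DeepQA's real algorithms; the point is only that candidates several independent scorers agree on rise to the top:

```python
def rank_candidates(candidates, scorers):
    """Sum many independent plausibility scores per candidate and rank.
    Agreement across scorers pushes a candidate up the ordering."""
    totals = {c: sum(score(c) for score in scorers) for c in candidates}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical mini-scorers, each "analyzing" the clue in its own way.
scorers = [
    lambda c: 1.0 if c == "Nixon" else 0.0,            # evidence-search scorer
    lambda c: 0.8 if c in ("Nixon", "Ford") else 0.0,  # date-match scorer
    lambda c: 0.5 if c == "Agnew" else 0.3,            # a scorer that disagrees
]

ranking = rank_candidates(["Nixon", "Ford", "Agnew"], scorers)
# "Nixon" ranks first because multiple scorers converge on it, even though
# one scorer preferred "Agnew".
```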
Ferrucci showed me how Watson handled this sample “Jeopardy!” clue: “He was presidentially pardoned on Sept. 8, 1974.” In the first pass, the algorithms came up with “Nixon.” To evaluate whether “Nixon” was the best response, Watson performed a clever trick: it inserted the answer into the original phrase — “Nixon was presidentially pardoned on Sept. 8, 1974” — and then ran it as a new search, to see if it also produced results that supported “Nixon” as the right answer. (It did. The new search returned the result “Ford pardoned Nixon on Sept. 8, 1974,” a phrasing so similar to the original clue that it helped make “Nixon” the top-ranked solution.)
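A toy version of this answer-insertion trick might look like the following. The two-passage corpus and the crude word-overlap test are illustrative assumptions, far simpler than Watson's real evidence search:

```python
corpus = [
    "Ford pardoned Nixon on Sept. 8, 1974",
    "Nixon resigned the presidency in August 1974",
]

def supports(candidate, clue_template, corpus):
    """Insert the candidate into the clue, then count corpus passages that
    share most of the resulting statement's words."""
    statement = clue_template.format(candidate).lower()
    words = set(statement.split())
    hits = 0
    for passage in corpus:
        overlap = words & set(passage.lower().split())
        if len(overlap) >= len(words) * 3 // 4:
            hits += 1
    return hits

template = "{} was presidentially pardoned on Sept. 8, 1974"
nixon_hits = supports("Nixon", template, corpus)
agnew_hits = supports("Agnew", template, corpus)
# "Nixon" gathers supporting evidence; a wrong guess like "Agnew" does not.
```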
Other times, Watson uses algorithms that can perform basic cross-checks against time or space to help detect which answer seems better. When the computer analyzed the clue “In 1594 he took a job as a tax collector in Andalusia,” the two most likely answers generated were “Thoreau” and “Cervantes.” Watson assessed “Thoreau” and discovered his birth year was 1817, at which point the computer ruled him out, because he wasn’t alive in 1594. “Cervantes” became the top-ranked choice.
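The temporal cross-check can be sketched like this. The birth years are real; the filter itself is an illustrative assumption, not Watson's actual code:

```python
# Birth years for the two candidate answers from the tax-collector clue.
BIRTH_YEAR = {"Thoreau": 1817, "Cervantes": 1547}

def alive_in(candidate, year):
    """A candidate is only plausible if he was already born by `year`."""
    born = BIRTH_YEAR.get(candidate)
    return born is not None and born <= year

candidates = ["Thoreau", "Cervantes"]
plausible = [c for c in candidates if alive_in(c, 1594)]
# Thoreau (born 1817) is ruled out; Cervantes remains the top choice.
```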
When Watson is playing a game, Ferrucci lets the audience peek into the computer’s analysis. A monitor shows Watson’s top five answers to a question, with a bar graph beside each indicating its confidence. During one of my visits, the host read the clue “Thousands of prisoners in the Philippines re-enacted the moves of the video of this Michael Jackson hit.” On the monitor, I could see that Watson’s top pick was “Thriller,” with a confidence level of roughly 80 percent. This answer was correct, and Watson buzzed first, so it won $800. Watson’s next four choices — “Music video,” “Billie Jean,” “Smooth Criminal” and “MTV” — had only slivers for their bar graphs. It was a fascinating glimpse into the machine’s workings, because you could spy the connective thread running between the possibilities, even the wrong ones. “Billie Jean” and “Smooth Criminal” were also major hits by Michael Jackson, and “MTV” was the main venue for his videos. But it’s very likely that none of those correlated well with “Philippines.”
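A confidence panel like the one on that monitor is easy to mimic in text. Only the 80 percent figure for "Thriller" comes from the anecdote; the sliver values for the other four answers, and the rendering itself, are assumptions for illustration:

```python
def confidence_bars(answers, width=20):
    """Render answers as text bars proportional to confidence, best first."""
    ordered = sorted(answers.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{name:<16}{'#' * round(conf * width)} {conf:.0%}"
            for name, conf in ordered]

panel = confidence_bars({
    "Thriller": 0.80,          # from the anecdote
    "Music video": 0.05,       # slivers: values assumed
    "Billie Jean": 0.04,
    "Smooth Criminal": 0.03,
    "MTV": 0.02,
})
# "Thriller" gets a long bar; the other four get slivers.
```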
After a year, Watson’s performance had moved halfway up to the “winner’s cloud.” By 2008, it had edged into the cloud; on paper, anyway, it could beat some of the lesser “Jeopardy!” champions. Confident they could actually compete on TV, I.B.M. executives called up Harry Friedman, the executive producer of “Jeopardy!” and raised the possibility of putting Watson on the air.
Friedman told me he and his fellow executives were surprised: nobody had ever suggested anything like this. But they quickly accepted the challenge. “Because it’s I.B.M., we took it seriously,” Friedman said. “They had the experience with Deep Blue and the chess match that became legendary.”
Ultimately, Watson’s greatest edge at “Jeopardy!” probably isn’t its perfect memory or lightning speed. It is the computer’s lack of emotion. “Managing your emotions is an enormous part of doing well” on “Jeopardy!” Bob Harris, a five-time champion, told me. “Every single time I’ve ever missed a Daily Double, I always miss the next clue, because I’m still kicking myself.” Because there is only a short period before the next clue comes along, the stress can carry over. Similarly, humans can become much more intimidated by a $2,000 clue than a $200 one, because the more expensive clues are presumably written to be much harder.
Whether Watson will win when it goes on TV in a real “Jeopardy!” match depends on whom “Jeopardy!” pits against the computer. Watson will not appear as a contestant on the regular show; instead, “Jeopardy!” will hold a special match pitting Watson against one or more famous winners from the past. If the contest includes Ken Jennings — the best player in “Jeopardy!” history, who won 74 games in a row in 2004 — Watson will lose if its performance doesn’t improve. It’s pretty far up in the winner’s cloud, but it’s not yet at Jennings’s level; in the sparring matches, Watson was beaten several times by opponents who did nowhere near as well as Jennings. (Indeed, it sometimes lost to people who hadn’t placed first in their own appearances on the show.) Friedman will not say whom the show is picking to play against Watson, but he refused to let Jennings be interviewed for this story, which is suggestive.

Medical Analytics - Watson can:
  • Use natural language processing to automatically develop insights from unstructured text
  • Build predictive models to uncover patterns and relationships in patient data
  • Classify patients into groups and provide care management options by group
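The third bullet, grouping patients for care management, can be sketched with a simple rule-based score. Every feature, threshold, and group name below is invented for illustration; real clinical models are far more involved than this:

```python
def care_group(patient):
    """Assign a care-management group from a crude, hypothetical risk score."""
    score = 0
    score += 2 if patient["age"] >= 65 else 0
    score += 2 if patient["chronic_conditions"] >= 2 else 0
    score += 1 if patient["recent_admissions"] >= 1 else 0
    if score >= 4:
        return "high-touch care management"
    if score >= 2:
        return "regular monitoring"
    return "routine care"

patients = [
    {"age": 72, "chronic_conditions": 3, "recent_admissions": 1},
    {"age": 40, "chronic_conditions": 0, "recent_admissions": 0},
]
groups = [care_group(p) for p in patients]
# The elderly patient with multiple conditions lands in the high-touch group.
```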
Medical Oncology care plans: https://www.youtube.com/watch?v=HZsPc0h_mtM
Cooking - BBQ sauce
The IBM team put Watson in the kitchen because that's where they believe it would be relevant to most people across cultures. The recipes created by the supercomputer are “not traditional,” says Abrams. “The dishes are meant to surprise.”

For the Fourth of July, IBM, in collaboration with the Institute of Culinary Arts New York, debuted a small batch of Watson’s Bengali Butternut BBQ sauce. It’s not your typical BBQ sauce: alongside ingredients like white wine, butternut squash, dates, and butter, the nutrition label reads, “Contains: Cognitive Computing, IBM Cloud, Big Data & Analytics.” The sauce was produced and bottled by IBM for only a handful of people.
FoxNews.com managed to get one. The taste was definitely, as Abrams says, not expected. Strong on the vinegar and butternut squash taste, it was light and airy, with a strong acid sting on the back of the tongue a moment after taking a bite.
For those who couldn’t get their hands on the BBQ sauce and want to try what a computer created, IBM put the recipe on its Tumblr page. But a word of caution for novice chefs with a small pantry: the recipe calls for 18 ingredients, some of which will need to be purchased from specialty grocery stores.
Abrams says reaction to the sauce has been positive. “People were totally not fazed by this. On the contrary, they were intrigued. The whole idea is to give people, quite literally, a taste of what cognitive computing is about.”
In order to be able to generate brand new recipes, Chef Watson was trained using 35,000 existing recipes. From those, it learned about different cuisines, what ingredients typically go together, and what it takes to make certain types of dishes, like soups or burritos.
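The ingredient-pairing part of that training can be sketched by simple co-occurrence counting. The three toy "recipes" below are invented, and Chef Watson's real training data and flavor-chemistry models are much richer than this:

```python
from collections import Counter
from itertools import combinations

# Hypothetical recipes, reduced to bare ingredient lists.
recipes = [
    ["tomato", "basil", "garlic", "olive oil"],
    ["tomato", "garlic", "onion"],
    ["butternut squash", "dates", "white wine"],
]

pair_counts = Counter()
for recipe in recipes:
    # Count every unordered ingredient pair within each recipe.
    for pair in combinations(sorted(recipe), 2):
        pair_counts[pair] += 1

# ("garlic", "tomato") appears in two recipes, so the model would learn
# it as a common pairing; squash/dates/wine pair only with each other.
```

From counts like these a generator can favor combinations that occur often (for familiar dishes) or deliberately pick rare-but-plausible ones, which is how "surprising" recipes like the butternut BBQ sauce come about.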

Refs:
http://researcher.watson.ibm.com/researcher/view_group.php?id=2099
