Jeopardy: How does Watson ring in?




















For example, achieving a 5 percent error rate on a benchmark image-recognition task would require 10^19 billion floating-point operations.
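Figures like that come from fitting a scaling relationship between computing power and error rate and then extending it. As a minimal sketch of what such an extrapolation looks like (the constant and exponent below are hypothetical placeholders, not fitted values from any study):

```python
# Illustrative only: a power-law model of training compute versus error rate.
# k and alpha are hypothetical placeholders, not estimates from the research
# discussed in this article.
def compute_for_error(error_rate, k=1e12, alpha=9.0):
    """Hypothetical model: FLOPs ~ k * (1 / error_rate) ** alpha."""
    return k * (1.0 / error_rate) ** alpha

for err in (0.10, 0.05, 0.01):
    print(f"{err:.0%} error rate -> ~{compute_for_error(err):.2e} FLOPs (illustrative)")
```

Under a model like this, halving the error rate multiplies the required compute by a factor in the hundreds, which is why the projections grow so quickly.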

Important work by scholars at the University of Massachusetts Amherst allows us to understand the economic cost and carbon emissions implied by this computational burden. And if we estimate the computational burden of a 1 percent error rate, the results are considerably worse. Is extrapolating out so many orders of magnitude a reasonable thing to do? Yes and no. Certainly, it is important to understand that the predictions aren't precise, although with such eye-watering results, they don't need to be to convey the overall message of unsustainability.

Extrapolating this way would be unreasonable if we assumed that researchers would follow this trajectory all the way to such an extreme outcome. We don't. Faced with skyrocketing costs, researchers will either have to come up with more efficient ways to solve these problems, or they will abandon working on these problems and progress will languish.

On the other hand, extrapolating our results is not only reasonable but also important, because it conveys the magnitude of the challenge ahead. The leading edge of this problem is already becoming apparent. When DeepMind's researchers designed a system to play the StarCraft II video game, they purposefully didn't try multiple ways of architecting an important component, because the training cost would have been too high.

Even though they made a mistake when they implemented the system, they didn't fix it, explaining simply in a supplement to their scholarly publication that "due to the cost of training, it wasn't feasible to retrain the model." Even businesses outside the tech industry are now starting to shy away from the computational expense of deep learning. A large European supermarket chain recently abandoned a deep-learning-based system that markedly improved its ability to predict which products would be purchased.

The company executives dropped that attempt because they judged that the cost of training and running the system would be too high. Faced with rising economic and environmental costs, the deep-learning community will need to find ways to increase performance without causing computing demands to go through the roof. If they don't, progress will stagnate. But don't despair yet: Plenty is being done to address this challenge.

One strategy is to use processors designed specifically to be efficient for deep-learning calculations, such as GPUs, field-programmable gate arrays, and application-specific chips. Fundamentally, all of these approaches sacrifice the generality of the computing platform for the efficiency of increased specialization. But such specialization faces diminishing returns. So longer-term gains will require adopting wholly different hardware frameworks—perhaps hardware that is based on analog, neuromorphic, optical, or quantum systems.

Thus far, however, these wholly different hardware frameworks have yet to have much impact. Another approach to reducing the computational burden focuses on generating neural networks that, when implemented, are smaller. This tactic lowers the cost each time you use them, but it often increases the training cost (what we've described so far in this article). Which of these costs matters most depends on the situation.

For a widely used model, running costs are the biggest component of the total sum invested. For other models—for example, those that frequently need to be retrained—training costs may dominate.
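A back-of-the-envelope sketch of that trade-off, with entirely made-up figures, shows how the balance flips depending on usage:

```python
# All numbers are hypothetical; only the structure of the sum matters.
TRAIN_COST_PER_RUN = 100_000.0   # assumed cost (e.g. dollars) of one training run
INFER_COST_PER_QUERY = 0.001     # assumed cost of serving one query

def total_cost(retrains_per_year, queries_per_year):
    return (retrains_per_year * TRAIN_COST_PER_RUN
            + queries_per_year * INFER_COST_PER_QUERY)

# Widely used model: trained once, queried two billion times -> inference dominates.
print(total_cost(retrains_per_year=1, queries_per_year=2_000_000_000))   # 2100000.0
# Frequently retrained model with modest traffic -> training dominates.
print(total_cost(retrains_per_year=52, queries_per_year=1_000_000))      # 5201000.0
```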

In either case, the total cost must be larger than just the training on its own. So if the training costs are too high, as we've shown, then the total costs will be, too. And that's the challenge with the various tactics that have been used to make implementation smaller: They don't reduce training costs enough. For example, one tactic allows for training a large network but penalizes complexity during training.

Another involves training a large network and then "pruning" away unimportant connections. Yet another finds as efficient an architecture as possible by optimizing across many models—something called neural-architecture search. While each of these techniques can offer significant benefits for implementation, the effects on training are muted—certainly not enough to address the concerns we see in our data. And in many cases they make the training costs higher.
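To make the first two of those tactics concrete, here is a toy sketch applied to a random weight matrix rather than a real network; the penalty coefficient and pruning threshold are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))   # stand-in for a trained layer's weights

# Tactic 1: penalise complexity during training, e.g. an L1 term added to the
# loss that pushes weights toward zero (only the penalty value is computed here).
l1_penalty = 0.001 * np.abs(weights).sum()

# Tactic 2: prune after training by zeroing the smallest-magnitude connections.
threshold = np.quantile(np.abs(weights), 0.90)   # keep only the largest 10%
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

print(f"L1 penalty term: {l1_penalty:.1f}")
print(f"Non-zero weights after pruning: {np.count_nonzero(pruned)} of {weights.size}")
```

Note that both steps act on a large network that still has to be trained, which is exactly why they do little to reduce the training bill itself.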

One up-and-coming technique that could reduce training costs goes by the name meta-learning. The idea is that the system learns on a variety of data and then can be applied in many areas.

For example, rather than building separate systems to recognize dogs in images, cats in images, and cars in images, a single system could be trained on all of them and used multiple times.
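As a toy sketch of that shared-model idea (random placeholder features rather than real images, and a plain multi-class classifier rather than a true meta-learning method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Instead of three separate recognisers (dog, cat, car), pool the data and fit
# one model that is reused across all three categories.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))              # placeholder "image features"
y = np.repeat(["dog", "cat", "car"], 100)   # pooled labels from the three tasks

shared_model = LogisticRegression(max_iter=1000).fit(X, y)
print(shared_model.predict(X[:5]))          # one model answering all three tasks
```

In practice, the catch is how sensitive such shared systems are to the data they were trained on.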

Researchers have shown that even small differences between the data a model was trained on and the data it is applied to can severely degrade performance.

They demonstrated that current image-recognition systems depend heavily on things like whether the object is photographed at a particular angle or in a particular pose. So even the simple task of recognizing the same objects in different poses causes the accuracy of the system to be nearly halved. Benjamin Recht of the University of California, Berkeley, and others made this point even more starkly, showing that even with novel data sets purposely constructed to mimic the original training data, performance drops by more than 10 percent.

If even small changes in data cause large performance drops, the data needed for a comprehensive meta-learning system might be enormous. So the great promise of meta-learning remains far from being realized. Another possible strategy to evade the computational limits of deep learning would be to move to other, perhaps as-yet-undiscovered or underappreciated types of machine learning.

As we described, machine-learning systems constructed around the insight of experts can be much more computationally efficient, but their performance can't reach the same heights as deep-learning systems if those experts cannot distinguish all the contributing factors.

Neuro-symbolic methods and other techniques are being developed to combine the power of expert knowledge and reasoning with the flexibility often found in neural networks. Like the situation that Rosenblatt faced at the dawn of neural networks, deep learning is today becoming constrained by the available computational tools. Faced with computational scaling that would be economically and environmentally ruinous, we must either adapt how we do deep learning or face a future of much slower progress.

Clearly, adaptation is preferable. A clever breakthrough might find a way to make deep learning more efficient or computer hardware more powerful, which would allow us to continue to use these extraordinarily flexible models. If not, the pendulum will likely swing back toward relying more on experts to identify what needs to be learned.

While using Watson as a diagnosis tool might be its most obvious application in healthcare, using it to assist in choosing the right therapy for a cancer patient made even more sense. MSKCC (Memorial Sloan-Kettering Cancer Center) was a tertiary referral centre - by the time patients arrived, they already had their diagnosis.

So Watson was destined first to be an oncologist's assistant, digesting reams of data - MSKCC's own, medical journals, articles, patients' notes and more - along with patients' preferences to come up with suggestions for treatment options.

Each would be weighted according to how relevant Watson calculated it to be. Unlike its Jeopardy counterpart, healthcare Watson also has the ability to go online - not all its data has to be stored.
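Purely as a schematic of that weighting step (this is not IBM's implementation; the scoring function, weights, and option names are all invented for illustration), the ranking of suggestions might look something like:

```python
# Hypothetical illustration only: rank candidate treatments by a weight that
# blends an evidence-relevance score with a patient-preference score.
def rank_options(candidates, evidence_weight=0.7, preference_weight=0.3):
    scored = [
        (c["name"], evidence_weight * c["evidence"] + preference_weight * c["preference"])
        for c in candidates
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

options = [
    {"name": "Option A", "evidence": 0.9, "preference": 0.4},
    {"name": "Option B", "evidence": 0.6, "preference": 0.9},
]
print(rank_options(options))   # highest-weighted suggestion listed first
```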

And while Watson had two million pages of medical data from , sources to swallow, it could still make use of the general knowledge garnered for Jeopardy - details from Wikipedia, for example.

What it doesn't use, however, is the Urban Dictionary. Fed into Watson late last year, it was reportedly removed after answering a researcher's query with the word "bullshit".

As such, the sources are now medical publications like Nature and the British Medical Journal. And there are other safety nets too. "The doctor and a data scientist are sitting next to each other, correcting Watson. Spurious material, or conflicted material, or something from a pharmaceutical company that the doctor feels may be biased - that is caught during the training cycle," added Saxena. WellPoint and MSKCC used Watson as the basis for systems that could read and understand volumes of medical literature and other information - patients' treatment and family histories, for example, as well as clinical trials and articles in medical journals - to assist oncologists by recommending courses of treatment.

Interactive Care Insights for Oncology provides suggestions for treatment plans for lung cancer patients, while the new WellPoint Interactive Care Guide and Interactive Care Reviewer reviews clinicians' suggested treatments against their patients' plans and is expected to be in use at 1, healthcare providers this year.

Watson has bigger ambitions than a clinician's assistant, however. Its medical knowledge is around that of a first-year medical student, according to IBM, and the company hopes to have Watson pass the general medical licensing board exams in the not-too-distant future. "We're starting with cancer and we will soon add diabetes, cardiology, mental health, other chronic diseases.

And then our work is on the payment side, where we are streamlining the authorisation and approval process between hospitals, clinics and insurance companies," Saxena said. The ultimate aim for Watson is to be an aid to diagnosis - rather than just suggesting treatments for cancer, as it does today, it could assist doctors in identifying the diseases that bring people to the clinics in the first place.

Before then, there is work to be done. While big data vendors often trumpet the growth of unstructured data and the abandoning of relational databases, for Watson, it's these older sources of data that present more of a problem. Watson does not process structured data directly and it doesn't interpret images. It can interpret the report attached to an image, but not the image itself. In addition, IBM is working on creating a broader healthcare offering that will take it beyond its oncology roots.

"We're using it as a learning process to create algorithms and methodologies that would be readily generalisable to any area of healthcare. They don't have to say, right, we have oncology under control, now let's start again with family practice or cardiology," Kohn said. Watson has also already found some interest in banking. Citi is using Watson to improve customer experience with the bank and create new services. It's easy to see how Watson could be put to use, say, deciding whether a borderline-risk business customer is likely to repay the loan they've applied for, or used to pick out cases of fraud or identity theft before customers may be aware they're happening.

Citi is still early in its Watson experiments. A spokeswoman said the company is currently just "exploring use cases". From here on in, rather than being standalone products, the next Watson offerings to hit the market will be embedded into products in the IBM Smarter Planet product line. They're expected to appear in the second half of the year. The idea behind the Engagement Advisor, aimed at contact centres, is that customer service agents can query their employers' databases and other information sources using natural language while they're conducting helpline conversations with their clients.

One of the companies testing out the service is Australia's ANZ bank, where it will be assisting call centre staff with making financial services recommendations to people who ring up. Watson could presumably one day scour available evidence for the best time to find someone able to talk and decide the communication channel most likely to generate a positive response, or pore over social media for disgruntled customers and provide answers to their problems in natural language.

There are also plans to change how Watson is delivered. Instead of just interacting with it via a call centre worker, customers will soon be able to get to grips with the Engagement Advisor themselves.

Rather than have some call centre agent read out Watson-generated information to a customer with, say, a fault with their new washing machine, or to a stock-trader wanting advice on updating their portfolio, the consumer or trader could just quiz Watson directly from their phone or tablet, by typing their query straight into a business's app. Apps with Watson under the hood should be out in the latter half of this year, according to Forbes.

IBM execs have also previously suggested that Watson could end up as a supercharged version of Siri, where people will be able to speak directly into their phone and pose a complex question for Watson to answer - a farmer holding up his smartphone to take video of his fields, and asking Watson when to plant corn, for example.

IBM is keen to spell out the differences between Watson and Siri. "Siri, on the other hand, simply looks for keywords to search the web for lists of options that it chooses one from," the company says. But the comparison holds: Watson could certainly have a future as your infinitely knowledgeable personal assistant. While adding voice-recognition capabilities to Watson should be no great shakes for IBM given its existing partnerships, such a move would require Watson to be able to recognise images (something IBM's already working on) and to query all sorts of sources of information including newspapers, books, photos, repositories of data that have been made publicly available, social media and the internet at large.

That Watson should take on such a role in the coming years - especially if, as you would expect, the processing goes on in an IBM datacentre rather than on the mobile itself - is certainly within the realms of the possible.


