IBM Watson CTO on Why Augmented Intelligence Beats AI

This episode of Fast Forward was recorded in the IBM Watson Experience Center here in New York City. My guest was Rob High, the Vice President and Chief Technology Officer of IBM Watson.

High works across multiple teams within IBM, including engineering, development, and strategy. He is one of the most lucid thinkers in the space of artificial intelligence, and our conversation covered many of the ways that technology is reshaping our jobs, our society, and our lives. Read and watch our conversation below.

Dan Costa: What is the dominant misconception that people have about artificial intelligence?

Rob High: I think the most common problem that we're running into with people talking about AI is that they still live in a world where, I think, Hollywood has amplified this idea that cognitive computing, AI, is about replicating the human mind, and it's really not. Things like the Turing test tend to reinforce the idea that AI should be measured by its ability to fool people into believing that what they're dealing with is another human being, but that's really not where we have found the greatest utility.

This even goes back to, if you look at almost every other tool that has ever been created, our tools tend to be most valuable when they're amplifying us, when they're extending our reach, when they're increasing our strength, when they're allowing us to do things that we can't do by ourselves as human beings. That's really the way we need to be thinking about AI as well, so much so that we actually call it augmented intelligence, not artificial intelligence.

Let's talk a little bit about that shift, because it's an entirely new type of computing. It's the evolution of computing from what we both grew up with, programmatic computing, where you would use computation to reach an answer through a very complex process, to cognitive computing, which operates a little differently. Can you explain that transition?

Probably the biggest notable difference is that it's very probabilistic, whereas programmed computing is really about laying out all the conditional statements that define the things that you're paying attention to and how to respond to them. It's highly deterministic. It's highly mathematically precise. With a classic programmed computer, you can design a piece of software. Because you know what the mathematical model is that it represents, you can test it mathematically. You can prove its correctness.

Cognitive computing is much more probabilistic. It's largely about testing the signals of the spaces that we're focused on, whether that is vision or speech or language, and trying to find the patterns of meaning in those signals. Even then, there's never absolute certainty. Now, this is in part because that's the way it's computed, but also because that's the nature of human experience. If you think about everything that we say or see or hear, taste or touch or smell or anything that is part of our senses, we as human beings are always attempting to evaluate what that really is, and sometimes we don't get it right.

What's the probability that when I heard that sequence of sounds, it really meant this word? What's the probability that when I saw this sequence of words it meant this statement? What's the probability that when I see this shape in an image that I'm looking at, it is that object? Even for human beings, that's a probabilistic problem, and to that extent it's always the way that these cognitive systems work as well.
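High's description of recognition as a probabilistic problem can be sketched in a few lines of code: a recognizer scores competing hypotheses and ranks them by probability rather than committing to a single answer. This is a purely illustrative sketch; the candidate words and scores are invented, not taken from any real speech system.

```python
import math

# Invented acoustic scores for competing word hypotheses for one
# stretch of audio (illustrative numbers only).
scores = {"there": 2.1, "their": 1.9, "they're": 0.4}

# Turn the raw scores into a probability distribution (a softmax),
# mirroring how a recognizer ranks hypotheses instead of asserting
# one "correct" answer with absolute certainty.
total = sum(math.exp(s) for s in scores.values())
probs = {word: math.exp(s) / total for word, s in scores.items()}

best = max(probs, key=probs.get)  # most probable word, never a certainty
```

Even the top hypothesis here carries well under 100 percent probability, which is exactly the "never absolute certainty" High describes.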

If somebody comes to you and they have a problem that they want to solve, they think that there is a cognitive computing solution to that, they come to Watson, they say, "Look, we're going to use Watson to try and solve this problem." Out of the box, Watson doesn't do very much. They need to teach it how to solve their problem. Can you talk about that onboarding process?

Actually, we should talk about two dimensions of this. One is that some time ago we realized that this thing called cognitive computing was really bigger than us. It was bigger than IBM, bigger than any one vendor in the industry, bigger than any of the one or two solution areas we were going to be focused on, and we had to open it up. That's when we shifted from focusing on solutions to dealing with more of a platform of services, where each service is individually focused on a different part of the problem space. It's a component that, in the case of speech, is focused strictly on the problem of taking your speech and recognizing what words you've expressed, or taking an image and identifying what's in it, or taking language and attempting to understand its meaning, or taking a conversation and participating in it.

First of all, what we're talking about now are a set of services, each of which does something very specific, each of which tries to deal with a different part of our human experience, with the idea that anybody building an application, anybody who wants to solve a social or consumer or business problem, can do that by taking our services and composing them into an application. That's point one.

Point two is the one that you started with: all right, now that I've got the service, how do we get it to do the things we want it to do well? The technique really is one of teaching. The probabilistic nature of these systems is founded on the fact that they are based on machine learning or deep learning, and those algorithms have to be taught how to recognize the patterns that represent meaning within a set of signals. You do that by providing data, data that represents examples of situations you've seen before and have been able to label: "When I hear that combination of sounds, it means this word. When I see this combination of pixels, it means that object." When I have those examples, I can bring them to the cognitive system, to these cognitive services, and teach them how to do a better job of recognizing whatever it is that we want them to recognize.
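The teaching High describes is, at bottom, supervised learning: you provide labeled examples and the system generalizes from them. Below is a deliberately tiny sketch of that idea; the utterances, labels, and word-count scoring are invented for illustration and say nothing about how Watson's services are actually built.

```python
from collections import Counter, defaultdict

# Tiny invented training set: labeled utterances, in the spirit of
# "teaching" a service with examples.
examples = [
    ("what is my account balance", "balance"),
    ("how much money do I have", "balance"),
    ("I lost my credit card", "card"),
    ("please cancel my card", "card"),
]

# "Teach" the classifier by counting which words occur under each label.
word_counts = defaultdict(Counter)
for text, label in examples:
    word_counts[label].update(text.lower().split())

def classify(text):
    # Score each label by how often it has seen the utterance's words.
    words = text.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)
```

Adding more labeled examples is the "teaching": the counts shift, and with them the classifier's answers.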

I think one of the examples that illustrates this really well is in the medical space, where Watson is helping doctors make decisions and parsing large quantities of data, but then ultimately working with them on a diagnosis in partnership. Can you talk a little bit about how that training takes place and then how the solution winds up delivering better outcomes?

The work that we've done in oncology is a good example of a composition of multiple different kinds of algorithms that are used in different ways across the spectrum of work that needs to be performed. We start with, for example, looking at your medical record, using the cognitive system to look over all the notes that the clinicians have taken over the years they've been working with you and finding what we call pertinent clinical information. What is the information in those medical notes that is now relevant to the consultation you're about to go into? Taking that, we do population similarity analytics, trying to find the other patients, the other cohorts, that have a lot of similarity to you, because that's going to inform the doctor on how to think about different treatments, how those treatments might be appropriate for you, and how you're going to react to them.

Then we go into what we call the standard of care practices, which are relatively well-defined techniques that doctors share on how they're going to treat different patients for different kinds of diseases, recognizing that those are really designed for the average person. Then we lay on top of that what we call clinical expertise: the system has been taught by the best doctors in different diseases what to look for, where the outliers are, and how to reason about the different standard of care practices, which of those is most appropriate, or how to take the different pathways through those care practices and apply them in the best way possible. Finally, we go in and look at the clinical literature, the hundreds of thousands of articles in PubMed, some 600,000, about the advances in science that have occurred in that field and that are relevant to making this treatment recommendation.

All of those are different aspects of algorithms that we're applying at different phases of that process, all of which have been taught by putting some of the best doctors in the world in front of these systems, having them use the system and correct it when they see something going wrong, and having the system learn through that use how to improve its own performance. We're using that specifically in the case of oncology to help inform doctors in the field about treatment options that they may not be familiar with, or that they may have some familiarity with but no real experience of, so they don't really understand how their patients are going to respond or how to get the most effective response from their patients.

What that has basically done is democratize the expertise. We can take the best doctors at Memorial Sloan Kettering, who have had the benefit of seeing literally thousands of patients a year with the same disease and from that have developed tremendous expertise, capture that in the cognitive system, and bring it out to a community or regional clinic setting where doctors may not have had as much time working with the same disease across a large number of patients, giving them the opportunity to benefit from the expertise that's now been captured in the cognitive system.

I think that idea of distributing expertise is powerful. Capturing it is a non-trivial task, but once you've done that, you can distribute it across the planet. You're going to have the expertise of the best doctors at Memorial Sloan Kettering delivered in China, in India, in small clinics, and I think that's pretty extraordinary.

It has a tremendous social impact on our welfare, on our health, on the things that will benefit us as a society.

On the flip side, the thing that concerns people about artificial intelligence is that it's going to replace people, it's going to replace jobs. It's tied into the automation movement. The thing that strikes me is, staying in the medical space, radiologists. Radiologists look at hundreds and hundreds of slides a day. Watson or an AI-based system could replicate that same type of diagnosis and image analysis. Ten years from now, do you think there are going to be more or fewer human radiologists employed in the US? What's the impact on industries like that?

The impact is actually about helping people do a better job. It's really about ... take the case of the doctor. If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It's not about replacing them. It's about helping them do their job better.

It does have some of the same dynamics as every tool we've ever created in society. I like to say that if you go back and look at the last 10,000 years of modern society, since the advent of the agricultural revolution, we as a human society have been building tools: hammers, shovels, hydraulics, pulleys, levers. A lot of these tools have been most durable when what they're really doing is amplifying human beings, amplifying our strength, amplifying our thinking, amplifying our reach.

That's really the way to think about this stuff: it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together is greater than either one of them would have been by themselves. That's really the way we think about it. That's how we're evolving the technology. That's where the economic utility is going to be.

I completely agree, but I do think there are going to be industries that are obviated because of the efficiency introduced by these intelligent systems.

They're going to be transitioned. Yeah, they're going to be transitioned. I don't want to diminish that point by saying it this way, but I also want to be sure that we aren't thinking about this as the elimination of jobs. This is about transforming the jobs that people perform. I'll give you an example. A lot of discussion about how this may take away jobs in the call center. Well, guess what? There's a lot of work that call center agents do that they don't need to be doing, they don't like doing, that takes away from their ability to do things that are more interesting.

The churn that we see in call centers is largely driven by the nature of the job. If you think about being a call center agent, you're sitting at the end of a telephone line listening to irate customers all day long asking the same questions over and over again, and it's hard to go home at night feeling really good about what you did that day. It's hard to brag to your friends and family about this job that you have and how good you are at doing it when that's the situation you're in.

If we can get the cognitive system, through a conversational agent, to offload some percentage, let's say 30 percent, of the calls coming in by answering the customers' most common and pressing questions quickly and efficiently and taking care of that mundane work, then what's left are the kinds of questions that inherently require more of a human touch, which you're going to turn over to the call center agent. The problem they're dealing with for that customer is more interesting, more challenging, and requires more intellectual effort, but they're also dealing with a customer who's been satisfied. They're coming in a little bit happier. They're not coming in all irate about their problem.

For the call center agent, it has actually improved their job. It makes it possible for them to do their job better and be more fulfilled by it. In the meantime, the customer, the consumer, got their most pressing issues solved quickly. They're not sitting on hold for 10 minutes. They're not waiting to get routed to the right person with just the right knowledge. They're getting the information they need most readily and are able to move on with their life with probably a better decision, certainly better information, or at least more consistent information. It actually benefits both sides of that equation.

It's interesting. Some of the demos I saw today showed that call center applications can anticipate and detect the emotional state of the people calling in pretty effectively, so it's not just transactional. The system can actually read the state of the person on the other end of the line pretty well.

Which is really essential if you think about [it]; a conversation has two elements to it. One is that what people say to begin with is generally not what they're really there for. If I say, "What's my balance?" well, that's not really my problem. Yeah, I need to know my account balance, I need to know how much money I have, but my problem is I'm trying to buy something, or I'm trying to figure out how to get money in the right position to pay my bills this month, or I'm trying to save up for my kids' education. My problem is bigger than that first question I asked, and a conversation should be about getting to that real problem.

The second common characteristic of a conversation is that typically it carries a sort of emotional arc to it. People come in in a certain emotional state, and part of the conversation is to move them through an emotional shift that oftentimes means moving them from being angry to now being satisfied. In some conversations, we might get into it. It might actually get a little heated. You see an emotional arc that starts out maybe calm and then moves to a more contentious discussion that eventually then gets resolved.

Being sensitive and aware of emotional state in the parties involved is an important part of being effective in that conversation.

What are some of the other applications that you think are really transformative that are available today?

I think that [with] any of them, what we're doing is engaging the user, the customer, in a way that results in inspiring them. For me, ultimately, and again going back to conversations as an example, typically when human beings get into a conversation, we come to the table with an idea. You have an idea. I have an idea. That starting idea is the beginning of the conversation, and over the course of the conversation we evolve those ideas. We blend them. We merge them. We maybe discount them or amplify them. We evolve to a point where coming out of the conversation we have a better idea, hopefully. Ideally.

To do that, there has to be not only the give and take, but an element of how do you inspire somebody? How do you cause people to activate their imagination? How do you cause them to think about something they hadn't thought about before or see something in a light they hadn't thought of before or to see another point of view that takes them down a path that they didn't even know to think about, to ask questions they're not thinking to ask? Those are the examples, those are the situations that I think are most promising and will have the greatest benefit for people.

Is that happening today, or is that something that needs to happen down the line as the technology evolves?

No, it's happening. We have examples of that happening now. In fact, going back to oncology as an exemplar, for the best doctors in the world, the treatment options being presented may be obvious to them for the most part. There may be one out of ten cases where they might say, "Well, wait a minute, that was an interesting idea." It won't be as often. But, like you said earlier, if we take that out to community settings, regional settings, areas where there aren't those levels of expertise, the fact that the system can introduce new ideas, new treatment options, really matters. We're seeing that already.

Then, of course, we're moving beyond what I think has become the classic chatbot scenario that some of us are beginning to see in different examples. Today, if somebody gets a fraud alert on their credit card and they go to a chatbot, it might simply be, "Was that transaction something that you did or not? If it is, then fine. If not, then we're going to do something about canceling the transaction." Now it becomes, "Okay, you need a new credit card. Where's the best place to get it to you? Should we mail it to you? Should we not mail it to you? Oh, you're getting ready to go on this trip. Then clearly we're not going to be able to mail it to you. We've got to get it to you faster than that.

"Oh, you're going overseas. Maybe there's a credit card option here that you weren't exposed to before, didn't know about, where we handle currency exchanges in your favor better. Oh, you're using this for business. This is an overseas trip. You're using this for business expenses. Well, here's a credit card that has an interest rate that's more appropriate for that." These are all very simple examples, but each of them are opening up a new set of ideas that doesn't typically happen in your simple chatbot today and yet can really be very empowering for human beings.

The interesting point there is that as you're going through all of those options, in the past that would be a script, a script with a couple of branches, predefined in advance. It's a very different thing when a chatbot is actually reacting to the information you give it and the information you've already given, and leading you down paths that have not been scripted. It knows that you're traveling, but you haven't necessarily told it that. It found that information in your email history.

It can find things about you it discovered along the way.

We talked about oncology because it's a great example. We talked about chatbots because most people have had some interaction with them. But this is a technology that really scales across every industry. It's hard to think of an industry that won't have some kind of cognitive component to it. Are there any examples that are just way out there that people haven't thought about yet?

The thing that's amazing to me is how every single day somebody's coming up with another new idea. That's why I think we're in such a very interesting phase, because by having focused on decomposing what we have in terms of cognitive capabilities into building block services, it's really freeing up people to use their imagination and go pursue ideas that we've never really considered before, whether that is using visual recognition to survey the landscape.

In California, for example, a company there is using visual recognition to look at the topography and the topology and recognize in the image the difference between a concrete surface, an asphalt roof surface, a grass surface, trees and shrubs and these things, to estimate how much water is being consumed and where there may be water leaks and things that could be done to improve the efficient use of water, as an example.

Or, in the legal arena, using these things to help lawyers read through literally millions and millions of pages of background material. It's like finding a needle in a haystack: where's that one piece of paper that's really relevant to this particular case? Trying to sort through all that. The opportunities are just enormous.

I think one of those qualifications is having large quantities of data that need to be parsed through. You talked about medical records and being able to scan them for the relevant information. Those records, over the course of your lifetime, could be many hundreds of pages long. Maybe your family doctor has an inkling of what's in them, but they're not going to remember all of it, whereas the system never forgets.

Yeah. A doctor may have five, maybe ten minutes to look through that medical history before coming and consulting with you, and yet there's all kinds of very relevant information that may be in your history, your past, that under any other circumstances they would miss just because they don't have the time, that if they had that would make a difference.

Think about a situation where a woman told her doctor that her mother died of breast cancer two years ago. Chances are the doctor will have noted that in the record, but at this moment, if the woman is coming in presenting with a lump in her breast and the doctor doesn't see that note, that's a very important piece of missing information. Now, maybe they'll rediscover it by talking to the patient, but maybe not. Do you really want to take the risk of not having known it when something like that is so germane?

The overarching characteristic of where this stuff tends to be useful is, as you mentioned, where there's lots and lots of data. Yeah, but really it's any of those aspects of who we are as human beings where our cognitive capability begins to reach its limit. We're good at reading. We can read something, assimilate it, adapt to the information, and make use of it in very powerful ways as human beings. But we're not very good at reading lots of data. The idea of reading tens of thousands, hundreds of thousands, millions of pages of literature in a day is so far beyond our capacity.

The question becomes, as we grow into a world where the amount of information produced on a daily basis is growing exponentially, how much of that information are we not making use of? How much of it has that little tidbit that's absolutely critical to the decision we need to make, and we're not getting to it? And if it's not the amount of information we read, it's: How much do we assimilate? How much are we able to recall? Are we able to see the little patterns in that information that are relevant to our decisions?

There are lots of things that we as human beings are good at. There are also a lot of things that we're not very good at, and that's where I think cognitive computing really starts to make a huge difference: when it's able to bridge that distance, to make up that gap.

It seems pretty clear this is the world we're moving into. How prepared are we? When you look at our education system, our economy, our political structures, how well prepared are we to live in a world with this type of cognitive computing as a component?

It's interesting. This draws on one of the key value points that we possess as human beings, which is our ability to adapt. If you look at it in purely discrete terms and ask where this is going, if we were to leap forward 10 years and say, "Where will we be in 10 years? Are we prepared for that?" the answer is probably going to be no. There's a lot more that we have to do. But human beings have this remarkable ability to adapt on the fly and grow with the changes occurring around them.

Think back 10 years ago when the smartphone was really just starting to become available to us, let alone popular, and how much change we have gone through as a society over the last 10 years. Think about what your life is like on a daily basis with and without your smartphone. We can complain about how much it may be taking away from other experiences, and that may be true, but the point is, we didn't spend a lot of time 10 years ago fretting over, were we prepared as a society, even though in fact we've gone through a lot of changes over the last 10 years that we probably weren't fully aware of as we assimilated this change in technology and started making use of it in very effective ways.

There's a lot that we have to do. There's a lot that we're going to be doing over time, a lot of growth that we'll go through, a lot of changes that we'll have to make in education and politics and other areas, but we will.

We'll get to my last questions. What technological trend concerns you the most? Is there anything that keeps you up at night?

I think the biggest concern I have right now is that people do need to take responsibility. We as engineers and providers of technology, consumers of technology, and people responsible for regulating technology really do need to be conscious and think through now what we want to do to protect ourselves and prepare ourselves for the changes that are occurring. It won't be that we won't adapt; we will. The problem, of course, is that in the process of adapting to it, we also won't be conscious of what it is doing to us, how it's affecting us, and where people may be exploiting the technology in ways that we don't prefer, that we aren't comfortable with, or that in retrospect we won't want.

I do think that we need to be conscious and thinking about what we do and don't want to have happen in our lives with this technology. Specifically, we as the vendors and suppliers of this technology, and the people who are consuming these technology components and building applications out of them, should at this moment assume responsibility for ethical behavior, or behaviors that are born from ethical values.

As an example, we strongly recommend to any of our application developers, any of the institutions creating applications using these technologies, that they be very transparent with their end users about the fact that this is a cognitive application, it's a computer, and that they not attempt to masquerade it as a real human being. Don't pretend. Don't let this thing pretend.

Don't imitate.

Don't imitate, and don't let your customers ever be misled into believing that this thing is a real person. Ethically, it's wrong, and I think it creates a risk of vulnerability. A human being interacting with another human being can make certain assumptions about our flaws, about our inability to retain a lot of information. When dealing with a cognitive system, we need to be mindful that the people providing that cognitive solution have a responsibility for the privacy and protection of the information we supply it. We shouldn't ever be forgetful of that fact.

In terms of technology on the upside, what technology do you use every day that just inspires wonder? What's changed your life?

I think it's the fact that I can now get access to information that used to be out of reach. We've had information available to us on the internet for a long time, but oftentimes we stop trying to get at that information because it's overwhelming. I was out looking at some camera equipment, just trying to make decisions about the trade-offs between different cameras-

I'll send you a link to our buyers guide.

There you go. It gets overwhelming, and you have to rely on other people to provide that advice for you and assume that they've done the research for you. But even then, they're doing so based on assumptions they've made about what you need and what you care about. At some point you just give up and say, "Okay, fine, just tell me what to do, I'll do it." Or you go to a whole bunch of websites and see all these opinions, and it gets confusing and contradictory, so you say, "Well, the heck with all of them. I'm just going to go with what feels good to me."

Now, because these systems can accumulate and assimilate and organize vast quantities of information, even for the people who are making recommendations, even for the advisors, it benefits them because it helps them do a better job. A way I like to say it is it doesn't do our thinking for us, it does our research for us so we can do our thinking better, and that's true of us as end users and it's true of advisors. It's true of anybody who's in that role of being an analyst.

I think of that application because we're always trying to help people make buying decisions. We're not far from a system that could look at all the photos you've taken over the last five years, see that you like to do wildlife photography or closeups of flowers, and then make a camera recommendation based on the pictures you take.

That's right. Flamingos. I don't know why.

This is the best camera for taking pictures of flamingos.

Flamingos, right.

We're almost there. The technology exists, it just hasn't been programmed yet.

Yeah.

Or taught, as we do these days. Rob High, thanks so much for doing this.

Thank you very much.

For more Fast Forward with Dan Costa, subscribe to the podcast. On iOS, download Apple's Podcasts app, search for "Fast Forward" and subscribe. On Android, download the Stitcher Radio for Podcasts app via Google Play.

This article originally appeared on PCMag.com.