Craig Mundie: Pacific Health Summit

Special Presentation: How Computing Can Help Transform Healthcare
Remarks by Craig Mundie, Chief Research & Strategy Officer, Microsoft Corporation
Seattle, Wash.
June 23, 2011

CRAIG MUNDIE: It’s great to be back. This is the seventh summit, and it’s the fifth one at which I’ve been the lunchtime entertainment.

You know, many of the topics discussed each year are things I have some tangential knowledge of, but I think one thing that’s clearly true is that almost everything in the health care field is moving in a direction where information technology will be more and more integral to our ability to do these things. Whether you want to talk about the future of health care delivery, or about how we’re going to focus on health and wellness as opposed to just remediation, it’s pretty clear that technology will be the glue that binds a lot of this together and enables these changes.

So, each year, I try to look forward and show you some things that I think are happening that are going to basically manifest themselves one way or the other in the years ahead.

And just to make a point, it turned out I didn’t speak the last two years, but if you go back to 2006, you know, I had a bunch of robots, one that actually ran around in the audience, and it was the early model of this particular robot.

And today, there are 400 of these things in production environments and hospitals around the world, where doctors sit at a control station and drive these robots around in order to be telepresent with patients who would otherwise not have access.

In 2007, I think it was, I gave sort of a demonstration with a traditional inkjet printer, and posited that in the future you’d take these inkjet technologies, replace the inks with the chemical components of drugs, and print medicines, because the formulation could be controlled so precisely by these inkjet kinds of technologies.

And sure enough, I understand GlaxoSmithKline is completing a two-year trial this year, printing medicines, multiple medicines at a time, onto pills that act just as a carrier. In that demo we said they’d be little paper wafers, but, in fact, it’s being done in a university collaboration.

So, some of the things I’ll show you today may seem esoteric, but the message here is that these things are happening faster and faster, and if you don’t embrace the idea that they represent transformational technologies in the future of health care, then you’re likely to be left behind, or at least not get much advantage from them.

One of the things that I think we really want is a learning health care system. I would argue that in the past, health care systems learned through people; I believe one of the important changes in the years ahead will be that they will also learn through machines.

We’re now building computing systems in support of the things that we do every single day on the Internet that are just incredibly capable, and the scale of these things is really mind-boggling.

As Lee said, you know, I’m on President Obama’s Council of Advisors on Science and Technology, and I spent 15 months across 2009 and 2010 co-chairing the writing of a report on how to transform the U.S. health care system through more aggressive use of information technology.

One of the things we found as we assembled the panel of people to work on it is that the traditional medical informaticists kept telling us we didn’t understand that their data was really big, and that was one of the reasons it was so hard to operate on. And those of us who came from the Internet side, and more generally the information technology side, said, no, you don’t understand: by now, your data is really tiny.

So, we actually did a study, and I think the largest single system we could find in the United States, in terms of the data it held, was the Beth Israel system and its affiliates in the Boston area. They’d been collecting data for 27 years.

And to put it in perspective: today, every five hours, consumers upload enough video to YouTube to equal all the data Beth Israel has ever collected in its lifetime.

A different way to think about it: every two days, people uploading pictures to Facebook, just the little photos, also reproduce the entire lifetime record of the Beth Israel system.

So, I just offer that as a way of trying to convince you that while, yes, medical data is sort of big and complicated, by today’s standards, it’s actually not very big. So, it’s important to get that in your head as you think about what we can do in the future.

So, partly what we want to do is figure out how we can use these new technologies of super scale analytics and machine learning, and apply them to the ultimate goals of improving outcomes and lowering costs in the health care environment.

Another thing that’s clearly more true today than even 10 years ago is the degree to which the population at large, globally, is becoming more and more tech savvy.

Today, if you think of this little cloud picture here, what you see contained in it is largely the consumer device Internet. It used to be just personal computers, but now it includes phones and tablets and cars, and all kinds of new sensors: for example, the new Polar heart monitors you wear while you exercise, and the new bathroom scales that measure your body fat. All these things now transport their data automatically into consumer data repositories, from which you can analyze it and pass it on into the medical system.

So, we’ve started to see consumers who are not only interested in their own health but more and more invested in it, either because of diseases they currently live with or because they want to focus more on health and wellness. They are technology enabled, and our ability to communicate with them, educate them, train them, and analyze their data gets better almost by the week.

When we thought about this general problem of all the data that’s out there in the health care environment, one thing became clear: the community has struggled mightily, probably for more than 20 years, to figure out how to standardize its data.

And one of the things that we believed, those of us who came at the problem from the Internet model rather than the traditional enterprise data model, is that the whole idea of standardizing records doesn’t really work. The Internet taught us this lesson, and most of us abandoned the idea of standardizing particular record formats, because the way we want to do the analytics, and, in fact, the way we want to be able to merge and discover this information, and the sheer scale of it, preclude the idea that anybody could do an a priori analysis, normalization, or naming of all these things. And yet that concept hasn’t really found its way into the medical community; mostly you still hear it talking broadly about the need to continue standardizing records in order to create exchanges.

And so when we looked at this in the PCAST report for President Obama, we said there are really two things you do want to standardize. One is the metadata description language, and we proposed a simple way to do that, again built from Internet technologies. With that metadata you want to describe the provenance of all the data, and the controls that have to be applied to it.

Unlike the Internet at large, which has no privacy constraints per se on what people publish (though there are plenty of derived privacy problems you can now read about in the press every day), in the medical world it was clear we had to come up with an express way to deal with privacy constraints.

And, in fact, if you adopt these kinds of architectures, wrap the data and metadata together in a sort of bundle, and put a cryptographic envelope around it, you have a fairly robust way to ensure that the ability to share data doesn’t in and of itself require a diminishment of privacy.
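
To make that idea concrete, here’s a minimal sketch in Python of what such a tagged data element might look like. The field names, the metadata vocabulary, and the choice of Fernet symmetric encryption are illustrative assumptions, not the actual PCAST specification.

```python
# A sketch of a tagged data element: the payload travels with
# machine-readable metadata (provenance and privacy controls), and the
# whole bundle is sealed in a cryptographic envelope.
import json
from cryptography.fernet import Fernet

def seal_element(payload: dict, provenance: dict, privacy: dict, key: bytes) -> bytes:
    """Bundle the payload with its metadata and encrypt the bundle."""
    bundle = {
        "metadata": {
            "provenance": provenance,  # who produced the data, when, and where
            "privacy": privacy,        # controls that must be honored on use
        },
        "payload": payload,
    }
    return Fernet(key).encrypt(json.dumps(bundle).encode("utf-8"))

key = Fernet.generate_key()
sealed = seal_element(
    payload={"loinc": "2345-7", "value": 98, "unit": "mg/dL"},  # a glucose result
    provenance={"source": "Example Hospital Lab", "recorded": "2011-06-23T09:15Z"},
    privacy={"allowed_uses": ["treatment"], "requires_patient_consent": True},
    key=key,
)

# Only a holder of the key can open the envelope, and the privacy
# controls arrive attached to the data rather than being implied.
element = json.loads(Fernet(key).decrypt(sealed))
print(element["metadata"]["privacy"])
```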

And so with that in mind, we said, all right, if you had data elements described this way, then all the hospitals would be encouraged to get their data into such a form. And once you’ve done that, you say, oh, I’ve got all these other constituents who care about the data, or produce the data, or in the future want to analyze the data, and you need a way to cross-connect these things.

Again, the Internet has shown us a model, at very large scale, for how to do this. The thinking was that we could essentially interpose a set of Internet-like search and indexing capabilities to provide controlled access to this information, and through that you could link together the institutional requirements for information access.

But, of course, we’ve now got these consumers sitting on the side generating all their own data with their scales and heart monitors and everything else. What they’re producing is what I call the continuous health record, where the institutions are really producing the episodic record of care. My view is that if you really want to look at things, you want to merge those two together. So, the obvious thing is to treat the consumer data much like just another instance of institutional data, and bring all of it together so that there’s one general data architecture that allows, with all the permissions appropriately attached, the exchange of institutional information and patient-generated information.
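
As a sketch of what that merged architecture implies, here’s a toy example in Python with pandas; the record schemas and values are invented for illustration.

```python
# Merge the institutional "episodic" record with the consumer-generated
# "continuous" record into a single patient timeline. Treating consumer
# data as just another instance of institutional data makes the merge a
# concatenation plus a time sort.
import pandas as pd

episodic = pd.DataFrame([
    {"time": "2011-03-02", "source": "clinic",   "kind": "hba1c",    "value": 7.9},
    {"time": "2011-05-20", "source": "hospital", "kind": "er_visit", "value": None},
])
continuous = pd.DataFrame([
    {"time": "2011-05-18", "source": "scale",         "kind": "weight_kg", "value": 92.4},
    {"time": "2011-05-19", "source": "heart_monitor", "kind": "avg_hr",    "value": 71.0},
])

timeline = (
    pd.concat([episodic, continuous], ignore_index=True)
      .assign(time=lambda d: pd.to_datetime(d["time"]))
      .sort_values("time")
)
print(timeline)
```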

So, just as today on the Internet we see all this user-generated content, the things that get posted on Facebook and pictures and videos and everything else, there’s no reason to believe that in the future there won’t be a lot of user-generated content that will be medical content; and, in fact, the ability to analyze this in concert with all the data that’s produced in the clinical care environment I think will actually be quite enlightening.

In the area of machine learning and what its capabilities are, I think this is a very important new technology. And practiced at the scale of Internet-size machines and databases it’s an incredibly powerful technique.

In Microsoft Research we’ve actually been doing some experiments. We use this machine learning technology in many different elements of our business operations every day. When the Bing search engine runs, there’s a lot of machine learning guiding how it learns and searches the Internet. When we have to generate a particular ad for somebody, it’s machine learning that essentially figures out the right kind of ad to show that person based on what they’re doing. And the question is, how do you take these things and begin to apply them more in the medical space?

So, as a research project, working with our Health Solutions Group, we took a dataset that came from one of our development partners, a medical center in the Washington, D.C. area. Through that we had basically 10 years of data, from 2001 through 2009, and it was all the data. It wasn’t just the clinical data; it was the billing data, the operations data, every kind of data they had, all put into one big, giant data structure.

And we set about to answer a question: if you look at some of the things that are expensive in medicine, is there a way to not ask the doctors, or the trained people, what they think the answers are, but to ask the data instead, and would you get a different answer?

So, this is just a snapshot of some output of evidential findings for people who were admitted to the hospital there with congestive heart failure. What we wanted to figure out was whether we could build a model that would predict who would be readmitted and why; and if we could, then obviously you could seek to intervene.

Now, there are obviously many things the medical establishment knows about why people admitted with congestive heart failure get readmitted. What was fascinating, though, was what happened when we just said, hey, machine, go look at 10 years’ worth of data and come back and tell us why you think it happens. Well, yes, it found all of the traditional things, but it also found several things that were not in the common practice.

For example, for people with congestive heart failure, if it turned out they had a gastric ulcer, or had been given GI drugs while they were there being treated, their odds of being readmitted were dramatically higher. And at least within the community that was working on this, they had historically not known there was such a tight correlation among those three things.

There was another factor that nobody said they knew was tightly correlated: if patients had also been diagnosed with a depressive disorder, they had a much higher probability of being readmitted.

These things are not part of the daily practice of preventing readmission. The idea was: could we build a system that runs a predictive model every day as part of the workflow, one that for each of these important diseases gives you this kind of printout every day and says, well, John Doe has the highest probability of readmission, and here’s why; and therefore, if you intervene now, maybe he won’t come back.
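
As a rough illustration of the kind of model involved, here’s a minimal sketch in Python with scikit-learn. The features echo the findings above, but the data and the modeling choices are invented, not the actual system Microsoft Research built.

```python
# Learn readmission risk from historical congestive heart failure stays,
# then score today's inpatients so staff can intervene before discharge.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per historical stay; readmitted = 1 if back within 30 days.
history = pd.DataFrame({
    "gastric_ulcer":       [1, 0, 1, 0, 0, 1, 0, 0],
    "gi_drugs_given":      [1, 0, 1, 1, 0, 1, 0, 0],
    "depressive_disorder": [0, 0, 1, 0, 1, 1, 0, 0],
    "readmitted":          [1, 0, 1, 0, 1, 1, 0, 0],
})
features = ["gastric_ulcer", "gi_drugs_given", "depressive_disorder"]
model = LogisticRegression().fit(history[features], history["readmitted"])

# The daily workflow step: rank current patients by predicted risk.
today = pd.DataFrame(
    {"gastric_ulcer": [1, 0], "gi_drugs_given": [1, 0], "depressive_disorder": [1, 0]},
    index=["John Doe", "Jane Roe"],
)
today["risk"] = model.predict_proba(today[features])[:, 1]
print(today.sort_values("risk", ascending=False))
```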

This is just goodness for the patient and goodness for the system from a cost point of view, and we think that this is just scratching the surface of the kinds of things that can be done using this machine learning technology.

Another thing that’s obviously happening now, and it’s really been true for 35 or 40 years, is that it’s really the individual who makes the choices about which technologies become important in the enterprise.

Microsoft was really born of that phenomenon: individuals decided they wanted a word processor and a spreadsheet on their desks, so they went out and bought PCs and software and ran them. Later on, those tools became much more widely adopted across whole organizations.

The same phenomenon has now been seen with cell phones, tablets, pads, and myriad other devices, and it’s our belief that this trend will accelerate, if anything. Thirty-five years ago, there weren’t that many people who were really focused on these questions or were tech savvy, but now the whole population is becoming more tech savvy, and the penetration of these things, particularly through cell phone technologies, is getting very deep, not just in the developed world but even in the emerging economies.

So, what I want to show you is some new technology that we came out with at Microsoft after a lot of years of research, and we’ve applied it first in the domain of entertainment, but I want to show you because I think this kind of technology will be important.

We want to change the way people interact with computers. Today, you have to learn about the computer. It’s a great tool, and if you can master it, like mastering a musical instrument, you can do some amazing things.

But to be more inclusive and more powerful in terms of the semantic interaction between people and computers, we think you want to raise the level of interaction and make it more natural. We think of this as computers becoming more like us, becoming more of a helper and less of a tool.

So, we built this thing, which we have bolted on the bottom of this screen here, called Kinect. It’s really the world’s first camera and audio sensor that sees in three dimensions instead of just two. We used it to create a whole new generation of games. They came out last fall, and I’ll just play a video clip for those of you that aren’t familiar with it.

Basically, there’s no controller; you are the controller. So, you just get up and move, and essentially the system, the machine vision system sees that movement and translates it into actions of the characters that are in the game on the screen.
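
To give a feel for what that translation from movement to action looks like, here’s a toy sketch in Python. The skeleton-frame dictionary is a hypothetical stand-in for what a depth-camera SDK delivers, not the actual Kinect interface.

```python
# Turn tracked joint positions into a game action. Assume the vision
# system hands us joint coordinates in meters, in a hypothetical shape:
# {"head": (x, y, z), "right_hand": (x, y, z), ...}

def detect_raise_hand(frame: dict) -> bool:
    """Example rule: right hand at least 10 cm above the head."""
    head_y = frame["head"][1]
    hand_y = frame["right_hand"][1]
    return hand_y > head_y + 0.10

frame = {"head": (0.0, 1.60, 2.0), "right_hand": (0.1, 1.75, 1.9)}
if detect_raise_hand(frame):
    print("character jumps")  # the game maps the gesture to an on-screen action
```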

And so this has become a phenomenon. In fact, it earned a Guinness World Record for the fastest zero-to-8-million units of anything ever made and sold.

So, it’s clear that it was more gender neutral and expanded the demographic group participating in this kind of environment, and we think it’s just the beginning of a capability.

Before this came out (you can buy it for $149 at Best Buy), if you were a researcher who wanted to work in this space, the cheapest camera that could do something similar cost about $30,000. So, when you go from $30,000 to $149, you just get this incredible explosion in the number of people who start to investigate what can be done.

We’ve been doing some of this investigation too, asking: what happens if you don’t just want to put yourself in a game? What happens if you can send your avatar out to meet with other avatars?

A year ago, when I gave this talk, no one knew what an avatar was. Jim Cameron solved that problem with the movie. So, let me show you something that’s actually going to go live worldwide, where you and seven of your friends can all sit at home and essentially have meetings in cyberspace.

(Video segment.)

CRAIG MUNDIE: So, you might ask, you know, what does any of this have to do with medicine? And so I’m going to try to show you some wild and crazy ideas that have already come from people about how would you use this in a medical environment.

One of the things is, of course, just to make it more natural for people to interact. So, here I’m going to show you a hypothetical system that combines conferencing, collaboration, touch, gesture, and speech input, all of the things that make the computer more like us; you interact with it much as you would interact with somebody else.

So, if I go over here and I just — let’s say this is my office, I walk into the office, it sees me. It actually can distinguish one person from the next. So, if somebody else walked in, it either wouldn’t respond or it would show something different.

And so here it’s got a summary of the patients that I might be interacting with. If I approach the thing, because it sees in 3D, it says, oh, he’s closer, I can give him more data, and it gives more resolution to this. It gives me perhaps some calendars or something I’m supposed to do.

If I have someone I want to collaborate with, I can say, hey, let me have a videoconference with my partner here, and we can talk about these people. We’ve been trying to, for example, build up a trial around metabolics.

So, here I can use gestures. If you look at my hand, it basically animates that menu on the left, and I could select something just by pointing at it. I can also use speech.

So, here we’re looking at a diabetic population. So, let’s say, “System, show me the cohort with BMI greater than 33.”

So, here it filters them out, selects them, and presents them to me. What I really want to do with my colleague is determine whether any of these people might be appropriate to introduce into this new trial.

So, I can say, “System, filter these people based on eligibility for the new trial.”

So, it might pick out these five people. And I know each of them, we may have talked to them about it, and at that point, I could say, “System, enroll them in the trial.”

So, it might collect up their data, anonymize it appropriately for the trial, send them e-mail to confirm that their permissions to do this are agreed upon and documented, and then I can just go on and do more work.
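
Behind that spoken exchange is an ordinary cohort query. Here’s a minimal sketch in Python with pandas; the patient table and the eligibility criterion are invented for illustration.

```python
# The query behind "show me the cohort with BMI greater than 33" and
# "filter these people based on eligibility for the new trial."
import pandas as pd

patients = pd.DataFrame({
    "name":       ["Lori Penor", "A. Smith", "B. Jones", "C. Wu"],
    "diabetic":   [True, True, True, False],
    "bmi":        [36.1, 31.2, 34.8, 29.0],
    "on_insulin": [False, False, True, False],
})

# "System, show me the cohort with BMI greater than 33."
cohort = patients[patients["diabetic"] & (patients["bmi"] > 33)]

# Hypothetical trial criterion: the metabolics trial excludes insulin users.
eligible = cohort[~cohort["on_insulin"]]
print(eligible[["name", "bmi"]])  # candidates to discuss and enroll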

So, let’s say I wanted to drill in a little bit, say, “System, show me Lori Penor.”

So, here I get a chart, which integrates a lot of the information I might have about Lori. She’s a diabetic, and she’s had some recent problems maintaining the trajectory of her weight loss, and I can essentially say, you know, show me her caloric intake.

Here we’ve got a lot of activity and other measurements that we think are being accumulated by her technology at home. So, that’s sort of flowing into the system on a regular basis. The system can analyze these things and flag things.

So, for example, her glucose levels have spiked here, her activity level has had a sudden dramatic reduction, and the system annotates this and says she sprained her ankle 27 days ago. So, if I want to see the effect of that, or the correlation, I might drag it up onto this chart and ask, is this really a factor or not?

So, you have the ability to take big data, bring it together, and have the system do a lot of the analytics for you. It’s like having a great assistant, one that understands the context in which you operate.
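
The flagging itself can be quite simple. Here’s a minimal sketch in Python of detecting a sudden drop in daily activity from home-device data; the threshold and the numbers are illustrative.

```python
# Flag days whose activity falls far below the trailing-window baseline,
# the way the system above annotates the sudden reduction after a sprain.
import statistics

daily_steps = [8200, 7900, 8500, 8100, 7700, 8300, 1200, 900, 1100]

def flag_sudden_drops(series, window=5, k=3.0):
    """Return indices of days more than k standard deviations below
    the mean of the preceding `window` days."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        if series[i] < mean - k * sd:
            flags.append(i)
    return flags

print(flag_sudden_drops(daily_steps))  # [6]: the day activity collapsed
```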

But this is a person we’ve been trying to treat for a while. She has some psychological issues, the diabetes is essentially a problem for her, and many people might have similar problems.

So, one of the ideas about this Avatar Kinect that I showed you the tailgate party video a minute ago, one of the things we’re trying to do is to improve the quality of those avatars. Today, they’re just caricatures. And there’s a reason for that. It turns out the brain struggles if you get into what we call the Uncanny Valley. That’s where the realism of the avatar starts to get pretty good, but the behaviors don’t match human expectations. And that produces a level of cognitive dissonance that really is troubling for people. So, in the end there’s no practical operating point between caricatures and really photorealistic, highly well-behaved avatars that’s really comfortable for people.

So, in the research world we’re trying to build photoreal avatars, so in the future you can send somebody there that doesn’t just look like a caricature of you, it can be somebody that’s almost indistinguishable from you, and you’re animating that avatar in real time.

But when I showed this to some people in the medical community and told them that story, they said, hey, there are a bunch of things we’d like to do where we’d love to keep this caricature avatar, because, in fact, people want anonymity, and yet social contact is important.

So, we built a prototype of what that might be. Let’s say we had these people in these sessions. In an avatar-based world, it’s not like having a video camera stuck in the room recording something; people are just sitting at home in front of their televisions, interacting and being essentially projected into a three-dimensional environment where all their interactions, the spatial cues, the audio cues, are correct. So, there’s a naturalness to it, even though you’re looking at what appear to be cartoon characters. And because it’s a computer-generated environment, you can watch it or replay it from any vantage point.

So, here you might have had the people that were in this session. This is essentially a support group of diabetics who get together and talk about their diabetes problem in a moderated forum. So, I can essentially replay what that was like. So, you could hear each person talk, you can see what their actual movements were. And interestingly, even though these are cartoon characters, if you get the animation of the eyes, eyebrows, some elements of the face and the mouth nominally correct, most of the major human emotions are accurately portrayed. So, if you’re trying to look at how people react in this environment, you get a huge amount of cues, even though you’re looking at their cartoon character.

Now, in this case I really didn’t care about this guy, I was worried about Lori. So, I can say, let’s move the camera. So, I can drag the computed camera and put her in the middle of this picture, and now I can essentially say play it again.

So, I’m playing the same thing again. If somebody had recorded this with video, the camera would probably be on the guy talking; but what I care about is how she is behaving. And, in fact, you can see she’s not particularly engaged. So, I might be worried about her mental condition.

So, at the end of this, I might think, well, I’m not sure this is working for her, and I might want to refer her to a colleague. So, I can touch on this, and get a menu of things, and I might refer her to a colleague or someone else.

So, I think more and more our view is that there’s going to be a lot of integration of workflow, analytics, communication, and collaboration, and, in fact, we may find interesting ways to use even this type of caricatured interaction.

One of the reasons I think these things are likely to happen broadly, and even at an accelerating pace, is simply the amount of infrastructure being built in the world to support general business and consumer use of the Internet. Mobile telephony, and mobile broadband behind it, are clearly an integral part of that.

If you look at these two graphs, the one on the left shows the developed regions of the world. The one on the right has its scale expanded by a factor of six, but you can see that the shape is somewhat similar and that, in fact, mobile broadband growth is quite dramatic even in the poorer parts of the world. Because of that, we think there will be lots of new ways to think about the challenges of getting data, sharing information, and tracking some of the broad workflow issues that arise in this environment.

This morning, at the breakfast session here, one of the comments that emerged was: look, in some cases we already have these vaccines; we just don’t have a good way to get them delivered. The world is a big and hostile place, and we don’t know how to make all that happen.

You know, what’s in this picture is a thing I believe the PATH organization calls “smart connect,” a way of trying to extend the reach of the cold chain in a knowledgeable and economical fashion. As I understand it, they’ve got a freezer where they can freeze cold packs, and a portable unit where they can put the vaccines to continue the cold chain and make it more mobile. But this doesn’t last all that long, and then they’ve got to come back and go through the process again. So, either they have to extend the reach of electricity and other infrastructure farther, or you have to find some other interesting way to do it.

I remembered talking to Bill Gates a couple of years ago; he told me that this cold chain problem was a really significant challenge for vaccine delivery. Nathan Myhrvold, who now runs a company called Intellectual Ventures, and who hired me at Microsoft almost 20 years ago, does projects for Bill with his invention crew, and one of the challenges they took on was this one. And, in fact, I noticed that they have little cards about this device on all your tables here.

Bill challenged them and said, can’t we find a better way to maintain the cold chain? So, they looked at how we insulate things in outer space and how we do cryogenics in the laboratory, and from those they designed essentially a smart, cryogenically insulated container that has a very simple way of putting vials in and out without disrupting the cold; you charge the thing up with cold once. As I understand it, it keeps all those vials between zero and 8 degrees C, in a 40-degree ambient environment, for 90 days with no electricity. So, your ability to put vaccines in the back of a pickup truck and take them very far out there is just dramatically improved by this kind of capability. And I know they’re very interested now in showing this technology to people.

I think these are examples of unexpected solutions to problems, and I’m a big advocate for bringing the world’s engineers to bear on the problems of medicine. Today, medicine has been more or less insulated from that. Engineers do great work in big companies building diagnostic equipment and other things, but there are a lot of problems engineers could tackle where I don’t think we’ve done a good enough job of bringing the two together, and this is just one example of a positive outcome in that space.

Another thing: if you glance at this map, which we’ve color-coded red, yellow and green, the green countries are those that actually have a fairly high proportion of doctors per 10,000 people. There’s a very small number of them, generally countries with an advanced society and a relatively small population. The bulk of the world is in the yellow area, certainly much of the developed world, and, of course, a huge part of the world’s population is in the red countries.

So, one of the questions is, how many doctors would it take to move the red countries to the lowest level of a yellow country? It turns out the answer right now is about 1.8 million doctors, and that’s a lot of doctors.
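
The arithmetic behind a figure like that is straightforward: for each under-served country, the shortfall is the target density minus the current density, times population over 10,000. Here’s a minimal sketch in Python; the country numbers and the threshold are invented, since the real estimate aggregates actual per-country statistics.

```python
# Doctor-gap arithmetic with invented numbers.
red_countries = [
    # (population, current doctors per 10,000 people)
    (160_000_000, 3.0),
    ( 85_000_000, 1.5),
    ( 45_000_000, 4.2),
]
TARGET_PER_10K = 10.0  # hypothetical density of the lowest "yellow" country

gap = sum(
    max(0.0, TARGET_PER_10K - density) * population / 10_000
    for population, density in red_countries
)
print(f"additional doctors needed: {gap:,.0f}")
```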

So, in my mind, one of the lingering questions for some time has been how we’re going to get the number of trained health care professionals needed to serve that many more people. And, of course, that doesn’t contemplate that between now and 2050 we’re going to go from 6.5 billion people to 9 billion, and estimates now are that the asymptote will probably be 10 billion people by the end of the century. That’s only going to exacerbate the problem.

So, one of the things we’ve been doing research on is asking whether we can find a way to substitute computers for some of those people. They’re clearly not a complete substitution, and what I’m going to show you is just a research exercise. But we said, let’s take a specific problem, in this case diagnosing the 16 most prevalent childhood diseases of the poor, and see whether we can create a computer system that presents itself as a robotic avatar, is completely autonomous, interacts more or less like a human would, and uses the same kind of inferencing a knowledgeable person would use to ask questions, refine a diagnosis, and then make a recommendation.
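
The inferencing at the heart of such a system can be sketched very simply. Here’s a toy Bayesian update loop in Python; the diseases, symptoms, and probabilities are all invented for illustration, not the prototype’s actual knowledge base.

```python
# Maintain a posterior over candidate diagnoses and update it with each
# answered question, the way a knowledgeable clinician narrows things down.
priors = {"malaria": 0.40, "pneumonia": 0.35, "measles": 0.25}
likelihood = {  # P(symptom present | disease), invented numbers
    "fever": {"malaria": 0.95, "pneumonia": 0.80, "measles": 0.90},
    "cough": {"malaria": 0.20, "pneumonia": 0.90, "measles": 0.60},
    "rash":  {"malaria": 0.05, "pneumonia": 0.05, "measles": 0.90},
}

def update(posterior, symptom, present):
    """Bayes update after asking about one symptom."""
    post = {
        d: p * (likelihood[symptom][d] if present else 1 - likelihood[symptom][d])
        for d, p in posterior.items()
    }
    total = sum(post.values())
    return {d: p / total for d, p in post.items()}

posterior = dict(priors)
for symptom, present in [("fever", True), ("rash", False), ("cough", True)]:
    posterior = update(posterior, symptom, present)
    print(symptom, {d: round(p, 2) for d, p in posterior.items()})
# The avatar recommends the leading diagnosis once its probability is high enough.
```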

So, the last few years we’ve combined our robotics work and our Bayesian inference system and worked with some people who are indeed experts in this type of medical diagnostic capability, and we built this as a working prototype, and I’ll just show you a video.

(Video segment.)

CRAIG MUNDIE: So, we’ve got to do a little work on the bedside manner and the speech — (laughter) — but the important thing to realize is this entire thing — and we did this work about two years ago — the PC that runs this thing probably today costs less than a thousand dollars. And if you couple that with this camera that I just showed you, which sells for $149 instead of that big thing that’s on the top of it, you begin to realize that the normal progress of technology in this space is really going to drive the cost of these things down.

So, the question is, what happens if I can put out an autonomous system that presents itself in a more and more realistic way as a person, give it the world’s best databases of diagnostic knowledge, and, in fact, give it more and more automated diagnostic equipment? A few years ago, I was here and showed a thing that the Micronics company was developing, which was really just a prototype at the time. This year, they’ve actually taken the molecular diagnostic part of it and are putting it into production; it’s going into clinical trials.

So, as we get more and more inexpensive, automated diagnostic equipment, and you supplement it with this kind of humanlike interaction capability, I do think that we have the ability to potentially transform health care, improving outcomes, lowering costs, extending access, and that’s what the magic triangle is.

So, whether it’s clever engineering, computers that are more like us, or radical uses of what initially appear to be consumer technologies, finding very interesting ways to apply them in the medical domain, I’m super optimistic about our ability to solve these problems. And while they are certainly very large and very diverse, the important thing is to recognize that you’re not trying to solve these problems in a world where technology is frozen; you’re trying to solve them in an environment where, if anything, technology is accelerating.

Thanks a lot.
