Remarks by Craig Mundie, Chief Research and Strategy Officer, Microsoft
University of Michigan
Ann Arbor, Mich.
Oct. 8, 2008
CRAIG MUNDIE: Thank you. I've had a very warm welcome since I arrived this morning, and I appreciate all of you coming to listen to what I have to say this afternoon.
I have a very cool job at Microsoft. I've been there 16 years; I went there at the beginning to do non-PC computing. In 1992 Bill Gates thought that would be an important thing, Nathan Myhrvold, our first Chief Technology Officer, thought it would be important, and I came there on a handshake with Bill that said, all right, we'll try to figure this out.
There really wasn't a way to make a business plan at the time. You couldn't go anywhere in the world and find anybody telling you, hey, I've got to have a smart cell phone, or a car I can talk to, or a television I can interact with. That's normally the case: people who have to predict the future are better off engineering it than trying to guess at what it might be, and one way you surely can't get it right is to just ask the population at large what they think it ought to be.
So, for 16 years I've enjoyed the privilege of being able to think about the things we believe are going to be important, and to have access to resources at Microsoft that allow us to try to make those things a reality.
Ten years ago, when Bill Gates made Steve Ballmer CEO, I went to work with Bill, and we began to think more formally about planning the company's future on a long-term basis.
Bill had long been a proponent of doing research, and today we've built the world's largest computer science and software research organization, larger than anyone else's in government or industry.
Bill decided two years ago that he would retire and focus on his philanthropy, and we split his job in half. Half went to Ray Ozzie and half went to me. In a sense I think I got the really cool half because it included all the global research operations, and that gives me the ability to work with some of the world’s smartest people and think about how we can invent technologies that will be important as we go forward.
I’m really excited about a number of the things that I and the people who work with me are doing, not just in research but in building new businesses in Microsoft.
I have three new business divisions being incubated now: one in health care, one in education, and one designing products for the emerging global middle class, and to some extent for the people below that demographic, who require some type of welfare but have a deep desire to gain access to these important technologies.
I think we're at a time when, as we look around the world, it's easy to see that society faces many global challenges.
My own belief is that there really is no solution to these challenges other than through the efforts of engineers and scientists, and that what we should demand from political leaders around the world is that they clear the way so the engineering of these solutions can come forward.
Oftentimes that gets lost. We live in a society today that is more likely to celebrate athletes and entertainers than scientists and engineers, but our society has gotten where it is largely on the back of breakthroughs from engineers and scientists, and I personally believe that will remain true in the future.
Today, we'll do this in two parts. In the first part I'm going to show you some demonstrations, and I'll use them to provide a thematic way of exposing you to some of the things we're doing at Microsoft that don't fit the stereotypical view of the company.
We’re involved in a tremendous array of technical activities and business activities, and while many of you may know us as, well, you just do Windows or PCs or I use Word or Excel, in fact, the company is much, much more diverse and getting more diverse by the year.
I wanted to pick a theme that all of us could relate to, and yet which would allow me to talk about the integration of many of these technologies in the future, and so I chose education. So, in the next half hour I will guide you through a look at some of the things that we think will be important using that as a topical theme, and then we’ll use the last half of the program just to do Q&A, and I’ll be happy to talk to you about any subject that’s on your mind.
So, let me talk first about why I showed this video to open things up. It was recorded at the TED Conference, the Technology, Entertainment and Design conference, in early 2008. Roy Gould, who's a Harvard astrophysicist, recorded this comment while Curtis Wong from Microsoft, who invented the WorldWide Telescope, brought it all together and operated it.
You can go download this thing. If you haven’t, I’d encourage you to do so. It’s quite a powerful tool.
Roy talked about how this is really changing the fields of astrophysics and astronomy. It grew out of the work of Jim Gray, one of the pioneers of database technology and a fellow at Microsoft, who died a couple of years ago in a sailing accident.
Jim had set out a few years before that to accumulate all of the imagery available from all of the platforms observing the sky, the Sloan Digital Sky Survey, all the Hubble Space Telescope activity, and he built a database system that brought it all together; that in itself started to transform the field.
Later, Curtis took the concepts you see when people use Virtual Earth or Google Earth, where you can look at the earth, at streets and aerial photography, and asked: what happens if you turn the telescope around, so to speak, and do the same thing with the heavens? So he set out and built this WorldWide Telescope, layered on top of Jim's work.
When people see this, they have a very powerful reaction, but almost universally people who see it say, wow, that’s the kind of thing I want my kids to have, and it is such a powerful learning tool.
But it's interesting in that this is not just a video of what happened there. We brought the same tool and put it here on this Surface computer, which we announced about two years ago. It's a large-scale, direct-manipulation, multi-touch computing system. Today they're still a little expensive, but ultimately we expect it will be normal for people to have one as a piece of furniture at home, to play games and enjoy other types of entertainment around it.
But today I can basically just use my finger and drag around this interface. If I want to zoom in on this, I can do that quite easily. I could basically go anywhere in the sky and see all the imagery. I can just click buttons and see the x-ray, the infrared, all the different types of imagery that have ever been captured about this, and it’s a tremendously powerful tool, and it’s really quite natural to use it.
When you think about it, before this existed, less than a year ago, there was not an astrophysicist or astronomer in the world who could see all this; it just wasn't possible. It didn't matter whether you were an academic who wanted to pursue this further, or you just wanted to learn, or you had an interest in astronomy; this kind of thing wasn't possible. So when we bring these kinds of techniques to bear, it has a very powerful effect.
But one of the things that Curtis built into this — his background was in computer multimedia techniques — is the ability for people who explore it to share their explorations, to make it a vehicle for collaborative computing.
So, during the final testing of this he had a friend who lived in Canada, and he said, I know you have an interest in this; I’d like you to look at the telescope and tell me what you think.
So, he did, but he also shared it with his kid. I’m going to show you a guided tour of a particular part of the sky called the Ring Nebula. This is a multimedia tour, and it was created by Benjamin, and you’ll note, as he starts out, that Benjamin is just six years old.
(Video segment.)
CRAIG MUNDIE: When you see what happens when you put these kinds of tools in the hands of even young children, it’s just incredibly powerful.
I think back: I'm 59 now, so 50-odd years ago, when I was six, people thought, oh, this kid is pretty good, he goes around disassembling his mother's fans and mixers and other things. But today six-year-olds can look at an incredible array of information and deal with it in ways that were simply not possible before. I think that holds great promise for changing the model of education and, in fact, for using tools like this to extend the benefits of education to a much greater part of the global population.
Some of the roots of our global societal problems stem from the fact that we have a huge disparity between the haves and the have-nots. There are about a billion and a half people who have, and about 5 billion people who don't. The question is, how are we ever going to scale this up? You can't take rich-world concepts of health care or education and think we're going to scale them to the rest of the planet. So we have to find some way for people to bootstrap themselves up into solutions, and I think that information technology, coupled to the mass manufacturing of electronic systems (consumer electronics, cell phones, televisions, DVD players), holds somewhere in it the basis of computer-mediated support for self-education and do-it-yourself solutions to some of these challenges.
It will be a while before we get a robot that will set your arm in a village in rural India, but it may not be that long before we could put some type of robot there that would guide you through non-acute medical care, compensating for the fact that we as a community simply don't have enough highly trained people to put out there, and that the cost would be prohibitive.
So, let me switch now to a tour of what it might be like to sit in your chair a few years in the future.
So, this is just a standard Tablet PC that you can buy today, and I’m going to use it to give you an idea of what it might be like to study in the future, where the computer systems, both the ones you own and hold and the ones that are available to you locally and through the cloud, come together to form a computing environment that supports your basic quest for education.
So, in this example I'm looking at a presentation that says, okay, I'm studying a number of things, anatomy and literature, and let's say I want to go in and look at the anatomy. Much as we can composite things together on maps, or composite them together in the sky, what if we could composite a lot of information sources together to give you a different way of thinking about learning?
So, in this case let’s start with a skeleton, and it shows the scale and what’s on there, and I can say I really want to bring together the resources of the global knowledge base with what I can process locally.
So, I'll say show me what the circulatory system looks like, and it shifts to show that; then, what does the muscle system look like, what does the nervous system look like.
So, to some extent we have more graphical ways of both presenting and then navigating within a much broader amount of knowledge than you could ever have hoped to carry around in your own brain or in a few textbooks that might have been specified for a course in the past.
So, we can annotate these things, in this case with these little circles, to say in each of these areas there may be more information that you’d care to explore. So, let’s just pick this one and go look at it.
So, it says here I’ve extracted from “Gray’s Anatomy” some specific information you might find useful or as a launching point for further study of this. So, it might give you a citation from the book, but it also says I can give you this in a 3D view. So, I’ll click on that and say I want to understand this, but let’s say I want to understand it as it relates to the brain. So, I’ll drag this box up to the brain.
So, now I’ve got a 3D model. I can manipulate the model by direct manipulation. So, I can, for example, rotate it around. I can say I really was interested in the brain, so let’s go up and zoom in on the brain.
As that happens, we’re essentially assuming that the computer is going to go out and collect a lot more information about this.
So, for example, it says, all right, the nervous system, the brain, is basically built on these synapses, and if you want to understand how they work, let's look at that. So I could call up details of synaptic firing and say, I don't really understand how that works, show me an animation of what happens. It says, okay, this is actually what happens: the two cells are separated in space, and there's essentially a chemical that moves between them.
So, more and more, the ability to do what you've been trained to do today, search and follow a link, becomes something contextually mediated by the computer rather than left as a task or an exercise for the student.
To some extent other things that you’re doing today like instant messaging and social networking I think will be built into these kinds of environments. So, for example, I might be finished with that and you might have a classmate or a colleague that says, well, look, I noticed that you were examining this, I’ve actually been studying it too, I’ve been sketching out my understanding of it, here I’ll share my drawing with you about this.
So, now you get user-generated content, not just YouTube or other things you might watch for entertainment; you might be able to really use it as the basis of collaboration where you have some real purpose or intent.
And in each of these cases our vision is that there are a lot of subtle cues that help you know where to go. In this case the things tinged in green are thought to be highly relevant, not because you predetermined it, but because some type of semantic analysis or contextual awareness was applied as each piece of information was presented.
The little boxes at the bottom are just abstractions, which I’ll show you in a minute, that are also color coded as a function of relevancy, and in this case it’s against the context of what you’re studying and who you’re studying it with, not just some broad collaborative filtering process that might happen in a generic search engine environment.
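To make the relevance-tinting idea concrete, here is a minimal sketch of how such cues might be computed; the bag-of-words cosine scoring, the thresholds, and the green/amber/gray tiers are illustrative assumptions, not the actual system shown in the demo.

```python
# Score each candidate item against the user's current study context and
# map the score to a color tier, as in the "tinged in green" cues above.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def tint(item_text: str, context_text: str) -> str:
    """Map contextual relevance to a display cue (thresholds are arbitrary)."""
    score = cosine(Counter(item_text.lower().split()),
                   Counter(context_text.lower().split()))
    if score > 0.5:
        return "green"   # highly relevant to what you're studying now
    if score > 0.2:
        return "amber"   # possibly relevant; worth a glance
    return "gray"        # likely off-topic, but still explorable

study_context = "brain synapse neurotransmitter nervous system anatomy"
for item in ["brain synapse neurotransmitter firing",
             "caffeine and the brain",
             "ring nebula guided tour"]:
    print(tint(item, study_context), "-", item)
```

The point of the sketch is that the tint comes from the live study context, not from a fixed ranking: change the context and the same items re-color themselves.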
Another thing I think is going to happen is something I call speculative computing. Today, all the computers we use are engineered primarily to respond to you, and to do it as quickly as possible. So they just sit there, almost all the time, waiting for you to touch the mouse or keyboard, and then they say, oh, I'm supposed to do something.
In a sense we waste a lot of the computing capacity. If you took any notebook you use today and looked at its actual utilization over a period of a week, it would probably be in the low single-digit percentages, because we engineered it to be responsive but not predictive. So I think in the future we're going to be able to use both local and cloud-based computing resources to speculate about things that might be useful.
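Here is a minimal sketch of what speculative computing could look like in code: a background thread spends idle capacity prefetching material the user is likely to ask for next. The `related` table, the fetch delay, and the cache are all invented for illustration; a real predictor would learn from history and context.

```python
# A background "speculator" uses idle time to prefetch likely-next topics,
# so they are already local by the time the student asks for them.
import threading, queue, time

related = {  # hypothetical model of what tends to follow what
    "brain": ["synapse", "neurotransmitter", "caffeine and the brain"],
    "synapse": ["synaptic firing animation", "neurotransmitter"],
}
cache: dict = {}
work = queue.Queue()

def fetch(topic: str) -> str:
    time.sleep(0.1)               # stand-in for a slow cloud fetch
    return f"<content for {topic}>"

def speculator() -> None:
    while True:
        topic = work.get()
        for guess in related.get(topic, []):
            if guess not in cache:           # speculate during idle time
                cache[guess] = fetch(guess)

threading.Thread(target=speculator, daemon=True).start()

def open_topic(topic: str) -> str:
    work.put(topic)                          # kick off speculation
    return cache.get(topic) or fetch(topic)  # instant if already prefetched

open_topic("brain"); time.sleep(0.5)
print("prefetched:", sorted(cache))          # related topics ready before asked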
In my job at Microsoft I have lots of people who help me plan and execute things, and they don’t wait for me to tell them, oh, you must do this today and you must do that tomorrow. We have a history together. They understand the tasks that we’re trying to get done. They speculate, they do things. And I think in a way computers should do that for everybody, not just people who happen to have the luxury of having a staff that help them do that as part of their job.
So, I think that the world is going to move quite a distance from what you know today as point and click or search into things that really help you solve problems and, in fact, may take a shot at solving them.
In this environment you can go out and say, well, what should I be looking at? What you might see in the future is a long list of these things, all organized, prioritized, and color coded with respect to relevancy, and you can just pick the ones you might be interested in.
Or you might say, I want to filter this. So, for example, I want to look at the things that might be interesting to my study group, so I’d pick that. So, now you can combine the activities or interests of a particular group and narrow down the range of these things that might be interesting.
So, the idea is not to always focus your mind in a particular way, but to at least help eliminate the things that are not likely to be of great interest or import, while still allowing exploration.
So, for example, here the green ones may be directly on point with respect to your subject matter, but you might be interested in something maybe more fun like caffeine and your brain, so I’ll pick that one.
And it says, you know, hey, this is kind of interesting, you might be interested in this little simulation. If you don't understand what part of the brain is affected by these types of stimulants, just look at this simulation.
So, you call that up, and it goes out and shows you in this 3D model where that activity takes place. Here they made it into a little game: you can move the slider up and down, from a cup of coffee up to a Jolt or some energy drink, and understand what part of the brain is affected and the degree to which it becomes involved as a function of how much you take. It helps people get a visceral understanding of things that are really hard to put together any other way.
So, here again I could basically say I’m interested in other things related to this, I could scan back and forth on these things, and maybe pick the one that I think is the most relevant.
So, here we might go to another topic. Of course, one of the things you spend a lot of time doing is this: somebody gives you a citation, and you have to analyze it and extract the parts that might be most interesting. Well, why doesn't your computer help you a bit more with that?
So, in this case let's assume that once we've selected a text, the computer goes out behind your back, does a semantic analysis of the text, and creates a map of it. It tries to correlate that with the things you've been studying, your interests, and your recent browsing patterns, and it says, okay, this NIH stroke scale might be the most interesting to you. Let's say I pick that. Then I can go back and say I want to look at the text, and now I can see the whole document in a sort of outline form.
But here the computer has actually, based on the semantic analysis, figured out which parts of the text correlate most closely with the things you've been asking about. It has essentially scanned the document for you and highlighted it so you can find the most relevant parts. So, let's take this green one in the middle and look at it.
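As an illustration of that scanning-and-highlighting step, here is a small sketch that splits a citation into sentences, scores each against the student's current interests, and flags the top matches. The keyword-overlap scoring is a stand-in for the semantic analysis described above, and the sample document is invented.

```python
# Rank the sentences of a document by overlap with the student's interests
# and mark the top-scoring ones for highlighting.
import re

def highlight(document: str, interests: set, top_k: int = 2) -> list:
    sentences = re.split(r"(?<=[.!?])\s+", document)
    scored = sorted(
        sentences,
        key=lambda s: len(interests & set(re.findall(r"[a-z]+", s.lower()))),
        reverse=True)
    chosen = set(scored[:top_k])
    return [f">>> {s}" if s in chosen else s for s in sentences]

doc = ("The NIH stroke scale grades neurological deficit. "
       "It is scored at the bedside in minutes. "
       "Items cover consciousness, vision, motor function and language. "
       "Funding sources are listed in the appendix.")
for line in highlight(doc, {"stroke", "scale", "motor", "language"}):
    print(line)   # ">>>" marks the passages the reader should see first
```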
As you read it, you might be, for example, a student whose native language isn't English, and you'd say, well, can I read this in another language? The answer is sure: we'll just do a real-time machine translation, or bring up the original document, which in this case was written in Japanese. And if I want to change languages, I can do that. So I think our ability to provide real-time, in-context machine translation, for the written word and ultimately the spoken word, will get better and better.
So, let’s say I decide, yeah, this actually is of interest to me and my study group, so I really want to take some notes about this. So, I’ll just drag it down here and drop it in my notebook.
So, what is your notebook? Well, basically this thing is your notebook. A couple of years ago we introduced a product called OneNote, and many people are starting to use it now. The idea was to give you a freeform way of recording anything of interest to you: traditional text, diagrams, handwriting, sketches, videos; you can just drop them all in. Behind your back it tries to index them and make them searchable, so it produces a searchable record of all the things you have found interesting.
Well, today you're probably among the first generation to have that kind of capability available. Of course, these notebooks don't run out at the end of the quarter and don't have a fixed number of pages, and in fact the storage capacities of today's systems are so large that there's no reason pretty much everything you ever wanted to remember can't be recorded in them and remain discoverable.
So, let's just say, since this is the future, that someone growing up today, like Benjamin, will have a notebook like this. Maybe in the future he'll go back to the 8th grade and say: when I was in the 8th grade, studying biology for the first time, somebody told me about the spinal cord and the brain; what did I know then, or what should I remember from what I was exposed to then?
So, the ability to move backwards and forwards in time, and to bring forward all of the insights or recollections that you had I think will be broadly a computer assisted function in the not-too-distant future.
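One way to picture such a lifelong notebook is as a timestamped, keyword-indexed store that supports queries like "what did I note about the spinal cord back in the 8th grade?". The sketch below is an illustration of the idea under those assumptions, not OneNote's actual storage design.

```python
# A lifelong notebook: every note is timestamped and indexed by keyword,
# so a term query can be combined with a date range to "move back in time."
from dataclasses import dataclass, field
from datetime import date
from collections import defaultdict
import re

@dataclass
class Notebook:
    notes: list = field(default_factory=list)
    index: dict = field(default_factory=lambda: defaultdict(set))

    def add(self, when: date, text: str) -> None:
        self.notes.append((when, text))
        for word in re.findall(r"[a-z]+", text.lower()):
            self.index[word].add(len(self.notes) - 1)   # word -> note ids

    def recall(self, term: str, start: date, end: date) -> list:
        return [text for i in sorted(self.index.get(term.lower(), set()))
                for when, text in [self.notes[i]] if start <= when <= end]

nb = Notebook()
nb.add(date(2001, 10, 3), "8th grade biology: the spinal cord carries signals")
nb.add(date(2008, 10, 8), "anatomy lecture: synapses and neurotransmitters")
print(nb.recall("spinal", date(2001, 1, 1), date(2002, 1, 1)))
```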
But let’s go back to the present. So, I’ll click here and move us out to today, and say, all right, here’s my timeline calendar for today. I’m taking three courses. Oh, by the way, there’s no reason I can’t actually have the computer understand the relationship between the courses I’m taking, which might be strongly coupled or not so strongly coupled.
So, all these notions of networking, contextual analysis, and the semantic linkages between things can be highlighted for you as an aid to education.
So, let's say this thing tells me I've got a test coming and we've got a group project, so let's talk about how groups might actually collaborate in a technically more advanced education environment.
So, I'll come back to the table, the Surface. What we've done is affix to the back of this device, and a number of these other devices, an optical tag, sort of like the UPC barcode you see in the grocery store, but this one is easily machine readable and contains a lot more information than a traditional barcode.
So, this Surface computer is not only a system that projects information up for you to read; it also has infrared and other cameras that allow it to see things on or near the surface, and to base the interaction on that.
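Conceptually, the interaction loop is simple: the vision system decodes the tag on the bottom of an object and looks up what content to spill out for it. The tag values and registry in this sketch are hypothetical; Surface's real vision pipeline is far more involved.

```python
# Dispatch on a decoded object tag: when a tagged device lands on the table,
# look up its identity and the content to display around it.
content_for_tag = {
    0x4A21: ("tablet", ["anatomy notes", "3D brain model"]),
    0x7F03: ("phone",  ["photos from the biology department"]),
    0x1188: ("brain model", ["cortex overlay", "synapse animation"]),
}

def on_tag_detected(tag_id: int, x: float, y: float) -> None:
    """Called by the vision system when a tagged object lands on the table."""
    device, items = content_for_tag.get(tag_id, ("unknown object", []))
    print(f"{device} placed at ({x:.2f}, {y:.2f}); spilling out: {items}")

on_tag_detected(0x4A21, 0.30, 0.62)   # set the tablet down
on_tag_detected(0x7F03, 0.75, 0.40)   # set the phone down
```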
So, in this case I’m just going to take this thing and put it down on the surface, so it knows, okay, now there’s this computer near me, and it spills out stuff that might be relevant for this task. So, I can drag these things around and do whatever I might want.
Well, let's say that I, or perhaps a colleague, have been collecting things on our cell phones as we wander around. The cell phone has essentially the same capability, so if I put it here, I can get it to bring out the collection of things that were put on it.
Maybe somebody went to the biology department and got the old brain model. This gives you a physical model of what the brain looks like, but it also turns out to be a way of navigating: if I put it here and adjust it, I again get information relevant to that study, and I can pull those things out and look at them.
So, we can sit here and talk or examine these things together in the library or a dorm room or a coffee shop, and the ability to essentially simply bring these together, allow people to interact and accumulate them should be a quite interesting thing.
So, if you want to publish these for the group to review, we’ve just got this little folder up here in the corner, and I can just drag things up there and drop them in that folder, and that basically allows them to be published for the group to work on collectively.
So, I think that there’s just a tremendous opportunity to come up with novel models of collaboration that really make it very natural for people. So, instead of being sequestered in a room or behind a single computer, having this more social experience be part of how we would collaborate is important.
But as I said earlier, it's a challenge to think about how we would get all these fancy technologies into the hands of 5 billion more people. We clearly have to look for aggressive strategies that use low-cost, consumer-electronics-type devices to make that possible.
And, in fact, even in a rich-world environment I'd really like to carry just my cell phone around with me, but I don't want to be constrained to doing everything on a two-, three-, or even four-inch screen.
So, one of the things we've been working on is the idea of making screens that would serve as adjunct displays for your cell phone or your laptop, so that if you wanted a surface-computing experience or a collaboration environment on the fly, you could have it.
So, I brought a demonstration of an actual working flat-panel display. This one is less than a millimeter thick, and it bends. We took some parts of the demo I was giving and put them on this display; there's a little microprocessor strapped on the back. Our goal in the not-too-distant future is to create either a cable that will hook up to a cell phone or potentially an ultra-wideband radio interface. When you turn this thing on, it has just a flexible battery on the back and this kind of color electronic display on the front. You can put it in your backpack; you might even be able to fold it in half or roll it up. And if you want to have a meeting like the one we were having here on the Surface, maybe in the future you can just stop anywhere, put this thing down, and see it.
It’s a little out of focus, I apologize, but you get the idea that you’ll actually see the thing change from every few seconds to a different part of that display of the early part of my presentation.
So, the computers are becoming so powerful that all of these client devices, whether they’re as small as a cell phone, as big as a tablet or as big as a table, are going to have an incredible amount of computational capability.
As such, it's pretty clear that the future platform will be a composite platform, where one part is this array of intelligent clients and the applications that run local to you, and the other is the cloud. In fact, I think the cloud will evolve into a public cloud and some private clouds, and the future of programming, if you will, will require that people be able to compose large-scale, reliable applications, perhaps of the kind I've prototyped here, that seamlessly bring all these things together.
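A minimal sketch of that composite client-plus-cloud pattern might look like this: satisfy a request locally when the device can handle it, and hand the heavy lifting to a cloud service otherwise. The endpoint URL and the cost threshold are placeholders invented for the example.

```python
# Route work between the intelligent client and the cloud based on a
# (hypothetical) estimate of how expensive the task is.
import urllib.request

LOCAL_LIMIT = 1_000_000                      # hypothetical ceiling on local work
CLOUD_URL = "https://example.com/compute"    # placeholder endpoint

def run_locally(task: str) -> str:
    return f"local result for {task!r}"

def run_in_cloud(task: str) -> str:
    req = urllib.request.Request(CLOUD_URL, data=task.encode())
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def compute(task: str, cost: int) -> str:
    if cost <= LOCAL_LIMIT:      # cheap enough: stay on the client
        return run_locally(task)
    return run_in_cloud(task)    # heavy lifting goes to the cloud

print(compute("render one brain slice", cost=50_000))
```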
Today, if you want to search, you go to the cloud, you get your browser out. You want to go do something in OneNote, you get your laptop and you work on that thing.
Clearly we need to do a better job of bringing these things together. And if we can do this in a way where people move beyond the traditional type, point, and click model of interaction to what we call a natural user interface, then interacting with computers will really be no different from interacting with your friends.
I had a meeting this morning with some of the faculty, and I showed them another demo I didn't have time to show here, a thing we call the robot receptionist. It's a prototype 3D projected avatar that talks to people in our lobby, and her job is essentially to arrange shuttles. If you want to go from one campus or one building to the next, you just walk up and talk to this robot.
It’s a completely natural interaction. I mean, it distinguishes who people are and what they might want to do by what they’re wearing, so visitors look like they’re dressed more formally, people who work there every day are dressed casually, and it can discern these things and it can interact with you in a very natural way. It has eye gaze. It knows how to interrupt you in a social way.
I think when you realize the complexity involved in bringing that class of application forward, and the power of doing so, you see that it has unbelievable potential.
So, these are the kinds of things we do that go beyond the traditional "hey, we do Word and Excel and Windows," and we have an incredible array of these kinds of problems to work on.
So, I hope that as you think about your career after Michigan, you don't just think about Microsoft. Think about the fact that the world is not constant: the stuff you're studying here today will not stay freeze-dried; it will keep evolving into the technological underpinnings of the things you will help us solve in the future. If you're interested in these challenges, whether energy, the environment, health care, or education, in the United States or around the world, you really have to acclimate yourself to the idea that these are the fundamental technologies that underpin all of our ability to solve these problems.
So, I encourage you to think deeply about that as you make your curriculum choices, and I look forward to seeing you out there in the future.
Thank you.