Craig Mundie: College Tour – University of California, Berkeley

Remarks by Craig Mundie, Chief Research and Strategy Officer, Microsoft Corp.
University of California, Berkeley
Berkeley, Calif.
Oct. 9, 2008

CRAIG MUNDIE: Thanks. You know, I want to tell you a little story this afternoon. First, I have just a great job, and I love to have the opportunity to come and talk to people in the universities, both the faculty and the students, and I get a chance to do that on a global basis.

For the last few years I’ve spent a lot of my time in places like China, India, and Russia, and I’ve actually had a lot of opportunity to do this there. For many years Bill Gates also relished that opportunity, and specifically used to do it in the United States.

On July 1st Bill retired from Microsoft daily operations, and two years ago we split his job in half. I got half and Ray Ozzie got half.

In some ways I feel I got the really good half, because I got the part that manages Microsoft’s global research operations, and I added that to the other things that I’ve been doing for a decade or more now in sort of the geopolitics of Microsoft, and also a lot of the startup businesses that the company has.

I went there in 1992 to do startups inside the company, and in particular my job was to figure out what would happen when we put computing in almost everything.

So, in 1992 Bill Gates and Nathan Myhrvold believed that we would finally get to the point where we would, in fact, have software and computing in most devices, and I was charged with trying to figure out what non-PC computing meant.

And, in fact, we’ve made a lot of progress. It wasn’t that hard in the beginning to figure out what devices would be interesting: game consoles, watches, cars, telephones. But now, actually 15 years later, we’re really just starting to see all of this come to fruition. So, things that people didn’t think much about in 1992 are now things that all of you really can’t imagine life without.

And, of course, that’s been true of many things that become infrastructure in our society and change it along the way, and I think Microsoft is privileged, by virtue of its efforts and in some sense being in the right place at the right time, to have been a company that changed the world.

I now have a role which combines this global look at the advances in computer science with a look at the societal challenges that we have in many ways around the planet, and trying to think about where the intersection of those things is, and in particular where that intersection would sustain the company’s business on a going-forward basis, and so we focus a lot of energy on that.

Today, I have a number of new businesses that are being incubated. The three largest of them are in the areas of health care, education, and building products for the emerging middle class and their application to the welfare class on a global basis.

So, the company is really branching out, and I wanted to use this opportunity to talk to you about these things, to share with you the kinds of things that Microsoft does, to be sure that at least you have one man’s view of what Microsoft is that is different from the stereotype that many people have of the company, where their experience with us or our products is limited to using personal computers or running Word, Excel, or PowerPoint. Today, we have a very diverse company, a very diverse product line, and a very diverse research agenda, and I want to try to share that with you.

Part of the way I’ve done that ever since I came to Microsoft has been to build strategic prototypes. In the vein of a picture being worth a thousand words, a prototype that allows you to suspend disbelief a little bit and dream with us about what the future might be like is a powerful thing, and I have a group of people who come together to assemble those. I’ve put one of them together for you today.

I wanted to pick a topic that would allow me to thread together many of the current and future topics in computer science, but present them to you in a context that would be not just the science itself but an application of it, and so I chose education.

It’s an area, of course, that you’re all involved in one way or another as either a teacher or a learner or both, and yet I think it’s an area where the planet has a big problem. I mean, today we have only about 1.5 billion people who really have any access to these advanced information technologies, another 5 billion people who really would love to have access but we haven’t found any effective way to give it to them.

I think doing that is going to be very important, because I don’t think that there is any scalable way to take the rich world concepts of health care, education or improved productivity and translate them into that environment. Even in the United States, the richest country in the world today, although getting poorer — (laughter) — we have a tough time finding a way to finance universal health care just for Americans. So, the idea that we would find a way to finance it for 5 billion more people, even if we had all the other rich countries participating, is just a practical impossibility.

So, I think that the root of the solutions in many of these areas is, in fact, to couple high volume consumer electronics-type manufacturing with much more sophisticated computers and software, and we’re really at a threshold where I think that may be possible in the relatively near future.

So, let me start first by saying why I showed you that video. This was taped at the TED Conference earlier this year, where we brought out the WorldWide Telescope for public review and access.

Many of you probably knew Jim Gray or knew of Jim Gray. A few years ago, Jim started a project at Microsoft to go to the world’s astronomers and gather up all their data, all the images they had, no matter how they got them, and said, I’m going to design a database, a repository, and we’re going to put them all in one place. And that was incredibly successful, and even in just that one form it had a transformative effect on the astronomy and astrophysics community.

But even there it was difficult for people to just comfortably interact with it, and so Curtis Wong at Microsoft Research set out to take the ideas that you could see in something like Virtual Earth or Google Earth or other things where people can look down on the earth and say I want to see the roads or I want to see the aerial imagery, and he said, well, what if I turned it around and I want to give somebody the same ease in terms of being able to look at the heavens. So, he built the WorldWide Telescope, and he put it on top of that database that Jim Gray and the world’s astronomers put together.

So, this video showed what it was like when we introduced it, but what I also wanted to show you is a different way of looking at it. That is not the video anymore; that’s essentially the same application running on this Surface Computer.

A couple of years ago, we announced, and are now in volume manufacturing of, these tables that are essentially a computer with a multi-touch, direct manipulation interface. So, now the WorldWide Telescope is here, and I can do things like zoom in, move things around, and I have a quite natural model of interaction.

I have all these filters I can apply. I can look at things in the infrared, in the X-ray range. And all of these things, you just touch on them, and it becomes something that’s easy to discover and easy for people to manipulate.

As Roy indicated in his comments, one of the really powerful things about this is that it allows people not only to explore but to share what they discover with other people.

It turns out that Curtis Wong, who did this, has a background in multimedia design, and he had done some of the early CD-ROM title work at Microsoft, and he said, well, I want to incorporate that together, so it’s not just for exploring; I’m really creating an authoring tool at the same time.

So, I want to show you a guided tour, and this one wasn’t done by Roy Gould; it was done by a little guy named Benjamin, the son of a friend of Curtis’s, who got hold of this when it was in the beta test period. Benjamin is six years old, and I want to show you something that is really moving to me, which is what a six-year-old can do when you put these kinds of tools in his hands. So, there’s Benjamin.

(Video segment.)

CRAIG MUNDIE: When I see the results of giving young people access to these technologies, it makes you believe that, in fact, we could make great progress by using computer systems as a way to supplement the traditional models of education. And if we can couple that with the advanced computing technologies that we have in order to give people an interface that brings these things together that promotes more do-it-yourself solutions to health care, to personalized training, to education, to productivity in rural environments as well as rich environments, then I think we really have an opportunity to create substantial change.

The group I mentioned that does prototyping put together a demonstration. This is just a standard tablet PC of the current vintage, but what we built onto it is a prototype application, which comes back here hopefully.

Tim may have to come out and tell me where he put it.

So, in this application what we envision is an environment where the computer is really a partner to you in the educational process. Maybe I’m studying several classes: anatomy, literature, et cetera. So, let’s presume that I can use this as essentially a way to explore. Just like Benjamin was exploring the sky, I can explore sort of any subject.

So, let’s say I’m taking anatomy, and I want to learn about it. I can now composite things; instead of the different X-ray images of nebulae, I essentially composite different elements of human physiology. So, I can look at the muscle system, and I can look at the nervous system.

To some extent through this interface all of the world’s knowledge may, in fact, be available. So, I can start to annotate these things, like the little dots can be an indication that there’s more data there. It’s sort of a visual hyperlink, if you will.

But in this case what we don’t want to do is just go off and wander through space; we want the computer to help us more.

So, if I touch on that, it says, okay, here’s sort of the new version of Gray’s Anatomy, a classic textbook, but it can be put into context. It can have animations and models built into it, so it’s not just a traditional ink-on-paper kind of presentation.

The bottom may be a color-coded scale where each of these things, as I’ll show you in a little while, could be a different type of link, but here they’re not just any links; they’re links that may have been ordered in terms of the contextual relevance they have to the subject that you’re studying or the people that you’re interacting with.

So, for example, I might drill in and say, show me the 3D model of this environment, and from that I can essentially say I really want to go up and look at the head and understand more about the brain. I can take the model and rotate it around. It’s a manipulatable 3D environment.

I can go up here and sort of drill down on the skull and into the brain, and essentially all the time I do this the computer can be doing things more proactively to go out and not just present me more information about the question I might have asked directly, but to prepare to answer more questions or to present more information that might be useful to me.

So, here I was asking about the brain, and it says, hey, you know, it all boils down to these synapses, and I say, well, tell me about those. It says, okay, I’ll teach you about the details of synaptic firing. In fact, it’s a biochemical process, and I could look at an animation that shows that, in fact, there’s really a passage of material from one synapse to the next.

But here many of the other techniques that I think will find direct application are things that you’re already experimenting with in a social sense every day, things like social networking, instant messaging. So, in fact, maybe other people in your class know that you’re present or that you actually now stumbled into a particular citation that they were reviewing, and he says, hey — Patrick says, “I studied this, let me show you what I did. I sketched this thing out, I have a schematic of it. Maybe we could look at this together over the weekend.”

Each of these things, as you’ll see as we go on, is color coded in this simple sense: green, yellow, and red. The idea is to find some simple way of having the computer help you more by analyzing these things and color coding them according to their relevance to the subject at hand.

So, one of the things that I think will happen is something I call speculative computing. Today, most computation and most applications have been designed and engineered such that they just respond to your action. The machine waits for you to do something on your keyboard or mouse, and then it goes off and retrieves the next thing and brings it back for your review. But most of the determination of whether that was good or bad, and how to proceed next, is really unaided by the computer, and to some extent you don’t know where you’re going to go until you’ve taken the first step.

But I think in the future it may be that much more speculative processing will be done, and it can go out and collect a lot of resources from around the Internet, analyze them, and it has the context of what you’re currently doing, what you’ve been doing, perhaps what the subject matter is.

And so at the bottom of this there’s essentially a lot of potential citations, which are in a sense links to other information that you might find useful.

And you could say, well, that may be too big, or I want to see this with more context, so instead of all links I’ll say, show me these as they relate to the study group I’m in that’s working on this topic. So, it reduces it to a smaller set. In essence it’s doing queries on your behalf, but where the query being formulated is a function of information you don’t expressly provide but that is determined by the environment in which you’re operating.
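To make that concrete, here is a minimal sketch of what environment-driven query formulation could look like, assuming the context is simply whatever course, topic, and study group the student is currently in. The class, the field names, and the filtering rule are hypothetical illustrations, not the actual prototype shown in the demo.

```python
# Hypothetical sketch: the query sent to a search back end is assembled from
# context the user never typed (current topic, study group, recent documents),
# not just from the explicit terms.

from dataclasses import dataclass, field


@dataclass
class StudyContext:
    course: str                                   # e.g. "Anatomy"
    topic: str                                    # what is currently on screen
    study_group: list = field(default_factory=list)
    recent_documents: list = field(default_factory=list)


def formulate_query(explicit_terms, ctx, scope="all"):
    """Build a search request augmented with implicit context.

    scope="group" narrows results to material the study group has touched,
    mirroring the "show me these as they relate to my study group" step.
    """
    query = {
        "terms": list(explicit_terms) + [ctx.topic],
        "boost": ctx.recent_documents,            # rank material tied to recent work higher
    }
    if scope == "group":
        query["filter"] = {"shared_with": ctx.study_group}
    return query


# The user typed only "synapse"; the course, group, and history came from context.
ctx = StudyContext(course="Anatomy", topic="brain",
                   study_group=["Patrick", "Ana"],
                   recent_documents=["grays_anatomy_ch7", "nih_stroke_scale"])
print(formulate_query(["synapse"], ctx, scope="group"))
```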

So, I could pick green things, which may be very directly on point of the subject in question, or I could pick some red things to explore or to do something that might be related but perhaps more fun.

So, I’ll ask this one about caffeine in your brain. It says, okay, based on what you ingest, in terms of how much caffeine, we can now see through this fMRI capability how it affects the brain.

So, I’ll say, okay, well, show me this simulation of that. So, it goes back to the model and so now there’s a simulation. In fact, the guy who did this gave me a little sliding scale so I can adjust it and understand how much of the brain and what level might be involved, and what lobe of the brain is actually affected by this, and you can dial it in and perhaps get some visceral sense of this that might be otherwise hard to understand.

If you look at these boxes, I can essentially look at each of these things and it’s like rolling over links on your desktop, but each of them essentially gives me a way to pick something else.

So, let’s link to this NIH study. It talks about hope through research. You could perhaps say, okay, this may be an interesting paper, but how do I know? Is there a way for the computer, again, to help me determine a better way to use this capability?

So, let’s assume that the entire text of this document is ingested, a semantic analysis is done, we create a graph of that, and it shows where each of the components of this document relates to the citations and references that you’ve done before.

So, you might take the NIH stroke scale and essentially go back and say, okay, I understand that. Let’s extract the useful information. So, I can click on this and have a way of navigating throughout the entire text.

In this case, much as you would do if you read the text, you might highlight the parts that you thought were interesting. Well, here the computer did it for you. It analyzed it, highlighted it, color coded it red, green, and yellow, perhaps as a function of which of these things were most directly related to the question at hand, and if I want to look at one of those, I could zoom in on it.
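That highlighting step lends itself to a small sketch as well: score each passage of the ingested text against the question at hand and bucket it into green, yellow, or red. The word-overlap similarity and the thresholds below are stand-ins for whatever semantic analysis the real system does; they are only meant to show the shape of the idea.

```python
# Illustrative sketch: score passages of an ingested document against the
# current topic and color code them green/yellow/red. The bag-of-words cosine
# similarity and the thresholds are assumptions, not the demo's actual analysis.

import math
import re
from collections import Counter


def tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))


def similarity(a, b):
    va, vb = tokens(a), tokens(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


def color_code(passages, topic):
    """Return (color, passage) pairs: green = on point, yellow = related, red = tangential."""
    coded = []
    for p in passages:
        score = similarity(p, topic)
        color = "green" if score > 0.25 else "yellow" if score > 0.15 else "red"
        coded.append((color, p))
    return coded


passages = [
    "Synaptic firing is a biochemical process between neurons.",
    "Caffeine intake changes activity in several lobes of the brain.",
    "The NIH stroke scale measures neurological deficit after a stroke.",
]
for color, text in color_code(passages, "biochemical process of synapses in the brain"):
    print(color, "-", text)
```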

It turns out that if I speak a foreign language, or perhaps a citation comes in another language, I can look at it and ask, for example, for it to be translated into another language. So, we think the day is coming when machine translation from one language to the next will be pretty readily available.

And once I’ve done that, I have the ability to just take this and drag it and drop it down here into say my digital notebook.

A few years ago, we started building a product called OneNote. Some of you probably use it. The idea there was to create a scalable notebook that would be able to hold essentially information of any type, so you could put text or sketches or videos or whatever in there, and people are starting to do that today.

And I think certainly by the time your kids grow up and go to school, it’s likely that they’ll have had one of these personal notebooks from a very early age. So, you have a personal record that becomes something you can always fall back on. It gives you essentially the ability to add total recall to your own user-generated content, if you will, throughout your life.

Gordon Bell at Microsoft has been studying this for some time, and he calls it MyLifeBits: if you really can have virtually all the things that you do and read in a repository, is that going to be a useful thing?

So, maybe in the future if you were studying this anatomy now, and you’d say, well, I remember taking my very first course in this back in the 8th grade, and so I’ll just go back in time and say, okay, yeah, they were telling me about that stuff then, how do I recall what I learned and how it relates today.

So, let’s go back to the present. So, we’ll sweep out here, and say, okay, here’s a timeline, which is my calendar. And on this timeline I can see several things. One, let’s just sweep out here a little today, and say, okay, I’ve got an art course, an anatomy course, a literature course. In fact, there are some parts of this that relate one to the next, and so we can do a semantic analysis, if you will, of the relationship of all the things that you’re studying.

But perhaps out there there’s a test upcoming or a group project, and we’ve got to get people together to collaborate on that. So, let me show you a little bit about ways in which I think collaboration may occur.

So, I’ll go back to this tablet. Because this Surface Computer sees, it sees things that you can put down, it sees your hands above it, not just when you touch it, so it’s not like a traditional computer touch surface, and so one of the things it can see is objects. What we’ve done is we’ve taken an optical barcode system, sort of like an advanced version of a UPC code in a marketplace, and we’ve put it on these devices that we’re experimenting with so that the device can be seen by the Surface and it can do something.

So, when I put this one down here, it reads the barcode, understands some context, and it spills out some of the documents, in fact, from the notebook that I’ve been taking, and I can move these things around.
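Here is a rough sketch of the dispatch step being described: the table’s vision system reports an optical tag ID and a position, and the application looks up what content to fan out around that object. The tag values, the registry, and the callback shape are invented for illustration; this is not the actual Surface SDK.

```python
# Hypothetical sketch of object-tag dispatch on a tabletop computer. The vision
# layer reports an optical tag ID plus where it was set down; the application
# maps that ID to content to spill out around the object. Tag IDs and the
# registry contents are made up for illustration.

# Registry mapping optical tag IDs to the content each tagged object carries.
TAG_REGISTRY = {
    0x2A: ["notebook/anatomy_notes", "notebook/art_sketches"],      # my notebook device
    0x3B: ["phone/photos/lab_visit", "phone/clippings/nih_study"],  # a friend's phone
    0x4C: ["models/skull_3d", "models/brain_regions"],              # the anatomy-lab model
}


def on_tag_detected(tag_id, x, y):
    """Called when a tagged object is placed on the table.

    Returns the documents to fan out around the object's (x, y) position.
    """
    docs = TAG_REGISTRY.get(tag_id, [])       # unknown objects contribute nothing
    print(f"tag {tag_id:#x} at ({x:.2f}, {y:.2f}) -> {len(docs)} items")
    return docs


# Example: someone sets the anatomy-lab model down near the center of the table.
on_tag_detected(0x4C, 0.5, 0.5)
```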

Perhaps a friend would come along or I’d take my cell phone where I accumulated some more of these things, and I can put it down here and pull out some more things that might have been there or somebody else can contribute to this.

Somebody in the group might have gone to the anatomy lab and gotten a little model so that people could actually understand some of what they were studying, and I’ll put that down and say, okay, that turns out to just be a hyperlink to another set of information sources. And like all the other things, I can ask “what are those” and incorporate them.

So, this gives me the ability to organize these things, to take them out, and just as we did with the WorldWide Telescope I can zoom in on them, I can look at them, we can collaborate and talk about these.

So, I don’t know whether you’ll find this in Starbucks, the dorm room, or the library, but the ability to collaborate this way, or even to use this as a way to include other people who aren’t physically present, is clearly going to be possible.

So, in this environment we can have a folder where we just want to accumulate things that the study group agrees should be part of the whole project, and we’ll put them in there and they can just hide there and be communicated to other people.

I talked earlier about the importance of being able to think about how do we collapse the cost of these things dramatically through technological changes, and it’s clear that we’re making very, very powerful client devices and even all these cell phones in the future are going to be very, very powerful computers in their own right.

The question is, how do you take something that will be as ubiquitous, inexpensive, and connected as a cell phone, and make it the basis of giving people who can’t afford a whole Surface Computer in their home or office an adjunct to that environment?

So, I just want to show you another thing we’ve been playing with. This is a flexible display. It’s sort of like E Ink that you might see in a Kindle from Amazon, a book reader. This one happens to be a color version of that. It’s less than a millimeter thick. It’s flexible. And what we did right now for this is we just took and put a little driver on the back with a battery.

What we’re actually then prototyping is whether either through a cable or an Ultra-Wideband radio interface we could have your cell phone drive this display. So, you take this thing and you stick it in your backpack or your thing or maybe fold it up and put it in your pocket, I don’t know, but it gives you the ability to take a surface with you anywhere and hook it up to a computer that can drive it or connect it.

So, I’ll just set this down here and you can see it.

We took some of the information from that last presentation — it’s a little out of focus here — and we just have it rotating through this environment, and so some of the things that were in my presentation, we just have different layers of that stored in this little memory on the back and essentially just scrolling through it.

But this kind of display technology we think will ultimately not be all that expensive, and it gives us novel ways of thinking about bringing the computers that we have forward in a way where a lot more people will be able to use them. You might use it at Berkeley in the future to collaborate, or to sit down and type or speak something that you don’t want to try to read on a three- or four-inch screen. In fact, we currently have prototypes of something Bill Gates and I talked about a few years ago called Phone+, which we’ve actually built physical versions of: your little phone goes in a docking station, the docking station has some type of wireless wide-area radio in it, and it has a wire that goes to the back of the television.

So, when you dock it in a rural village, for example, you can connect the cell phone, which everybody is rapidly getting, and all of which will be pretty smart in the next few years, to the televisions that are present even in the poorest environments, and give people some type of potentially inexpensive, community-based connectivity system powered by a cell phone, but with a display technology big enough to do more than what you’d otherwise have to live with in that environment. And whether that is games for learning, whether it’s education more broadly, whether it’s health care, you know, coaching for a pregnant mother, all of these things become much more tractable if you have the ability to compose these things together into working systems.

So, all of these things are very, very exciting to me. We have huge research efforts in many of these underlying areas, I mean the semantic analysis, the machine translation, and, of course, all the core work necessary to deal with the arrival of these many-core heterogeneous processors, and the implied challenges of trying to program all those things.

So, I think we’re entering a time now where this multidisciplinary approach to problem solving is essential. To some extent we have not yet had enough interaction between the historical computer science world and all the other science and engineering departments, and there’s a lot of effort going toward improving that integration.

At the same time, I think that we don’t yet have enough focus on advancing computing itself. We’re staring down the barrel of the biggest changes I think our industry has seen, with the arrival of these high-core-count heterogeneous architectures and our ability to build them at every scale. And yet we don’t have a reliable way to program them; we don’t have a way to take people’s efforts and compose them into high-scale applications that we can reason about.

Not that far behind, we’re starting to see at least glimmers of light at the end of the tunnel in things like quantum computation. We have a big program there, and there’s clearly some interest here in that. We’ve made really breakthrough progress in some of the quantum computation areas in the last few years, and people familiar with the work that I talk to think that we’ve been able at Microsoft to galvanize this effort in a way where the last three years have brought a level of progress that would have been completely unpredictable a few years ago, even by the leaders in the field. The time horizon for at least thinking we might build one of these things may be shrinking from the generally accepted view of maybe 50 years out to now maybe 15 years or even less, and that is essentially a complete game changer.

So, we need more people to work on both halves of this problem, novel ways to get the technology we already have into solving a lot of society’s other problems, and I think we need to put a lot more science back into computer science, and be able to anticipate and participate in the creation of these new models of computation.

So, let me stop there and invite Shankar up with me and we’ll use the rest of the time for a Q&A. Thanks a lot. (Applause.)

MODERATOR: Thank you, Craig. This was really a fascinating glimpse, I’m sure you’ll all agree, into the future of computing, and also into the future of computing for the 5 billion people on the planet who need access to these technologies for better health care, better education, and I’m very glad you made these points.

Before I throw it open, I’d like to just do one really pleasant thing. I’d like to thank Microsoft and Craig for a few of the adventures they have led us on. So, first I’ll start with Microsoft as a whole. Microsoft and Jim Gray had a huge role in helping us put together CITRIS, the Center for Information Technology Research in the Interest of Society, and then later Microsoft helped us kick off synthetic biology through the Gates Foundation. Those were two big things that Microsoft did for us, and now I want to come back to Craig.

Craig was here three years ago, and at that time he challenged us to really think about grand challenge problems in computational science and engineering. This was a wonderful sort of stimulating visit then, and after that we were just sort of taking stock of what happened as a result of that set of grand challenges.

Some things that came out of it I think we thought about then, the Berkeley Water Center, which was very much I think in the model of the space telescope, which you saw, and that I think has very much been driven again through Microsoft. The other area was, of course, in terms of Par Lab, and so there that’s gone forward, and that really responds to your challenges.

So, now it’s such a great pleasure to — another area, I should say, was also the idea of information and communication technologies for the developing world. And since that time, of course, you’ve kicked off Microsoft Research in Bangalore, with whom we’ve had such fantastic interactions.

So, I’d just like to thank you for bringing all this excitement to us. We had another faculty roundtable this morning, which I think generated another ton of questions, a lot of which you actually alluded to during your talk, and I think all of us here are now sort of privy to where we are going.

So, without further ado now, I’d like to throw the floor open for questions.

QUESTION: I was very fascinated with the way you were moving your tablet around, and you have obviously a video link to the projector, and that’s something that we’ve been waiting for a long time. I was just wondering how you did that. (Laughter.)

CRAIG MUNDIE: Actually the way we do a lot of these things is we either link two computers together or we use the RDP, the Remote Desktop Protocol, which is sort of like what we use for remote logon to different systems, to basically get the video out of these things and put them up on the screen. We can do that now with Pocket PCs and cell phones, as well as the tablets.

QUESTION: That’s not over Wi-Fi?

CRAIG MUNDIE: That was not over Wi-Fi. The Wi-Fi connection was basically from this tablet back to a machine that was essentially taking the video and cloning it up onto the screen.

QUESTION: You were talking a lot about usability and accessibility and beyond the sort of click and drag and the Surface stuff. What are you seeing in the future of interaction between humans and computers for people who are say deaf or blind or elderly people who don’t have the fine motor control that we’re graced with?

CRAIG MUNDIE: Yeah, actually at the faculty session this morning I showed a video that I didn’t have time to really include here, but it was called the robot receptionist. We basically built a 3D avatar of a receptionist that we’re going to use in the Microsoft lobbies this year to test what it’s like to have completely natural spoken interaction with a computer to get some things done.

So, I actually think one of the big things is, in fact, speech but not just in the traditional command and control sense or where there’s such a confined domain of discourse that it’s not that much better than one of these interactive response units on your telephone. I think we need to get to this natural model of human interaction with the machine and where many of the sensory cues that people use to communicate with each other are honored and presented by the computer itself.

Now, that doesn’t get to I’ll say the disabled part of the question. We’ve been very, very focused for many years on trying to figure out how to change these things such that people who don’t have sight or don’t hear or who have some other physical disability can find a way to interact with this.

In fact, I gave this talk at NYU two days ago, and one of the people who came up was a PhD student who was completely blind, or maybe had very, very limited eyesight. He wanted to come up on the stage afterward, and he wanted to essentially see this. And he could see it, and he was saying, look, you know, we have to work together to figure out how these kinds of things can be provided to people without sight or with weak sight.

And we’ve done that in the traditional PC with things that will read the screen and convert it to Braille and so on, but we really have to go farther.

You know, one could ultimately ask, are there more real physical connections that we need to make between the computing and the people. People are doing that today in the biomedical sense, and maybe there’s a way to take the people who really have these significant disabilities and create a more direct link. I think all of those are open for discussion.

QUESTION: Word and the Microsoft Office suite and Windows are more or less what Microsoft is known for worldwide. A few days ago, I heard a talk by Ram Shriram of Google, and he was talking a lot about the cloud and maybe where computing is headed five to 10 years from now. What’s Microsoft’s strategy going forward as far as these so-called desktop apps going online and going upwards and outwards?

CRAIG MUNDIE: My view is that we’re sort of at a meta-stable point in evolution of our industry from its last major platform step to what will be its next major platform step. My belief is that this cyclical evolution, which is like a linked set of S-curves, has always been present, and that the cycle time for a major cycle is about 15 to 20 years. The transition from one platform to the next platform is always driven by a killer app or two.

So, Microsoft was the beneficiary of not inventing the spreadsheet or the word processor, but deciding to make a graphical version and couple it to Windows and put that out there, and our primary business franchise and the stereotypical Microsoft, that was what it was.

I think that mostly the things that people compute today, and the model in which they program, were all derived in some sense from using that platform after it was diffused into society on the back of the killer apps.

Along the way, each generation sort of stands on the shoulders of the one before it. So, along comes the Internet, and I believe there were really two killer apps that drove the Internet broadly: e-mail clients and Web browsers. Each of them had enough ubiquitous appeal as a class of applications that that connected world, with server-based computing and client-based consumption, was essentially established, and it allowed us to start to build these huge infrastructures and, in fact, bring forward new business models.

But some people would tell you, hey, software is a service, it should just emanate from the cloud, and you just need dumb terminals. It’s sort of like 3270 reborn.

We thought deeply about that question, because you could say it’s primal to our business, and then looked at what was going to happen in these client devices and the capacities that they would have, and we believe strongly that the future is really a composite platform, which is a family of clouds, both public and private, integrated with a family of client devices, all of which are much more powerful than the ones we have today.

When I talked about this speculative computing, the whole idea is that the computer today is poorly utilized and is really only a proxy for a person in terms of accessing these remote capabilities; that’s just the latest form of time sharing. And the only reason that time sharing at any scale ever worked was that you had a low duty cycle user at the end.

And if you posit that in the future computers will not be low duty cycle, then, in fact, they’ll always be trying to do things. Take the demo I showed this morning with this robot receptionist: one of the reasons I showed it is that the thing is just barely turned on and it’s rendering features, it’s listening, it has machine vision, it’s doing all the semantic analysis, it generates speech, and it does interesting things when it’s idling. In other words, even if the robot doesn’t see anything new in the image, it uses 40 percent of the compute cycles of an eight-core machine, and we haven’t really gotten it going very fast yet.

So, it’s clear that when you want to move to that level of computing or that model of human interface, I contend that even if you thought broadband networking cost zero and had infinite bandwidth, you’d still have the latency problem. And it isn’t going to cost zero, certainly not on a global basis.

So, the idea that I would take all these sensory inputs and multiplex them up the wire to the cloud so that it could monitor it all the time to make the decision to have something come back and then it would tell me, okay, here comes the answer, somebody walked in, that’s just not going to happen. I mean, it’s not economically practical because the cloud is built out of the same chips as the clients. It’s only the ability to use them either in some integrating way or in some multiplexed way that brings a scale economy to it.

So, our strategy, if you will, and our belief is that what we really are moving toward is this composite platform, which is the cloud plus the clients, and what we need is a new way to program that as a unit.

So, the whole question of how you build distributed applications, the highly distributed concurrent applications at every scale, whether they operate locally or essentially globally, is a problem we have no matter where you think the computation goes. And so we want it to be quite flexible.

In the next week or two we have our big developer conference, and Ray Ozzie, who’s my counterpart inheriting the other half of Bill’s job, will give a lot of the keynote talk there. It will be about the way we think about making this programmable cloud component an integrated part of the platform.

And our genetics actually make us more inclined to make that available as a platform people can program than not, and so we’ll use that infrastructure to provision our own services and then we’ll make it available to other people as part of this composite platform in the future.

QUESTION: That was pretty much the question I was going to ask.

CRAIG MUNDIE: Okay, next question. (Laughter.)

QUESTION: But do you think from a — it seems that what we observe from software as a service is the incredibly rapid rate of churn, of change of the software, because —

CRAIG MUNDIE: Churn was a good word.

QUESTION: What?

CRAIG MUNDIE: Churn was a good word.

QUESTION: Churn was a good word.

So, I could imagine that maybe from a business perspective it might accelerate the rate of innovation on the server side in software as a service. Do you see it that way?

CRAIG MUNDIE: No, I think — well, at least the way we think about it is the idea that you install software as sort of a static thing or it comes to you on a little shiny disk and you put it in, that’s going away. So, it doesn’t matter which piece of software runs at which end of the wire. The whole model of sort of discovery, click to run will be applied.

So, many of the things that have historically been browser plug-ins or other things are just the tip of the iceberg in terms of a more generalized model that says, hey, whatever I want, wherever I get it, and however I discover it, the ability to just use it should be innate. So, at Microsoft we’re moving everything we do in that direction.

So, in that environment I think we welcome, or certainly don’t fear in any way, the idea that there can be a lot more going on, and a lot more stuff that can be put there. I mean, the real value of our Windows business is only in part because you can also run Word, Excel, and PowerPoint on it. You can do that on a Mac, too. What makes Windows a valuable franchise, in fact, is that in the last generation, if you will, there were millions of people who wrote programs; I mean, hundreds of millions of programs exist.

So, I don’t think that diversity is bad in any way, and if we can, in fact, get the diversity to happen faster or more people can participate, but still have some unifying way of getting it to work — my biggest concern right now is that we don’t have — we as an industry, we, academia included, we don’t have a way to write high scale, reliable software, period. So, the complexity problem is going to kill us. It already is killing us. And all of these things that people want are now moving into the class where they think that they are infrastructural for the society.

The idea that maybe it works today doesn’t work tomorrow or I downloaded something and my machine doesn’t work or I downloaded something and now I’ve got a virus, all those things had better go away.

So, on one hand we want to make the palette something people can paint any picture they want on, but we’d better come up with some way of allowing them to do that that actually is safer and has some coherence to it. The value of the Windows environment and its ecosystem in large part is that there is some uniformity to it. So, I think absent that uniformity, you just get chaos, and that turns out to be a limiter in terms of the rate at which broadly it can be taken up.

So, there’s always this dynamic tension between the free for all mode and some coherence, and I think we welcome the breadth of this, but we worry a lot about do we or anybody have a good way to allow even more people to party on these platforms and reason at all about what you’ve got at any given moment.

QUESTION: Hi. I think it was great that you shared those cool new databases that give you access to the information, but I think the problem we are having as graduate students is that it’s still difficult to find quality papers and books that are worth reading. Even a Google search doesn’t give you that many great results yet.

So, can you talk about Microsoft’s approach to help solve those problems?

CRAIG MUNDIE: Well, in this demo I was trying to get at that question. The idea that you just put a few words into a text box and it says, oh, here’s 100,000 hits and the rest is an exercise left to the student, is not that helpful, because the amount of information is getting too broad.

So, I think it will take many of these things: synthetic reputation systems, semantic analysis, the ability for the computer to analyze those things. One thing we’ve done already, which you might just try experimentally: we think domain-specific search systems are one of the things that will happen, and we built one of these for our health product line. So, if you go to MSN Health and Fitness or HealthVault and say I want to do a search related to health and wellness, what we’ve done is we’ve essentially built a front end onto the back-end search systems that not only includes the traditional Internet searches but includes a lot of information that is not actually published on the Internet, that’s available by contract or other means.

But because it knows that the context is health and wellness, when you put in a query, it takes the links, it analyzes everything that comes back, it puts a taxonomy around it, and then it presents it to you. If you type in sore throat, it says, well, here’s where you can learn about the definition of sore throats and how many types there are, here’s the doctors in your area that can treat sore throats, and here’s the kind of medications that are effective against sore throats.

So, you put in the same query that you did but in essence even with that basic query we can analyze what came back because we have some context.

One of our dreams, and again I showed a dream about how we might do this in education, in HealthVault the idea is that you end up with a free repository where people can put all of their personal health records and wellness information. In essence that’s a private personal context against which you could essentially augment any query in that space. So, like if I had my HealthVault record and it looked all the way back, it would know, yeah, I get sore throats but they’re almost always strep infections.

So, whether I know that or can remember it, if I get one tonight and I type in sore throat, we believe we should be able to analyze my health record and say, look, let’s take some of that and add it to the query, and that will further refine what’s likely to come back, or at least change the odds of what should be presented, and then we’ll analyze what comes back and present it in that context.
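As a rough illustration of that loop, here is a small sketch of augmenting a query with a personal record and then grouping what comes back into a simple taxonomy. The record structure, the augmentation rule, and the buckets are all assumptions made up for this example; they are not the actual HealthVault or MSN Health and Fitness behavior.

```python
# Hypothetical sketch: augment a health query with context from a personal
# record, then bucket the raw results into a simple taxonomy before showing
# them. The record fields, the rule, and the buckets are invented for
# illustration only.

personal_record = {
    "history": ["strep throat (2005)", "strep throat (2007)"],
    "allergies": ["penicillin"],
}


def augment_query(user_query, record):
    """Add terms implied by the user's history so the back end can re-rank results."""
    extra = []
    if "sore throat" in user_query and any("strep" in h for h in record["history"]):
        extra.append("strep infection")
    for allergy in record["allergies"]:
        extra.append(f"-{allergy}")          # steer away from contraindicated treatments
    return " ".join([user_query] + extra)


def bucket_results(results):
    """Group raw results into a simple taxonomy, as the health search example describes."""
    buckets = {"definitions": [], "doctors": [], "medications": []}
    for r in results:
        lower = r.lower()
        if "clinic" in lower or "dr." in lower:
            buckets["doctors"].append(r)
        elif "antibiotic" in lower or "mg" in lower:
            buckets["medications"].append(r)
        else:
            buckets["definitions"].append(r)
    return buckets


print(augment_query("sore throat", personal_record))
# -> sore throat strep infection -penicillin
print(bucket_results(["Sore throat: causes and types",
                      "Dr. Lee, ENT clinic, Berkeley",
                      "Amoxicillin 500 mg (antibiotic)"]))
```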

So, I think you’re going to see this richer and richer feedback loop where the analysis isn’t just left as an exercise to you, nor is the query; the query itself will be augmented behind your back. That’s what this education demo was trying to show: if I refine it to my study group, and you look at my pattern and you know what subject I’m studying, can’t you go out and find me some links and qualify them as highly relevant or less relevant, and then let me pick them on that basis? I think that in many areas that is what will happen.

QUESTION: Craig, what do you think is at the intersection between Bill Gates’s notion of creative capitalism, Microsoft’s commitment to education, and the fact that you’re a heavy participant in the game industry? So, specifically what would it take to get Microsoft or anyone else in that industry to work on developing a game that was as compelling as Halo 3 or World of Warcraft, but was aimed at say middle school math and science, given that the current market for educational software from a commercial point of view is not particularly attractive?

CRAIG MUNDIE: So, well, as you said, the first two parts are really contextual. I mean, Bill and the foundation are clearly focused on the broad issues of health and education.

The way we approach it now, in a very simple sense, is we say, look, in every country just think that there are three countries in a country. There is a rich country, there is a middle class country, and there is sort of a welfare class country. The boundary between the bottom two we define this way: above the line you have some disposable income, it’s non-zero; below the line you’re just at subsistence. So, you can go across the world and look at it, and all you’re arguing about in any country is what the relative sizes of those components are.

So, we know that the technical approach in the welfare component will require philanthropy, government, NGOs and others to try to do it. Our job is to figure out do we have a solution for them that could be deployed at scale.

Today, those NGOs and governments take whatever money they have and they really treat it as welfare. This kind of goes back to the old adage, you know, if you want somebody to be fed, you’re better off to teach them how to fish than to give them a fish.

Right now I don’t think anybody has a way to, quote, teach the people to fish for themselves, so we’re still sort of in the mode of giving them the fish.

What I hope will happen with the leadership of the Gates Foundation and hopefully some of the products that we’re doing, and our ability to distribute them broadly, is essentially give people the opportunity to be taught how to fish, if you will, and that could be true in health care, do-it-yourself health care, I think it could be true in components of education.

Now, as to the gaming question, six years ago I started a group at Microsoft focused on the question of games for learning, and actually we made some interesting progress. I was even focused on starting younger, at age four, so before kids were even in a traditional concept of school, could we get them and educate them. That’s why this Benjamin thing is so interesting to me: this kid is six years old. Hey, I thought I was good. They thought I was a child prodigy because I could take my father’s fan apart at age six. This guy is talking about the different gases in the sky and the galaxy, and he knows about the Internet and Wikipedia. I mean, it’s just like a completely different game. Why? He has access.

So, the work we did proved that at least on an ad hoc basis we could show that you could use gaming concepts to be a motivator for kids to learn.

It was hard at that time, a couple of years ago, to actually figure out how to make a business out of that inside Microsoft, and I ended up spinning the group out of Microsoft to allow them to continue. So, we took some equity in their company, and there’s a little company called (Sabi ?) in Seattle now that’s actually continuing that work as a business.

But if you read the papers this week, when I was at NYU on Tuesday we actually announced a new alliance called the Games for Learning Institute. The idea of that, with us providing half the funding and the schools pitching in the other half, is to do a three-year formal academic analysis of the various game techniques, if you will, and their applicability to exactly what you said, middle school science and engineering teaching.

One of the big concerns that we get, and one I think we have to dispel right now (we were talking about this in one of the student roundtables before this), is that we’re just seeing the latest generational gap between what the parents of today grew up with and what their kids are actually going to be able to have.

One place that manifests itself is how parents feel about computer games or videogames. Mostly they say, hey, you know, you’re playing too much of that. Now, there are some people who say, no, no, you can’t play enough of it, because it’s actually wiring their brain up differently, they’re going to be tremendous at discovery and problem solving, but it’s so foreign to the way parents think about it that they actually don’t go that way. And, of course, if you go to the teachers and say, oh wow, bring on the Xbox and we’ll teach math and science, like forget that. (Laughter.)

So, part of the Games for Learning Institute is to basically put some academic rigor around proving that these things can be beneficial, and certainly we would welcome other people to participate in that.

I think that there is great promise because of the discovery capabilities, the instant gratification, and the closed-loop feedback kinds of things. I’m a big proponent of it, and I think we need not only the rigorous analysis of whether the technique is effective, which is as much for the parents and the policymakers as it is for the teachers or the developers, but also to figure out how to take that forward.

If we can do things like this robot receptionist, but make it more of a game-like environment where you can interact with that person, get specialized education and help, make it part of a gaming environment where you can suspend the game and go ask your robot teacher, you know, hey, what about this, I didn’t quite understand that, so that it becomes an expert system that knows how to answer questions in the context of what you might have encountered in a game, then I get very enthused about that kind of thing. And I think there may be a way as we’re able to expand the platform beyond the traditional game console business model. I mean, our problem with Xbox, for example, is that the consoles are virtually a break-even proposition, and if people aren’t willing to pay 40 or 50 bucks a game and buy five or six of them, it’s a non-business, at least in the historical sense.

PC gaming, of course, was a completely different thing, and if we could get the equivalent of PCs in the hands of kids, then you could do that. But if you say, hey, look, all the gaming is on the cell phone, I don’t think you have the immersive capability or sort of the screen real estate to get the kind of effects you want over time. At least I don’t think so. That’s why I showed this little flexi tablet thing, and the whole idea of Phone+ where you’re using the traditional notion of a bigger screen or a television as a place where you can take even your entry level computing capability, which will be pretty big in the future, and get it into that environment.

So, somewhere in there is where I hope the Microsoft work, the foundation work, and the traditional welfare component can come together, but somebody has got to bring them, I’ll say, two things: the game and the proof. Because without the proof, the parents and the policy people are in the way; and if there’s no game, even if we get them out of the way, somebody actually has to come up with the game. So, for all of us, it’s just another great thing to work on together.

Okay, thanks.

MODERATOR: Thank you very much. (Applause.)

CRAIG MUNDIE: So, I guess I have two announcements, and maybe the dean has one more. On those little tables next to your chairs, on two of them there’s a little yellow card, and if it has the word Zune on it, you get a free Zune, and you can pick it up outside, right over here from this lady. So, just look at your desk before you leave.

And the second is, for those of you that haven’t had a chance, we have put some of the Surface applications up here, and you’re welcome to come up on the stage after we’re done and play around with this if you have any interest.

END
