Craig Mundie: New Software Industry Conference

Transcript of Remarks by Craig Mundie, Chief Research and Strategy Officer, Microsoft Corporation
New Software Industry Conference
Mountain View, California
April 30, 2007

ANNOUNCER: Now I want to introduce Craig Mundie from Microsoft. Craig is a 14-year veteran of Microsoft, currently serving as Chief Research and Strategy Officer. He's one of the two senior Microsoft executives chosen to take over the company's technical leadership when Bill Gates retires from his day-to-day work in July of '08.

Craig's primary focus is on charting Microsoft's three- to ten-year horizon, the long-cycle research, innovation, and business incubation that will impact the future of technology.

He frequently meets with global leaders in government, industry, and academia to help shape how computing can positively impact fields such as healthcare, scientific research, and education.

Craig’s early career and contributions to Microsoft include driving the development of technologies that are now widely used in mobile phones, Pocket PCs, cars, and interactive television. He also spearheaded Microsoft’s Trustworthy Computing initiative, which transformed work on security and privacy across the company.

Before joining Microsoft, Craig co-founded Alliant Computer, a pioneer in the field of massively parallel computers.

So, Craig, along with Microsoft, is our host today, so please give him a warm welcome. (Applause.)

CRAIG MUNDIE: Thank you, Jim.

Well, while you eat I'm going to try to talk a little bit about software with a much longer-term horizon. A lot of the discussion this morning was about software as a service — are services going to replace the need for software? Somebody just asked me, you know, one of the questions people are asking as educators: "What is software, should I teach it anymore, are we still going to need it?"

And my view is that no matter whether you believe in delivering some of these things as a service, as a component of an overall product, or as software more or less as we've known it in the past, underneath it all there is software, and arguably there is going to be a need for more of it.

One of the things I specifically wanted to talk about today is sort of the pendulum swing that I think has happened ever since the birth of the mainframe, where a particular model of computing emerges and matures, and yet the underlying technologies continue to evolve, in particular communication and computation. The model tends to oscillate in terms of where people think computing should be done: should it be done personally or locally, or should it be done in the center? And, of course, timesharing started in the center, and had relatively dumb terminals, and as things evolved we ultimately moved a lot more to the edge.

Today, there’s a huge amount of computing capacity that’s already accrued at the edge, and yet the model of the applications that we’ve had for 20 years or more is essentially quite mature. And so I think there’s a very natural trend toward saying, well, why don’t we just recentralize everything, it will be an easier way to maintain it, it will be a more cost-effective way, and allow us to introduce new monetization.

In fact, I do think that there is a permanent shift in terms of the communication capabilities reaching a level where, in fact, whether it's on a phone or in your car or on your television or your personal computer, it is possible to complement those things with services that exist in the cloud. And whether they're provisioned at the platform for many people to use or, in fact, they're a way in which an individual enterprise may present them to you, clearly there will be services in the future. And so the real question is how does this pendulum-like swing play out in the next few years?

While arguably for the last decade there's been this tremendous focus on services and the cloud-based platform, and you get a conference like this where a huge amount of the dialogue is about how services are changing the software business, there is a quiet revolution that continues to go on underneath the basic model of computing itself.

In fact, if you look at the client computer, almost any of them, whether it's a desktop personal computer, the laptop, or your phone, one would argue that it's actually an underutilized computer, and, in fact, it's this underutilization that results in two things: People still wish it did more than it appears to do for them, and at the same time the low average usage makes cloud computing that much more appealing — the idea being, well, look, it doesn't take that much to compute these things by today's standards, so why don't we just put it up there, like in the old timesharing business. And, in fact, the basic model of timesharing depends on a low duty cycle of computation, a slow rate of consumption on the part of the consumer at the end of that wire.

And so one has to question, you know, is this the way — is this really going to be the way it is for a long period of time, or is something going to change? And I argue that, in fact, there is a new world coming, it will arrive in nominally five years, plus or minus a couple, and this model is going to change in a more profound way what the mass world of software development is like than perhaps anything that we’ve done in the last 20 or 30 years.

You know, on this chart, going up and to the right is sort of the model that all the world's programmers have been trained to believe, and, in fact, this has been true for a long time: the computer just gets faster, and you write code and it just gets faster because the computers get faster, and all of the capacities increase over time.

And so we all grew up — in fact, somebody wrote me a piece of e-mail last night where they were reminding me what the speed of the Cray was, and it had a 12.5 nanosecond clock, and that's roughly 80 megahertz. And so today we're up to 3 gigahertz nominally as the clock rate of a microprocessor.

But a few years ago, the computer chip manufacturers started to have a problem, which is that the only reason we've been able to get the clock rate to go up that fast without the thing melting is that we were able to lower the voltage, and those things have a complementary nature. We can't lower the voltage anymore, and as a result the microprocessor industry has sort of hit a wall where Moore's Law, which was really about transistor density rather than clock rate, continues unabated, so we'll have more and more transistors. The problem is the system design on the chip can't be clocked at a higher and higher rate.

And so everybody has for many years assumed, well, okay, I get it, I get one core, and it's just going to get faster and faster. But the reality is there won't be a free lunch for the software people anymore; the industry is going to take a big right turn. In fact, arguably it did take a right turn a few years ago, when we started to see essentially constant clock rates and then the addition of more cores.

The world of microprocessors is going to move in a direction that says if you want the thing to go faster, it has to be parallel. And you can get that parallelism in any of a variety of ways. If you're on a server it's pretty easy to say, hey, it's all about load. But if you're talking about devices that people have in their hands, it's going to require that everybody begin to take on the class of problems that historically were reserved for the people in the technical computing space.

If you wanted to solve the big computing problems for many years you had to figure out how to make it go in parallel, because you couldn’t get enough scale up out of one machine to solve that problem.

But for the rest of the world you didn't have to solve this problem, and so as businesses and as academia, we have not solved the problem; we just let it ride.
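To make concrete the kind of problem that is now landing on everyone, here is a minimal sketch, in Python, of the sort of data-parallel decomposition the technical computing people have long practiced: the same work fanned out across however many cores the machine has. Every name here is illustrative, not any particular product's API.

```python
# A minimal sketch of data-parallel decomposition: fan a compute-bound
# task out across the machine's cores and combine the results.
# Illustrative only; the names here are hypothetical, not a product API.
from concurrent.futures import ProcessPoolExecutor
import os

def heavy_kernel(chunk):
    """Stand-in for a compute-bound task, e.g., one slice of a simulation."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=None):
    workers = workers or os.cpu_count() or 1        # one worker per core
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy_kernel, chunks))  # fan out, then reduce

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```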

And this is really a profound issue now for the industry, because these chips are going to emerge. You can see the dual-core and the four-core chips are already out there today. It will obviously become eight-core at some point in the not too distant future. And you can see a trajectory out over a 10-year horizon where literally there will be hundreds of cores of the kind that we know today as a computer on an individual chip.

In essence it's sort of like "Honey, I shrank the datacenter." It's now on a die and it's in whatever computer I happen to use. And yet the applications that we have, when they're not in the datacenter, are going to require a new model of programming, and without that new model, a lot of the benefit that we can see here is really not going to reach us.

And so it gives us an interesting challenge. And if you're in academia, I would tell you that most of the computer science departments in the world have basically been training people how to do more applications of the old model of computing. But if you look at the range of research that has been done on how to effectively develop these systems, with their distributed nature, asynchronous construction, and high parallelism, there's very little that actually has been done of late, and yet one could argue that the world is going to need a lot more people who are able to build applications that will avail themselves of this power, and without that it will all go to waste.

Now, I happen to be quite confident that we will solve these challenges, and that therefore one has to assume that in the five- to ten-year horizon the average client computer will, let's just say, be 50 to 100 times more powerful than the one we know today.

And so that poses an even bigger question that says, well, what if I told you that all that power just went to make Word, Excel, and PowerPoint better, faster? Well, it doesn’t really need to be much faster than it is today.

So, clearly there has to be some new concept of an application, and there has to be some balancing of the roles of the computational assets that are going to exist near the ultimate consumer: how those things operate, and what their role and relationship is with the services that are provisioned in the cloud.

We also face another challenge, with or without this change to a requirement for parallel processing, and I call it the complexity challenge. Here are a couple of great figures. These actually don't include Microsoft efforts; they were done by third parties. But, in fact, one could argue that our problem is even worse, if you will, and we suffer some of the same challenges as the graphs imply.

The graph on the left, as a function of size of code from 100 lines to 10 million lines, basically shows what the outcomes are for many of these projects. And as you get to bigger and bigger projects, you get higher and higher failure rates. In fact, at 10 million lines of code, only 13 percent of the projects complete on time, and 64 percent of the projects are never completed at all.

You know, right now Microsoft products between Windows and Office, depending on how you count, are arguably between 50 and 100 million lines of code, and we integrate those into the system.

And it turns out we throw a huge amount of technology and human capital at the problem of dealing with these same kind of complexity challenges. And, in fact, we’ve had our own struggles at times to be able to predict exactly when things will be completed. And we all suffer the challenge of complex systems and making them correct.

In fact, on the right-hand side, if you look at where people spend their time, there are four things: documentation, coding, support management, and finding and removing defects. And again, even at systems of a million lines of code that were studied, by the time you get out to that size, only about 18 percent of the total time and effort is spent on actually writing the code, and more than 35 percent on debugging the system. And these really are not linear functions; as the complexity scales up, it's only going to get worse and worse.

In essence, somebody this morning was talking about whether software is an art or a science; I contend the big issue isn't even whether it's a science, it's whether it's an engineering discipline. And I would tell you that in most cases in engineering, when things have been around for a while, and people depend on them and have to build bigger and bigger versions, ultimately you have to move to a system of composable sub-modules. And yet software as we know it today largely has no formal composition.

And so the reason that this complexity barrier is a high one is that we haven't created the tools, or the trained people, or, in fact, the organizational discipline for people broadly to think about how to compose very, very complex, large-scale software systems. And so we essentially try to test reliability and correctness into systems that, in fact, we can't reason much about in a formal sense, and where the tools are increasingly inadequate to deal with the very large-scale, asynchronous, distributed nature of these things.

So, people sit and talk about how we went from Web 1.0, which was largely a read-only Web, as someone said, to Web 2.0, where the Web becomes more programmable, to one where you would even posit that there's a lot more intelligence in this environment. Coupled up to these large-scale systems at the edge, we're really talking about building systems of a complexity level that perhaps no one has ever built before, and certainly we don't have a model to know whether they will do what we expect them to do in any reliable way.

So, tomorrow we have another question, which is: does it just stay this way? Do we take what was a largely idle client, add all this capability to it, and just find it even more idle? And I contend that we won't do that, that there is going to be a need to harness all this power. In at least all the years I've been in this business, and if you look back, the one thing programmers ultimately don't do is let all the computing assets lie fallow. Somebody will figure this out, somebody will decide that there is something to do with all of these cycles, and they will figure out how to overcome both the programming model associated with the concurrency challenge, and the complexity model associated with building these complex systems and more distributed architectures.

So, going forward I think we need to deal more effectively with concurrency and complexity, and, in fact, I contend that some type of system for verifiable composability will have to underlie all software that you write in the future. And I don't care whether that software is a large-scale Web Service that is infrastructural in nature, whether it's a domain-specific set of software that captures the essence of a business or knowledge or capability, or whether it's the kind of software that we historically have put on the client, whether those clients are desktops, laptops, phones, cars, televisions, or game consoles, each of these things becoming a large, complex system in its own right.

What's interesting to me is if you look at these adjectives, which I contend have to describe these systems of the future, they're almost exactly the inverse of the way that we design and build software systems today. Today, most software is designed monolithically as a tightly coupled construction; most of it is synchronous in nature; most of it is not designed to be highly concurrent. It may get concurrency, particularly for those that have essentially moved to a protocol- or message-based construction. They're largely not composable. Again, some of the protocol-based systems are moving in a direction where you can do that, but the way in which a lot of the synchronization is done actually defeats the ability to build large-scale, composable architectures. They're mostly centralized in nature, and they certainly aren't resilient. The brittleness of many of the large-scale software systems that we have, despite huge efforts to make them otherwise, tends not to be what we want it to be yet.

And so all of these things represent a change, and if you were trying to teach people this, or do research, or you were trying to invest in companies that were ultimately going to own the platform of the future, one would have to think that you could invest in things that were going to move to solve some of these problems.

So, in fact, the model that I have in my mind, and no one really knows yet what the applications really are, but I made a list of what I think the attributes might be, is what you could call fully productive computing: fully productive in the sense that you aren't going to waste most of the cycles that exist at the edge in some idle loop, just waiting to be responsive to a keystroke; you're going to take this capability and start to add capabilities that qualitatively change people's experiences in dealing with computer systems.

So, first I think we want to make them more reliable, finding ways to use all this incredible computational capability to improve the reliability of the underlying system, to construct them in a way where they don't fall down. You know, today people build skyscrapers and other things, and largely, unless some bad thing happens, like a plane flying into a building or a truck burning up a bridge this morning, we have fairly good characterizations and methods by which we construct these large, complex physical systems. We're going to have to be as reliable in our ability to design, construct, and operate these very, very large software systems as we have been in other engineering disciplines. That in itself would be a step forward.

I think they're going to have to become more predictable. In essence, this is becoming an underlying capability, a mission-critical system for living. There's just no aspect of science, technology, business operations, or even entertainment and communications which doesn't have a huge amount of software embedded in it, in anything from microcode on up, and these systems are going to have to be more and more predictable as people become more dependent on them.

Another thing that is clearly true about the systems today is that, as comfortable as many people are using the personal computer or Web browsers as a client, more people have already taken up cell phones than have taken up personal computers, and even so there are about 4 billion people on the planet who have never used either one of those devices.

And one of the things that would make it better for people, whether you're among the people who haven't used it yet or the people who already have, is if it were just more humanistic, if you could talk to the machine and have it talk back, if it tended to understand more about what you wanted as opposed to just being able to interact with some predetermined set of capabilities [inaudible]. And, of course, science fiction writers usually get it right, so if you think about Star Trek and what it's always like in the movies, they just talk to the computer and say what they want, and the computer does what they ask. And we ultimately will move in that direction, because that's what people would expect.

These systems should be highly performant. They should just be incredibly responsive. You shouldn't have the kind of frustration that we frequently have today, where something is not as it should be, or not where it should be, and the system doesn't perform as expected.

Perhaps one of the most important things I think will happen, and again somebody mentioned this earlier today, is what I call context awareness. Today, the computer is largely a repository for data, but there's been no model of the knowledge that the user has, of the history of interaction with the user, and, in fact, of all the things that are going on around you. And one could argue that if you had a way of modeling this (that's the model-based concept), and you had a set of these models that reflect the context in which the system operates, you'd have a new class of services that a platform could provide, and that people writing applications could benefit from in terms of making applications do things that would be more interesting.

As such, I think the systems will become more and more personalized, that they will become intimately aware of the things that are important to you, the people that are important to you, the tasks that are important to you, and that increasingly we'll find a way to surface this understanding in a way that allows people to write new applications and thereby make the system generally more useful.

I think the systems will become a lot more adaptive, that as you go on and interact with it, whether it’s for communication or line of business applications or entertainment, the system will continue to adjust its behavior and what it presents to you as a function of what it observes and what your interests are.

I think the presentation will become more immersive, that one of the things that will also change in this time period is the whole concept of the display and how we present information. Part of this will be in a humanistic sense, and part of it will be just that there will be more and more surfaces that become auxiliary displays, and so you'll have this complete array of things from your wristwatch to stadium-sized displays, as well as other forms of human interaction. And through this rich visualization environment there will be a tighter coupling between human intuition and what these things are like.

One of the ways I think that this will be done, and you could say this is really speculative, is that the machines will, in fact, do a lot more speculative execution. At the instruction set level we’ve been doing this for years to just try to pipeline the execution of individual instructions in order to get more performance out of an individual machine.

If you think about even how people write programs to play chess, by and large they're basically speculatively trying to compute out into the future and weigh all the possible outcomes, and when the time is up and the program has to make a move, it takes its best guess and does that.
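As a sketch of that anytime, speculate-until-the-clock-runs-out pattern, here is a toy Python loop; the game logic is stubbed out and every name is hypothetical.

```python
# A toy "anytime" speculative search: compute deeper and deeper until the
# clock runs out, then play the best move found so far. The game logic is
# stubbed out; the shape of the loop is the point.
import random
import time

def legal_moves(position):
    return ["advance", "retreat", "castle"]       # stub: candidate moves

def score(position, move, depth):
    # stub: pretend a deeper search gives a sharper estimate of the move
    return random.random() + depth * 0.01

def best_move_anytime(position, budget_seconds=0.05):
    deadline = time.monotonic() + budget_seconds
    best, depth = legal_moves(position)[0], 1     # always have a move ready
    while time.monotonic() < deadline:
        best = max(legal_moves(position), key=lambda m: score(position, m, depth))
        depth += 1                                # speculate one level deeper
    return best

print(best_move_anytime("opening position"))
```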

We've even started doing this based on models we've developed within the new Vista version of Windows. One of the things that people recognized is that when you're working and you click to start the next application, one of the most frustrating things is waiting for it to start. And so we said, what can we do about that? As machines get more powerful, you've got a lot of memory lying around, and they're idle a lot of the time, so what if we knew what the next application was that you were likely to run, and we just sort of ran it ahead of time?

And obviously you can't do that for all applications, so we actually built a Bayesian system that basically models the behavior of people, built it into the system, started it with that basic model, and Vista itself now guesses at the application you're most likely to run, and preloads a lot of it.

In fact, I saw the first statistics now that Vista is sort of out there in the wild and at some scale: 90 percent of the time it guesses what the most likely applications are, and 90 percent of the time the next thing that people run is in that list.
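A toy version of such a predictor might look like the following Python sketch. Vista's real mechanism is far more sophisticated; this conditional-frequency model, with its hypothetical names, is only meant to make the idea concrete.

```python
# A toy next-application predictor: estimate P(next app | current app) from
# observed launch sequences and preload the top candidates. Vista's real
# mechanism is far richer; this just makes the idea concrete.
from collections import Counter, defaultdict

class NextAppModel:
    def __init__(self):
        self.counts = defaultdict(Counter)   # current app -> counts of next app
        self.last = None

    def observe(self, app):
        """Record one application launch in the user's session."""
        if self.last is not None:
            self.counts[self.last][app] += 1
        self.last = app

    def predict(self, current, k=3):
        """The k apps most often launched right after `current`."""
        return [app for app, _ in self.counts[current].most_common(k)]

model = NextAppModel()
for app in ["mail", "browser", "excel", "mail", "browser", "word"]:
    model.observe(app)
print(model.predict("mail"))   # ['browser'] -- a candidate worth preloading
```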

And so as the resources of the machine get bigger and bigger, our ability to use those things to essentially do things in anticipation of what you might like may, in fact, benefit you in ways as simple as having things appear to start more quickly, or, in fact, maybe actually doing a lot more comprehensive speculative execution of things that might be useful. So maybe I come in, in the morning, and the thing says, well, I know it’s Thursday, and every Thursday for the last year I noticed that you always had this meeting, and before you went to this meeting I gathered up all the kinds of things that are related to the people, and I sifted them and sorted them, and here are some things you might find interesting.

And in a way it lets the computer or the software become more and more like a great personal assistant, someone who you’ve worked with a long time and is able to add, at least for me, a tremendous amount of value in preparing for a meeting or a trip or whatever, because they can speculate about what I’m most likely to want.

And as we live in a world where there are these incredibly deep webs of information and relationships, and essentially the ability to remember virtually everything, the question is: if you can see that much and remember that much, and you could find a way to model it and present it to people, much as we've historically presented the lower-level functions of the operating system, would people be able to create a conceptually new, qualitatively better model of application development?

I would argue that if you can do any of these things, you’re going to have to do some of them locally, one, just to get the computational asset, and two, to be able to maintain continuously the context in which it makes these things relevant.

So, that's not to say that there isn't then a role for services as a point of integration, much as we see today. And, in fact, I was very happy with some of the morning presentations, because they kind of concluded that no matter what, you're likely to see a model that is software plus service, that you're not likely to see a world that's just software as we knew it in the past.

But reciprocally, and certainly we've been thinking this for some time at Microsoft, we don't see that the pendulum is going to swing and get pegged over there to say, hey, it's all just a service, that there's no value in local computation or in software that may, in fact, run locally.

In fact, you know, as we listened to the discussions this morning, it's interesting to point out that service is a bit of an overloaded term. I think somebody called it at one point an accounting error, and we aren't quite sure what we want to call a service. Is it people doing outsourced work for other people? Is it running an application program in the cloud with just a presentation model at the edge? And I think it's going to be very important to tease these things apart.

And so at least in this context when I talk about service, I’m not talking about the rendering of human effort for a fee, I’m talking about software provided as a service through the network as opposed to outsourced program development or anything like that.

And the one thing I believe in quite strongly is that we will see a bimodal distribution of where the computation and storage occur: you'll see part of this exist in the client, and part of this exist in the cloud, and there will be a class of things that people will decide to keep local in the absence of perfect connectivity. The appetite for consuming computing, I think, will grow globally at a much faster rate than essentially perfect connectivity will. And even if you look at where we have connectivity in the emerging economies, the cost of that connectivity is still very high as a percentage of both GDP and personal income compared to the rich world, and it isn't a level of connectivity that would allow these types of rich applications to be done.

And if I’m right that these new attributes allow you to utilize all of the computation that exists at the edge, I contend that it’s provably impossible to say that the whole thing could be run as a service for everybody, because the services in the cloud are built off the same microprocessors as the ones at the edge, it’s the same chip. And the only reason that cloud services are attractive is if you assume that it’s a low-duty cycle utilization.

So, in fact, as this market grows up in terms of utilization at the edge, it becomes basically intractable, in either cost or scale-out capability, to build that and say, I'll just provision it for everybody.

And so that's why I'm quite comfortable in predicting that we will solve these problems, that the answers to complexity will largely come through composition, and that as such we will be able to build software systems that are incredibly sophisticated, that, in fact, take a leap beyond what we have known as support for information work or playing back your media or setting up your phone call, and that these things will move to a much, much higher level of utility.

One of the things that we've experimented with, and I'll just offer it to you as a place that's sort of somewhere in the middle here: last December we released a new product at Microsoft that was a robotics SDK. And you could say, why get into that? Well, one, we look at robotics as a global activity, and we think of it, structurally and in terms of demand, as being about where the PC was in the early 1980s. There were a few of them. There were, in fact, lots of different manufacturers, different ways to think about building and operating programs, and there needed to be consolidation.

Now, this is an interesting potential business if you have a very long-term view, but relative to this question of how you build composable, complex systems that are distributed in nature, and also have potential services that they relate to, robotics is a fascinating example. And what's interesting about this kit, if you look at it, is that it starts to introduce new models of how you deal with all of the historical problems of writing software systems. At the top it has a visual programming model, and so people start out by describing the interactions of the subsystems of the robot and their interactions with external sources, much like people would have done with wiring diagrams in the past. And then you can take those things and compile them down; the runtime is essentially a fully composable architecture, there's no traditional notion of locks, and yet it's a highly concurrent, distributed, asynchronous system, and so it uses a completely different model of how you build these systems that interact together.
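As a hedged illustration of that runtime style (the actual SDK's runtime is .NET-based; this Python sketch only shows the shape of coordinating by messages rather than locks, with hypothetical names):

```python
# A sketch of the message-passing style described above. Each component
# owns its own state and reacts only to messages on its inbox; there are
# no shared variables guarded by locks in the programming model.
import queue
import threading

def component(name, inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:                      # conventional shutdown signal
            break
        outbox.put((name, f"handled {msg}"))

motor_in, log_out = queue.Queue(), queue.Queue()
worker = threading.Thread(target=component, args=("motor", motor_in, log_out))
worker.start()

# The "wiring diagram" expressed in code: sensor readings flow to the motor.
for reading in ["obstacle-left", "clear", "obstacle-right"]:
    motor_in.put(reading)
motor_in.put(None)
worker.join()

while not log_out.empty():
    print(log_out.get())
```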

And as we built it, it has given me confidence that these are solvable problems, that we will create models that literally hundreds, thousands, and then millions of people will adopt as a new way to write software. And with it, I think we'll see young people on a global basis, given these tools, rise to the challenge of using these incredible computational assets, and the corresponding platformesque Web Services, to create a new level of capability that the world will come to depend on.

And so when people earlier today talked about whether the platform is evolving, whether the old platform model is really going away, I would argue that the answer is no, the one we knew is not really going away. There have been many people who have argued that this notion of the core operating system and other things is, to quote a famous quote, "a poorly debugged set of device drivers on which a Web platform runs." And that didn't quite prove out, and I don't think it's going to prove out that controlling these machines is going to become substantially easier. And I don't care whether that's at the Web Services end of things or down on these devices that are near the people; there's going to be a requirement for people to write the sophisticated code, by one means or another, to use that computational capability.

And so the traditional concept of a platform I don't think is going away, but I do think what is happening is that there will be new ways of using the network to make it easier to buy, deliver, install, and maintain software. So, a lot of the agony of dealing with the traditional models, derived from floppy disks and then CD-ROMs, of what it meant to install software, I think will actually go away. And that's not intrinsic to either of these models.

And I do think the other thing that will happen is that there will emerge Web Service platforms. The Web Service plumbing has largely been established by Microsoft, IBM, Sun, and many other companies, but there is a class of infrastructural services, not specifically global Web search and the like, but things like identity and presence. It turns out it doesn't really work for everybody to decide to build their own presence. It doesn't really work for everybody to say, well, I'll just have my own identity system, even if you do then have to federate them in some way.

And so I think these are going to emerge, and this, of course, is a large part of what we've been trying to do with the Live platform, which is to say two things at Microsoft. One is that every product we have, whether it's a platform product, a tool product, a server product, or an application product, will have a service component in the future. And therefore when a product takes on a service component, it too becomes a platform.

And so many of the things that programmers of the future will depend on, whether they come from us or other people, just as they depended on the APIs of the local machine in the past, will be, if you will, the Web APIs: the services that they can invoke, because they're eager to do it and there's value in doing it at scale.

And so I think what we're looking at is essentially a persistent hybrid model. It's important to understand the difference between the application and Web Services that are built on top of that environment, and it's very important to distinguish that, and the monetization it allows, from the traditional non-scale-economic service components, which are basically selling people's services for a fee. Both of them are important, both will be business models that persist in the future, but I think the critical thing to think about is how this parallel, bimodal environment emerges over the next five years, what will become the infrastructural services, and what will become the future platform and tools on which people will build a qualitatively different set of applications.

Thank you very much. (Applause.)

I guess we have time for questions.

QUESTION: (Off mike).

CRAIG MUNDIE: So, the question, if you couldn't hear, was: we had uni-processor machines, we've kind of had dual-core, many people have them already, and we've sort of absorbed some of that capability; now you're getting quad-core. When is the software going to arrive, and in particular does a Microsoft application like Office get any benefit from it?

I think we have been moving incrementally, even in things like Office, to anticipate the arrival of these chips. So, for example, in the new version of Office, in Excel, the recalc engine was basically parallelized. So, every independent calculation within a spreadsheet will now actually be scheduled as an independent computation; in the past it wasn't, it was just a single thread of evaluation of all the calculations in the spreadsheet.

That one thing, for people in the financial services sector and others, makes that well-known tool immediately a way of taking advantage of more of this computational capability. In fact, it goes one step further in that you can also buy the high-performance compute server product, and you can point the spreadsheet at that, and it will transparently remote that calculation up to a cluster and bring the results back. And so somebody can sit there and think they're working on a spreadsheet, and then use essentially anywhere from four cores to a hundred cores, based on the complexity of the model. Also to support that, they took all the limits off the size of the spreadsheet, and so it's essentially an unlimited space in which to collect data and [inaudible].
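As an illustration of that scheduling idea, and not of Excel's actual engine, here is a small Python sketch in which any cells whose inputs are already computed are treated as independent and evaluated concurrently; the sheet and its formulas are hypothetical.

```python
# A sketch of the scheduling idea behind a parallelized recalc engine:
# any cells whose inputs are already computed are mutually independent,
# so each "wave" of ready cells can be evaluated concurrently.
# The sheet below is hypothetical; Excel's real engine is far richer.
from concurrent.futures import ThreadPoolExecutor

sheet = {
    "A1": ([], lambda: 2),
    "A2": ([], lambda: 3),
    "B1": (["A1", "A2"], lambda a1, a2: a1 + a2),
    "B2": (["A1"], lambda a1: a1 * 10),
    "C1": (["B1", "B2"], lambda b1, b2: b1 * b2),
}

def recalc(sheet):
    values, pending = {}, dict(sheet)
    with ThreadPoolExecutor() as pool:
        while pending:
            # every cell whose inputs are all known can run in this wave
            ready = [c for c, (deps, _) in pending.items()
                     if all(d in values for d in deps)]
            futures = {c: pool.submit(pending[c][1],
                                      *(values[d] for d in pending[c][0]))
                       for c in ready}
            for cell, future in futures.items():
                values[cell] = future.result()
                del pending[cell]
    return values

print(recalc(sheet))   # {'A1': 2, 'A2': 3, 'B1': 5, 'B2': 20, 'C1': 100}
```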

In other ways, I think the ability to mine information out of that history and interaction will be the next thing that will come. That's not in our product as it's known today, but those are the kinds of things I'm particularly interested in.

Another thing that I think will come and will increasingly use this: in my attribute list, the things I called humanistic and model-based both have the potential to use wildly more computation to be better than they are today. If you wanted to really do perfect dictation into Word, Word is still a long, long way away from that. And yet as the computational model gets better, and we just have more power to throw at it, I don't know whether it's when we get to eight times faster or 20 times faster or 100 times faster, but somewhere in there we're going to break the back of that problem, and the ability to recognize freely spoken speech and to get almost perfect in-context dictation done will probably happen.

Specifically, having a machine that can synthesize things and talk back to you (off mike), a model of speech-based input and output, I think, is a potential user of that.

Take a model like the one I talked about, where it predicts the next program you're most likely to run: when you start to do something that's an even more complicated task than that, these models essentially get better the more you compute. And it's sort of like playing chess. Every time you take the next move, it says, oh, there's another piece of data, let me go back and re-compute that thing again, and each time I do it gets better again.

And so these are the things that I think will move us from sort of capable computing, which happens when you're at your mouse and keyboard, to continuous computing that will actually continuously refine the context in which the machine reacts to what you do.

QUESTION: (Off mike).

CRAIG MUNDIE: The question is, you know, is it possible in the future you'll go to a Web site, Microsoft.com, and it will give you the equivalent of a Start button, and you click it and there are all these applications running.

I think the answer is yes. The company sort of embraces the idea that we want everything from local execution to local infrastructure to Web-based and hosted execution of things. Some of the historical apps are better than others in terms of their hostability.

So, we’ve already launched Office Live, which is basically the Office application suite and all the communications stuff for small businesses as a fully hosted environment. So, in that environment it is sort of click to start on the Web.

It's also where I distinguish strongly between the ideas of how you buy and how you deploy software in the future. If I showed you the green button in the sky and you click it and say, I want to run something, I think this notion of click-to-run will be generally present. But as a function of what you're doing, what asset you're on, whether you own it or don't own it, you may get more or less local execution. Today, you can have Terminal Services, and companies like Citrix, you know, were built around this idea of server-side hosting of traditional apps and just presenting the interface. And so there's really no reason that you can't dial this thing in any way you want as a function of the organization, client, and infrastructure capability you choose to maintain.

What's important is that you have all the different monetization models: transactions, subscriptions, ad-based. What you'd really like is a perfect matrix where you have all the different ways to pay, as a function of your business model interests and those of the consumer or customer you're trying to deal with, and all of the services or applications that people want to have, and the ability to just pick the one you want in the form that you want. And you could say that over time we will have most of our products available in a fairly complete array.

QUESTION: (Off mike).

CRAIG MUNDIE: The question is, you know, when you talk about centralized versus personal computation, how does that affect, or become an advantage relative to, privacy and the increasing concerns that people have about everything about them being known through their actions.

I really think there are two parts to that. Five years ago, when we started this Trustworthy Computing initiative, one part of it was security, the other part was privacy, and we did a couple of things. One, we concluded that what people cared about with respect to privacy was notice and choice. And so it didn't matter, you know, if you were going to put stuff in a centralized environment; as long as people knew what you collected, and they had a choice as to whether or not you were allowed to do anything with it, and there was some perfecting in terms of the believability of that, then it turned out the consumer really didn't care as much about where it was.

If you don't give notice and choice, or you haven't perfected that relationship at the service side with the consumer, then, in fact, there's a tendency to say, then I just want to keep it myself. And so whether it's super sensitive in that regard or not, clearly putting it on the client and saying I've sequestered it there is important.

Ironically, though, if it's that important to you, you want to make sure you don't lose it. And so that means you either have to have a service that redundantly stores it on a bunch of assets, all of which you own, or you actually have to be able to store it in the cloud or the equivalent of a digital safety deposit box. And so it turns out that we've done a lot of research now in what we call privacy-enhancing technology, whether that is anonymizers relative to identity, or the ability to have Byzantine algorithms that essentially break your data up so no one can tell what it really is, but keep enough copies of it around that you never really lose it.
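The simplest possible flavor of that "break it up" idea is XOR secret splitting, sketched below in Python. The research systems he alludes to use threshold schemes that also tolerate losing some shares; in this toy version every share is required, and all names are illustrative.

```python
# A toy illustration of splitting data so no single holder learns anything:
# XOR secret sharing. Each share alone is indistinguishable from random
# noise; XOR-ing all of them together recovers the original. Real systems
# use threshold schemes that also tolerate lost shares.
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int = 3) -> list:
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    return shares + [reduce(xor_bytes, shares, secret)]  # last share closes the XOR

def combine(shares) -> bytes:
    return reduce(xor_bytes, shares)

shares = split(b"sensitive record")
assert combine(shares) == b"sensitive record"
print([s.hex() for s in shares])   # each share looks like random noise
```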

So, I actually think that it’s going to be another area, a great area for research and development of both service and local technology, but I don’t think that the idea that you’ll just keep all your important data locally will be the answer to the privacy problem.

Okay, I guess they’re telling me I’m done. So, thank you very much. (Applause.)

END