Remarks by Craig Mundie, chief research and strategy officer for Microsoft
Microsoft College Tour
University of Washington
Seattle, Wash.
November 5, 2009
CRAIG MUNDIE: I’ve been at Microsoft 17 years, and throughout a lot of that time I’ve had the benefit of being able to visit many universities around the world. I’ve always found it extremely valuable to meet with the faculty, and in particular the students, and understand what’s on their minds, you know, what the hot issues of the day are.
I think in every generation, students have a blend of altruism and optimism and sensitivity that tunes them in to the emerging issues that society is going to have to come to grips with. And for an old guy like me, it’s important to come and get grounded in the issues that are happening every day in a place like the UW.
And so for all these years, I’ve taken these opportunities to do this. Bill Gates did the same thing, and he had made a decision that each year in the United States, he would spend one week and do it in an intensive way.
So, when Bill decided to retire, which was actually about three and a half years ago, we started to make those changes, and when he really did step down a year and a half ago, I took on a number of the responsibilities that he had, including running research. I also decided to continue his tradition and dedicate a week to going around and doing that. Last year was the first time; I did five schools in a week, and this year I’m doing four.
I get a chance to do this outside the United States, and so I have similar interactions globally, and this year I’ve had similar sessions in Korea, China, India, and actually one last year in Russia, too. So, I think it’s great to be able to pull all this together.
One of the things that I hope to do in this is spend a little time sharing with you my view of how computing itself is going to evolve, and in that evolution how we’re going to find a way to use it to help solve many of society’s biggest challenges.
The planet is clearly under stress, and I think will continue to be under stress, not just in economic or geopolitical terms, but in more scientific terms as well. Today, the planet has about 6.5 billion people. It’s pretty easy to predict in 30 or 40 years that number will probably go asymptotic at about 9 billion people, and we don’t really have a strategy as a society globally to figure out how we’re going to bring those people sort of online and make them inherently productive, and able to care for themselves.
It’s clear to me that when even the richest country in the world, like the United States, struggles to find a way to provide health care to all of its citizens, there’s no way to tax the 2 billion rich people on the planet to a degree sufficient on a welfare basis to handle another 7 billion. And so something’s going to have to change, and I think the key to that change is going to come from science and engineering.
I think in every generation there have always been challenges, and I think more times than not, society gets from one generation, or one year, to the next on the back of the inventions of its scientists and engineers. And so, being a science and engineering sort of guy, I think that we will see this community step up and do that again.
But the problems are now at a scale that we just haven’t really had to deal with in the past, and I also think that computation, in many forms, is going to be at the heart of being able to harness some of these other incredible technologies, like the new biology and nanotechnology and several others.
But it is interesting, and it was actually confirmed even more in the three visits this week at other schools, that for many people computing almost seems to be passé: hey, it’s really cool, we all have it, it’s in our phones and cars and game consoles; it must kind of be getting old now. And I actually think, as Bill Gates always said, that computing is really in its infancy.
So I think there’s a real risk right now that people become infatuated with some of the other technologies, and I think as a matter of policy it’s going to be important to continue to invest in the evolution of computing, itself, and without that I think we’ll struggle, frankly, to try to find the solutions that we really need in all these other science and engineering disciplines.
There are many policy-related challenges that will ensue from trying to find solutions to difficult problems like energy and environment, climate, health care, and education, but I think ultimately all of those need to be addressed, even simultaneously.
When I think about computing itself, I think there are several changes on the near-term horizon that put more flux in the overall computing system than we’ve had in 30 or 40 years. So, despite the exponential growth of capability in the microprocessor and memory systems and storage systems and bandwidth of networks, I think that some of the biggest changes, and frankly from a computer science point of view some of the more challenging changes, are really in front of us now.
First, it’s important to look back and understand what the things are that drive us generationally from one era of computing to the next. While a great many things contribute, there are really two that move society to a new level of utilization of computing capability. One prerequisite is some fundamental change in the computing hardware itself, something that creates a new level of capability that we didn’t have before. What that does is allow the software people to either raise the abstraction level of what people can do and how they do it, or to take on a class of problems that they hadn’t before. And one of the most important problems to take on is the man-machine interface, moving it generation by generation up to a more powerful form of interaction.
So, today, as I look backward a little bit, Microsoft was sort of born on the back of a pair of these transitions: the arrival of the microprocessor, and then the creation of the graphical user interface and its exploitation, first in very popular applications and now in virtually all applications. That has trickled out into the fundamental way people think about interacting with computers, whether those are now in our phones or our cars or on our desktops.
But I think that that has still been a limiter in terms of the way in which people get benefit from computing, and as we think about another 6 or 7 billion people that have to ultimately get benefit from it in order to get health care or to get educated, I think we’re going to have to move beyond the traditional point-and-click of the graphical user interface.
So, the next big thing that changes is display technology, and with that, we start to have a richer way for people to interact with computers. Clearly, all the science people recognize that the cost of sensors has come down dramatically as well, and so there’s a happy coincidence now where endowing a machine with many of the same senses that people have (sight, touch, hearing, speech) is more and more possible.
The thing that I think is really going to galvanize this change to a new model of human interaction is the arrival, probably around 2012, of high-core-count heterogeneous microprocessors. With those, we can look at a potential performance increase, almost a step function, of maybe a factor of 10 to 100 beyond what we have today.
So that really forces you to ask the question: what are you going to do with all that power? Just running Word, Excel and PowerPoint 100 times faster is probably not enough, and so we’ve been thinking about what that will allow. And while we’ve done a lot of great work in each of these human-like interaction areas, handwriting recognition, voice recognition, it hasn’t really been good enough to be used by most people, and frankly, we haven’t had enough computing power to use them all together.
So, for the most part, where we have introduced them, like on tablets and touch screens, or iPhones or anything like that, we tend to use them as an alternative way of operating what is really still the graphical user interface. And I think one of the big changes comes when we agree to move beyond that and, in a more holistic way, integrate these things together in what I’ll call the natural user interface.
So, our recent computing history was all driven by the graphical user interface on the traditional uniprocessor architecture, and in the next few years we’re going to see these high-core-count capabilities, with heterogeneous acceleration facilities in there, and with that, I think, the ability to move to this natural user interface.
But rather than talking in gory detail about each one of these technical components, I want to talk about how I think the whole system evolves, and then how people might actually use it.
One other big change happening around us is what people now call the cloud as a computing paradigm. If you actually go around and ask most people what they think the cloud is, it’s a bit like asking a group of people in a room, each holding a part of an elephant, what it is they’re holding: none of them can really accurately describe the elephant.
I think we’re going through that phase, in the definition of what the cloud is, but I think of it as the high-scale, often data-driven computing assets that are being built now, and I think that the next big thing will be to think of the cloud and these intelligent clients as one big distributed system, not two heterogeneous things that we’re sort of forcing to talk to one another.
One of the big challenges that we haven’t really stepped up to in the computing field yet is the fact that society now regards all the computing and software things that we’re doing as critical infrastructure. They expect them to work every day, 7 by 24. Yet the history of computing and computer science was really about building tools for people to use. If a tool helped them solve their problem, great; if it didn’t work exactly right, they’d figure it out, or they’d reboot it and start over again. Basically, people aren’t going to tolerate that anymore.
Yet none of the real tools that are popular today, the procedural programming methods and the way that we write and synchronize code for parallel or concurrent execution, are well engineered or architected to deal with these large-scale, distributed, concurrent environments. So, I think that even if we only wanted to do what we’d already done with computing, but now with the entire society depending on it every day, we probably should go back and rewrite everything with a different eye toward these questions.
But of course that won’t be practical, and so what will really happen is we’ll try to figure out how we drive these things on the back of some generational change in what people expect to do with computers. So, the cloud-plus-client environment I think is one which will demand that we think of these very high-scale distributed concurrent problems as a key part of the solution space.
So, to show how I think this will be, I had some people at Microsoft, in research and a prototyping group, build some prototypes. Some of the things I’ll show you today are really working code that we have in research or prototypical form. Some are essentially assemblies of test rigs or other things that we have. Each represents a technology that, at least one at a time, we have working somewhere at Microsoft, in a lab or product-group environment, but I want to show what I think will happen when we start to bring them together.
So let me first start with a demonstration, and this one will be sort of GUI-based. The second one I’ll show you will be more NUI or natural user interface based.
What you see on the screen is a desktop that the people who built it, Microsoft Research Cambridge in England, call a Science Studio. Many people are familiar with our programming environment, Visual Studio, and the question is, how do you build something like that, but for scientists instead of programmers, and how could we give people the ability to couple their insight and scientific activities more directly into one of these systems?
So, to demonstrate what this might be like and why I think it will be important, not just to scientists but potentially, for example, to policy people, over the last two months they worked with some plant biology people at Princeton and built this model.
What you see in the major part here is a map of the world, also cloned in the little one. I’m going to zoom in on it a little bit, and when we run the models we’ll primarily just look at the United States.
On the lower panel you actually see a map of the Amazon. This is essentially a climate model that is trying to help people understand the relationship between the deforestation rate of the rainforest in the Amazon and the overall increase in temperature, in half-degree by half-degree blocks, across the entire planet.
Why is this an interesting and important question? Well, if you look today, as all the governments approach the upcoming meeting in Copenhagen, people talk about policies: are we going to have cap and trade or some other type of carbon tax? What about carbon offsets? Should rich countries pay poorer countries that have rainforests to leave them as rainforests? These are important questions, and ones where the science is not fully baked, and frankly, where policy people today are often being asked to make decisions without a good way to inform them, other than intuition and perhaps running around asking experts what they think.
So the way the team approached this is to basically create a more graphical programming environment. So, first they lay out a logical model of what they want this simulation to be, and this is sort of the block diagram version.
A key idea here is recognizing that many, many of the datasets that you need as a baseline for this have already been computed and are available on the Internet. For example, it starts up here in the corner with the Hadley model, a big climate model that’s been computed across many parameters in the U.K., and you can take any particular part of the planet and look up, for any particular year in the next 100, what they think the climate baselines will be. They brought in a vegetation model and a deforestation model. You link them together; then, based on how much impact they have on CO2, you figure out how much the ocean takes up. You update the model, and you cycle the whole thing back around.
In the past, if you were a science group trying to do this, you’d really have to be as much a programmer as a scientist to put it together. Oftentimes in recent years, when I go around and talk to scientists, whether they’re physicists or chemists or biologists, and ask a group of them working together how many of them are really just programming and how many are really doing the science, it’s not unusual to find that half the people who are credentialed scientists in one discipline have really just morphed into programmers, trying to make this direct translation of their understanding into code.
So, this approaches it in a slightly different way. It builds on some of the work we’ve done in developing a visual programming model, which was part of our Robotics Studio that’s been out for a few years, and it directly addresses this question of how you define and build large-scale, distributed, concurrent computation systems.
So in this what you have is essentially modules. In this case, these are datasets that are out there on the Internet. They each have sort of points that you can use to wire them up, and all the related metadata. You basically build different processing modules or control steps, and you hook all these things together. As you wire them up, you’re basically taking the work of other people, reflecting your own knowledge, and perhaps altering a single part of this, in order to get an answer.
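To make the wiring idea concrete, here is a minimal sketch, in Python, of how dataset and processing modules might be composed and then cycled year by year. Every class, function, and number in it is a hypothetical stand-in for illustration; it is not the actual Science Studio interface.

```python
# A minimal sketch of the module-wiring idea: each dataset or processing
# step is a module, outputs feed inputs, and the whole chain is cycled
# year by year. All names and rates here are hypothetical stand-ins.

def lookup_hadley(region, year):
    """Placeholder for looking up a precomputed climate baseline."""
    return {"temp_anomaly_c": 0.02 * (year - 2000)}

class HadleyBaseline:
    def run(self, state):
        state["climate"] = lookup_hadley(state["region"], state["year"])

class Deforestation:
    """Clear a fixed fraction of the forest each year, emitting its carbon."""
    def __init__(self, rate):
        self.rate = rate
    def run(self, state):
        cleared = state["forest_area"] * self.rate
        state["forest_area"] -= cleared
        state["co2_emitted"] = cleared * 200.0   # illustrative carbon per area

class OceanUptake:
    """Assume the ocean absorbs a fixed share of each year's emissions."""
    def run(self, state):
        state["co2_ppm"] += state["co2_emitted"] * 0.7   # 30% absorbed

# Wire the modules up, then cycle the whole thing around, year by year.
pipeline = [HadleyBaseline(), Deforestation(rate=0.007), OceanUptake()]
state = {"region": "amazon", "year": 2009,
         "forest_area": 1.0, "co2_ppm": 385.0, "co2_emitted": 0.0}
for year in range(2009, 2060):
    state["year"] = year
    for module in pipeline:
        module.run(state)
print(f"forest left: {state['forest_area']:.2f}, CO2: {state['co2_ppm']:.0f} ppm")
```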
When they actually built this, the typical deforestation rate, I understand, was about 0.7 percent per year in the Amazon. So, I’ll run this a couple of years just to get it started. Here we are, probably out to about today. What this shows, on a temperature scale relative to the average temperatures measured in the year 2000, is whether a particular geography is predicted to get a lot hotter. Where you see red, that’s maybe up near a five-degree-C increase; blue would be five degrees cooler.
These are really important, because swings of even that amount have a material impact on growing crops. And so if you look at sort of the breadbasket of the United States, even in this intervening decade, the average temperatures, at least by this model, are predicted to be up slightly. The little white boxes here show how much of the rainforest is essentially being chunked out and eliminated.
And so the real question is, if you’re a policy person, how much would it be worth to you to keep that deforestation rate from going up to 3 percent?
So, maybe you could say, well let’s run it out half a century, and even if you just leave it at the levels it is, it’s clear that in the United States, but also if you look at northern Europe and northern parts of Russia, you’re starting to see some significant changes in the overall temperatures.
So you might ask yourself, what if I was able to reduce this, if I was, let’s say, paying the Brazilians not to cut down the trees? Well, you can actually see that over a 50-year period, if you could really get them to lower that rate, the rainforests would obviously shrink a lot less, and the temperatures would potentially be quite a bit lower.
The temperatures actually vary quite a bit year to year, and one of the things that people noticed when they built this model is that, because no one had really tried to assemble this stuff at this scale before, most of the people who did the modeling just treated the forest as a constant. But the plant biology people said, well, it’s not really a constant. Trees die, and forests actually change over the years.
So, just as an experiment, they said, all right, let’s go and build a forest model. So the plant biology people said I’ll build this, and then you can just substitute it for the constant, and see what happens.
So, here’s another 100-year model that you can run, and it actually takes into account, over each time period, the morbidity and mortality of the trees, and as a function of that, how high the canopy is and what kind of growth lives below it. It turns out that has a material effect on how much carbon is sequestered in that forest, and on when it tends to release it.
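A toy version of that substitution, using entirely made-up rates, shows why it matters: treating the forest carbon stock as a constant versus letting trees die (releasing carbon) while new growth sequesters it gives materially different answers.

```python
# A toy illustration (all rates made up) of constant versus dynamic forests:
# in the dynamic case, trees die and release carbon while new growth takes
# some of it back, so the stored carbon is not a constant.

def constant_forest(years, stock=100.0):
    """The old assumption: the forest holds the same carbon forever."""
    return [stock] * years

def dynamic_forest(years, stock=100.0, mortality=0.02, regrowth=0.03,
                   capacity=120.0):
    """Each year some trees die and release carbon; regrowth sequesters it."""
    stocks = []
    for _ in range(years):
        stock -= stock * mortality
        stock += regrowth * (capacity - stock)
        stocks.append(stock)
    return stocks

print(f"constant model, year 100: {constant_forest(100)[-1]:.1f}")
print(f"dynamic model,  year 100: {dynamic_forest(100)[-1]:.1f}")
```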
So, over time, if you’re a policy person, you end up looking at graphs and models: if you think there’s a level you want to achieve, or you want a different balance between what you think the soil and vegetation components are doing and what you think the oceans are going to do, you get to at least look at these things over some longer period of time.
What they actually did was build a side-by-side model of the Amazon at 0.7 percent deforestation, and as you run it, you can see that the one on the left tends more toward the red, which means it doesn’t hold as much carbon in the forest; blue in this case is good, meaning it’s holding more carbon. And you can see that over time the one on the right tends to stay blue a lot more than the one on the left.
So even though the amount of trees that are cut down is the same, the amount of carbon that it holds is actually indicated to be different, simply because the model is better.
So, I think of this as the beginning of a time where many, many experts will suddenly find it’s a lot easier to come together and collaborate on these things than it was in the past. You don’t have to be able to build or manage your own entire science model, and, in fact, you can share these things across the Internet in an easier way.
So, the next demo that I want to show you is one where I think the way we interact with the computer is going to change substantially.

This one will focus more on natural user interaction, coupling handwriting recognition and speech and vision into how I might operate as a scientist with a workbench environment a bit like this one.
One of the things I think is going to be quite important is the ability to change the whole concept of scientific publication. The traditional model of journals I think is rapidly becoming obsolete. Many scientists that I talk to in universities sort of acknowledge that they frequently now look to online archives and online publication and pre-publications for the most current research, but they still have a dependence to some extent on paper publications for a variety of reasons I’ll just define as legacy.
But I think those legacies are going to have to be reconsidered in terms of what it means to be preeminent in your field. Will people just look at citations and papers anymore, or will we find that something like the synthetic reputation we have on eBay or Amazon ultimately comes to determine what people think the real impact of an individual professor or a research group is on the community?
But setting that aside, I think we’re already on a path where our ability to bring these things together, and allow people to share not just their thinking in the form of words but the models that back it up, and even the datasets that are essential to recreate that work or build a derivative work from it, all needs to come together and be part of how the Internet works.
So, let me go to this next demo. I’ll lose the mouse and sort of take this stylus, and what I’ve got here (I’ll show you two configurations) is a prototype of what I think a more futuristic desktop environment will be, but one where touch and many types of displays will become more commonplace.
In fact, my personal prediction is that the successor to the desktop computer will be the room, and you’ll sort of be in the computer, you’ll have display surfaces around you and in front of you, and that these things will all come together to support telepresence and a lot more visualization capability than even we have today on the traditional displays.
So, this tablet is a high-resolution one that I can write on. For the next demo, imagine that I’m basically an energy researcher trying to study different zero-carbon sources. I just want to start by doing a search, so I’m going to start writing “zero carbon” here. Much as happens today when you start to fill out an e-mail address and it knows who you’ve been writing to and gives you suggestions, increasingly the computer will become more proactive and less reactive. Today, it’s just a great tool, and it’s evolved from a very simple tool to one that’s incredibly sophisticated, but it’s still a tool.
And, in fact, if you look at most people’s computers today, they’re idle most of the time, because they only wait for you to do something, and then they react.
I think that will change a lot, and computers will increasingly try to anticipate what might be of interest to you and anticipate what you’re trying to do and help you complete those tasks.
So here I might just use another gesture, like circling it, to say “that’s OK,” and maybe draw a magnifying glass to say “go do a search on this.”
So, here I might go out and bring back, say, 5,000 documents of many different forms that relate to this.
Now, of course, at that level of resolution it’s not very useful. But the other thing that is increasingly happening is that we have more and more metadata, and the computer’s ability to analyze that metadata at a scale that would be impossible for me gives it the potential to help me more.
So, I might say, “Computer, organize it for me.” Here it can sort and cluster the documents into different groupings, based on research, colleagues that I work with, news stories, and that allows me to start to drill in on the things that are more interesting.
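A minimal sketch of what “organize it for me” implies is clustering a pile of documents into rough groupings. A real system would use much richer metadata and speech understanding; this just illustrates the clustering step, here with scikit-learn on a few made-up snippets.

```python
# A minimal sketch of clustering documents into rough topical groupings.
# The documents are made-up snippets; real systems would use much richer
# metadata than raw text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "traveling-wave reactor burns depleted uranium for decades",
    "reactor core geometry simulation notes from a colleague",
    "wind turbine blade pitch and aerodynamic flow models",
    "news story: offshore wind farm output keeps rising",
]

vectors = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, doc in sorted(zip(labels, docs)):
    print(label, doc)
```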
“Computer, zoom in to research.” So, now I’m getting a little higher granularity activity, and in this workstation we have cameras that actually track me and my position, but they can also start to do eye tracking. So, I can sort of just look around and it will pop up individual documents, and I can stop on one. Happily this one is the University of Washington. It’s amazing how that happens. (Laughter.)
So, you know, in this environment I can use the pen and click in. I might be tracking stuff that’s going on here related to energy research. But I also track a lot of other things. So, I might use my pen or my gaze to scan around here again, and I know that here’s a blog (in this case one we made up, though it relates to real ones out there) called Zero Carbon News. And so I go here to see what people think is interesting and happening these days.
So, I want to look at this nuclear research review, and here there’s a story about Traveling-Wave Reactors. It turns out this story is actually a real one. There’s a company here in Seattle or Bellevue called TerraPower. It was funded and started by Bill Gates and Nathan Myhrvold in a private capacity, and I’m friends with many of the people that are involved.
This to me is a very interesting story. Edward Teller and some people who worked on the original Manhattan Project and built bombs came up with the idea, late in Teller’s life when he was at Lawrence Livermore, that there were other ways to build nuclear reactors, ways that would not exhibit most of the problems that we have with nuclear power today. But no one really had the capacity to pursue them at that time.
But Nathan and Bill and this man John Gilleland and some others realized that in the intervening time since the early ’90s, computing had gotten so powerful and so inexpensive in relative terms that you didn’t have to be a weapons lab to think about these things anymore; you could do it in your basement, so to speak.
So they actually hired a bunch of people and embarked on an exploration of this alternative form of nuclear power. At a personal level, I find the work very exciting and quite promising. It would certainly be a silver bullet for society if it could be built.
So, here I might know that John, in fact, has published a video clip that talks about it, and so I might just make a play sign here, and listen.
(Video segment.)
CRAIG MUNDIE: So, I find this a particularly interesting concept. One, it turns out that what it uses primarily for fuel is the waste that comes out of today’s nuclear power plants, and all the stuff that goes into building their fuel assemblies.
Today’s nuclear reactors only get low single-digit percentages of the energy out of the fuel, which is why we have such a great amount of highly radioactive waste.
These things can get 90-plus percent of the energy out of the fuel, and they burn, as he says, in one pass for about 60 years. So, it’s sort of use once and discard, and the reactor becomes its own burial cask for the little bit of radioactive waste that remains.
So, the waste that we have in this country today, if you could power the whole country with just these kinds of reactors, would provide zero-carbon energy for everything the U.S. wants to do for several thousand years.
This is an example of what I talked about earlier, where I said, look, I think the science and engineering community will step up and offer society alternative ways to solve these problems; but to do this obviously requires many people to get involved and think about how that would work.
So let’s think of this as a place where we were examining this published research, and I want to take it and sort of build on it.
So I’ve got this idea I showed you a minute ago of the Science Studio or sort of a science workbench, and here I might just actually take the diagram as a proxy for all of the metadata underneath this, and drag it and drop it into my Science Studio.
So now what I’ve done is I’ve taken the model and all of the data related to the model, and I’ve loaded it up into my own workbench. So, now I can try to understand this thing better or maybe I can offer them some suggestions.
So here the model has a variety of parameters that are tracked: power flux, burn-up of the fuel. And if I want to understand how does this work over a particular period of time, I might start the simulation, and I’ll say, OK, I want to run from zero to 45 years, and go ahead and do the simulation and let me see how it propagates in there, and what happens at the end.
So here the model actually keeps track of these various parameters, and indicates on the right that three out of four were sort of within the objectives we had but flux was not. So I can look at that and say, “Hmm, all right, we’re below where we wanted. I wonder if there’s a different geometry.”
Now, in this case while these have all been simplified for the purpose of this demonstration, these are actual models that John was good enough to give my group in order to show what it might be like to work on these kind of problems.
So, if I think, just as they did, about having a lot more computing power available, how can I get that computing power to be even more proactive, if you will, in helping me search for solutions in a space where really nobody has ever done it before and there are no intuitions? And, of course, that’s what modeling and simulation are all about.
Particularly when you’re trying to figure out how well it works over a future 60-year period, you can’t wait around and build one and try it, because it takes 60 years to burn through. So, simulation is really the only way.
So, here I might actually take some of these parameters and say I want to have the computer, in fact a lot of computers, do an investigation of different reactor geometries. So, I’ll say, “Computer, run a parameter sweep across five reactor geometries.”
So here it might go to a library, or just randomly generate some variations on the different geometries, load up the different parameters and material properties, and start to compute how the burn would take place, starting in different ways in these different reactor geometries, and try to refine them.
Now, here it hypothetically says this is going to take six hours, and I’m impatient, I’m giving a speech, so I’m going to say, “Hey, got any of that cloud capability out there?” I might be part of a consortium of universities, or the National Science Foundation may provide some of these assets, but I can essentially add a few more computers to this and maybe drop it to an hour, or maybe I get a really big cloud and drop it to 11 seconds; very convenient for a talk like this.
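The fan-out pattern he is describing looks roughly like the sketch below: an embarrassingly parallel sweep where candidate geometries are scored concurrently, with a local process pool standing in for cloud workers. The burn “simulation” is a deterministic placeholder, not real reactor physics.

```python
# A sketch of the fan-out pattern: score candidate reactor geometries in
# parallel, with a local process pool standing in for cloud workers.
from concurrent.futures import ProcessPoolExecutor

GEOMETRIES = ["cylinder", "annulus", "slab", "sphere", "pebble-bed"]

def simulate_burn(geometry, years=45):
    """Placeholder burn simulation: returns (geometry, average flux score)."""
    seed = sum(ord(c) for c in geometry)
    flux = [((seed * (y + 1)) % 97) / 97 for y in range(years)]
    return geometry, sum(flux) / years

if __name__ == "__main__":
    # The sweep is embarrassingly parallel, so adding workers (or real
    # cloud machines) shrinks the wall-clock time almost linearly.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate_burn, GEOMETRIES))
    print(max(results, key=lambda r: r[1]))
```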
In reality I think this is the kind of thing that we will do more and more, and that as you’re able to compute many of these things and then evaluate them across a very large parameter space, you might, in fact, come up with one that looks like it meets your goals, and it may be counterintuitive.
So, this is actually a real output of a run that John’s people have done, and it is one real configuration. Here is a three-dimensional model looking down at a cutaway view of the reactor core for one geometry, and I can take this control and slide it along the zero- to 45-year simulation, and look at the different parameters and how they unfold over the time in which this reactor burns through its fuel.
They are investigating all kinds of different ways to do this, where the geometry dictates whether you light it in the middle and burn outward, start it at the outside and burn inward, or run it along a cylinder. Nobody really knows what the right way to do it is, but we know that the traveling-wave concept looks extremely promising.
So the ability to have other people take this and add their own intuitions, or add more experiments, or add more computing capability is, I think, indicative of the way that we’re going to have to come together to solve some of these super-hard problems.
So, let’s say I’ve done enough on nuclear today, but I’m a zero-carbon guy, so I want to look at wind. So I’ll say, “Load my recent wind farm research.” Here I might be in the business of trying to optimize the power production from a wind farm. So I load up the terrain maps from the Internet, I can put the topography on top of them, and I can put in models of the wind farm as we have historically laid it out.
And we can now I think easily assume that all of these things are instrumented with a lot of sensing capability and the ability to record things over long periods of time. So this gives me live data that I can start to try to play against these models if I wanted to optimize what the wind farm would look like.
Here we can see vectors indicating that the terrain really does affect how the wind flows around these things, and whether there are things that we could or should be doing.
Some people are now looking at the idea of wind turbine blades that actually alter their aerodynamic shape dynamically as a function of wind velocity and other parameters. So there are many complex questions around this.
So, let’s say “Load my flow models,” and so here I’ve been working with some people on a variety of different aerodynamic flow models, and I want to use these things to study one of these and see if we could make it any better.
So, here I’m going to tap one of these to say I’m going to use it as a proxy for the others, and kind of simulate what I think the work environment of the future will actually be like where your desk is actually a sort of control-and-touch surface, and you’ll have many different types of displays available.
Here this piece of Plexiglas has been specially treated and allows me to essentially project images on it, and even though I can see through it, I can also see images that are placed there.
I think of this as a proxy for probably a three-dimensional display that will be available in the next few years where I could really look at this model in a stereoscopic three-dimensional way. And so just as people today go to the movies and are starting to see movies in 3D, I think increasingly scientists and engineers will find that to be a common practice, too.
So I’ll take my pen and just sort of flick this model up onto the screen, and here I want to use gestures as a way to sort of give me a natural way of interacting with this three-dimensional model.
So, here we expect cameras like the Natal camera will be used to do these kinds of projections.
I’ll use my hands and basically say I want to tilt this thing back a little bit so that it’s upright, and I can essentially zoom in on it and stretch it out, and look at the wing and see what the airflow over the wing is.
So the model allows me to change parameters in a couple of ways. In this way, I might actually just circle it and say I want to change the pitch angle from 4.1 to, say, 10 degrees, and that will change the model and tilt the wing differently.
There’s a sort of visual control on there, that blue thing. Here I’ve improved it a little bit, but it isn’t quite what I want.
Next, it’s very hard for me to describe in parametric terms what I think optimal laminar flow over this blade might be, but visually I might be able to determine that, or at least have some pretty good intuition about it.
So, I’ll basically use gestures to grab hold of the control in this environment and start to change some of the parameters of the blade. Clearly that got worse. So I’ll push it back down a little bit until I start to get the thing to be mostly green, and I’ll say, OK, that looks like the best I’ve gotten today.
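That eyeball-it-until-green loop is, in effect, a one-dimensional search over blade pitch. Here is a crude sketch of it; the flow-quality function is a made-up placeholder for a real aerodynamic model, with quality near 1.0 corresponding to mostly green on the display.

```python
# A crude one-dimensional search over blade pitch. flow_quality is a
# made-up placeholder for a real aerodynamic model.
import math

def flow_quality(pitch_deg):
    """Hypothetical stand-in: flow quality peaks near some optimal pitch."""
    return math.exp(-((pitch_deg - 7.5) ** 2) / 8.0)

candidates = [p / 10 for p in range(0, 201)]        # 0.0 to 20.0 degrees
best = max(candidates, key=flow_quality)
print(f"best pitch: {best:.1f} deg, quality {flow_quality(best):.3f}")
```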
At this point you’re given natural ways of interacting in this three-dimensional world, and so I can say, all right, let’s take that and assume that that’s the way we’re going to do wind power on this thing today.
So, I’m going to take that and I’ll sort of drag it back down into the model, and that will apply it uniformly to all the different things, and I can replay this simulation and determine whether, in fact, I got the kind of power output that I really wanted.
This whole question of natural user interfaces and gestures is actually not as far in the future as you might think. One of the things I meant to show earlier, and didn’t, but I’ll run now: in May of this year we introduced a new capability, a camera that allows computers to essentially see depth. Most video cameras just flatten things out into a plane, and if you’re really trying to do things like these kinds of gestures, that’s very hard to make work.
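The reason depth matters for gestures can be shown in a few lines: given a depth image plus the camera’s intrinsics, every pixel back-projects to a real 3D point, which a flat video frame cannot give you. The intrinsic values below are made up for illustration, not the actual camera’s.

```python
# Back-project a depth image into 3D points using the standard pinhole
# camera model. The intrinsics are made-up illustrative values.
import numpy as np

FX = FY = 525.0           # hypothetical focal lengths, in pixels
CX, CY = 320.0, 240.0     # hypothetical principal point

def depth_to_points(depth):
    """Back-project an (H, W) depth image in meters to an (H, W, 3) cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.dstack([x, y, depth])

depth = np.full((480, 640), 2.0)    # a flat wall two meters from the camera
points = depth_to_points(depth)
print(points[240, 320])             # center pixel: roughly [0, 0, 2]
```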
Yet we believe that being able to couple yourself into computer simulations of various forms would be important, and the first place we decided to bring that forward will be in the gaming environment.
So we announced a project called Natal in May at the games show, and demonstrated it to give people an idea of what it would be like when, in the gaming sense, you are the controller. So, we took a video, which I think the team can probably still play for me here, and this is kind of the reaction that people had.
(Video segment.)
CRAIG MUNDIE: So, when we give the beginnings of this world of the natural user interface to people, even in this environment, they get very enthused.
When you think about it, gaming is an example of a place where, if you’ve played enough and you’re diligent enough, you can train yourself to map your movement in the 3D environment onto those little joysticks and things on a controller, but it’s hard for most people. When you give people this kind of capability, it becomes completely natural and immediate for them to operate in this space.
So it doesn’t matter whether you want to play games, or navigate in what you might think of as “first life” instead of Second Life (which is a sort of cyberspace equivalent of the real world), or do this type of scientific endeavor; I think this ability to couple people more directly into what they’re doing, with this natural interaction model, is going to be extremely powerful.
So, when I think about how all these things are coming together, the new microprocessors a few years out, increasing display capability, not just in size and lower cost but in three-dimensional stereoscopic presentation, there is just a huge array of opportunities. And when you couple that with the super-scale computation and data assets that we’ll be able to build, I do think that we will be able to solve many of society’s toughest challenges. But it will only come from a great deal of collaboration on a global basis, and it’s these technologies that I think will make that possible.
So, thank you for your attention, and I’m happy to spend some time doing Q&A now; we have about a half an hour. (Applause.)
END