Remarks by Craig Mundie, chief research and strategy officer for Microsoft
Microsoft College Tour
Harvard University
Cambridge, Mass.
Nov. 3, 2009
CRAIG MUNDIE: Let me tell you a little bit about why I’m here and what I hope to accomplish in the next hour and a half. First, for many years at Microsoft – I’ve been there 17 years – I’ve always had the opportunity to visit many universities, in fact all over the world. And every time I go someplace, I like to spend some time talking to faculty and students, and also to try to give some demonstration of what we think the future will be like in terms of computing.
Increasingly it’s become important to talk about how computing will play a role in solving many of society’s more difficult problems. Engineering and science have always been at the heart of solving society’s problems generation by generation, and clearly in the highly connected world we live in today there is no shortage of big problems to work on.
But I think that it’s going to require a much more multidisciplinary approach to do this than we’ve often had in the past, and that computing will be an integral component of how we ultimately take these next steps.
So, what I wanted to do this afternoon is share with you a little bit about the kind of work that I do and the people at Microsoft do, and then give you some demonstrations of some prototypes that we’ve built that will help you understand some of the changes that we think are coming in the next five to 10 years.
Let me begin by telling you a little bit about my job and why I love it.
I went to Microsoft in 1992 to start working on non-PC computing. Bill Gates and Nathan Myhrvold hired me at the time. Even then they were convinced that we would ultimately see computing put into almost everything, and today, 17 years later, that is indeed coming true.
So, we’ve spent a lot of time working at and thinking about how software and microprocessor technologies will evolve and find their role in these tasks that are not traditionally thought of as computing, and I think that that will continue through time.
I’ve also had the luxury of working in the technology policy arena. For more than 10 years I’ve been Microsoft’s liaison to many of the world’s largest governments, both in the established economies and for the last 10 years also in the emerging economies. I spend a lot of time talking to leaders in China, India, Russia, Indonesia, Thailand, and other places, and it gives me an interesting perspective about the differences in the way that governments and societies look at some of these challenges, and also a good understanding of the different state of preparedness that the various university systems of the world have in readying their populations to deal with these challenges and opportunities.
When Bill Gates decided to retire, which was almost three and a half years ago, we divided his job between Ray Ozzie and me, and the half that I inherited included governance for Microsoft’s research operations.
So, today, we have the world’s largest computer science research activity. It’s about 950 PhDs around the world, who do pure research in the field of computer science and some adjunct areas.
That is complementary to the development work that the company does, and in combination this year I think Microsoft is the world’s largest single commercial research and development organization.
So, we’ve always believed – and Bill believed this for more than 20 years – that it was only the willingness to make this combined investment in long-cycle research as well as the necessary day-by-day product development that would sustain the company in this rapidly evolving environment built around computer technology and in particular software.
So, today, we continue to make those investments, and in fact have sustained that through these difficult economic times over the last year, and we intend to continue to make those investments.
The other aspect of my job that has been quite interesting to me has been to do startups inside Microsoft. One of the big challenges of all organizations as they get large is to continue to try to figure out not just how do you bring forward new technology but ultimately how do you enter new businesses.
So, while many people think of Microsoft as the company that brings you Windows or Office, in fact, the company has become very, very diversified in the course of the last 10 years, entering the gaming business and the server business, and now increasingly different forms of the online service businesses, the mobile phone business, cars, embedded software. So, pretty much everything that we see computing becoming active in, the company has tried to develop some capability for, and we’ll continue to pursue that.
But because of the interest in doing startups, I have a very strong interest in things like health care and energy. That has become increasingly relevant not just in commercial terms, but this year President Obama appointed me to the PCAST, which is the Council of Advisors on Science and Technology, and there we see an important need to bring together these capabilities of computing and the evolution of computing itself, and to intersect them with many of the other science and engineering activities in order to look for solutions to many of these difficult problems.
So, today, this is the second day of a tour that will last the better part of this week where I’ll go across the country and speak at four universities. Yesterday, I spoke at Cornell, today at Harvard, at Illinois tomorrow, and University of Washington on Thursday; and that complements the stuff that I do on a more ad hoc basis when I travel around the world. But usually I’ll give a talk like this at least in China, India, Korea, and Russia in the course of a year as well.
So, the goal is to reach out and learn about what people in the university environments are thinking about and worried about, and also to share a little bit about how we think the technology is going to evolve.
Computing is at a point in time – and in fact, I think at one point the title was up here – I call this talk “Rethinking Computing.” Obviously, we’ve all seen computing expand dramatically in the course of the last 10 years or even longer, and we think, well, we kind of understand it.
But, in fact, it’s at a time where I think the flux in computing overall is as great as it’s ever been, that in the course of the next few years we’re likely to see a fundamental change in the nature of the microprocessor, and with that a demand to make things like parallel programming and large-scale distributed concurrent systems something that is mainstream, not just reserved for a corner of the industry or a part of the scientific community that has focused on computational modeling in the past.
At the same time, these changes will bring a level of computational capability, even very inexpensively, that may be 10 to 100 times more powerful than the machines that you’re using today, and that will likely start to come to the marketplace in about three years, maybe 2012 or so.
So, with that increase in computing capability you have to ask yourself the question, well, what do you do with it, because just making traditional applications run a little faster is clearly not going to harness it to the fullest extent.
And so the other thing we think is emerging is a concept that we call the natural user interface. And this is a time where we will have the computational capability to start to bring together many of the almost human sensing and interaction capabilities that we’ve been pursuing in computer science for quite a few years. We’ve often used them one at a time to augment the kind of traditional graphical user interface that is prevalent today, but increasingly we think that this natural user interface will be a completely different, albeit adjunct, model to the graphical one. And with it we may introduce the capability for literally billions of more people to be able to get a benefit from what this evolved computing environment looks like.
And so let me just stop for a second, and show you a video. It’s one that we took earlier this year in May at the game conference when we introduced a project called Natal. Natal was the codename for some work done by Microsoft Research and our Xbox game business group to develop a new camera technology that would allow us to see and recognize things in a three-dimensional space.
Normally when you think about cameras, they can look into a room or a space, but they just flatten it into a two-dimensional image. So, unlike people who have stereoscopic vision and can have some depth perception, it’s been hard to get computers to have depth perception at the level necessary to do some interesting things. So, a new class of cameras has been produced that essentially gives us that capability.
So, with this we have just one component of this natural user interface, one where people are able to interact with the machine in a completely natural and sort of gesture oriented way.
So, to give you an idea of how people react to that, let me just show you a fun video that was taken earlier this year.
(Video segment.)
CRAIG MUNDIE: So, this is a pretty typical reaction to when people get a chance to experience what it’s like to stand in front of a screen and just start to interact in a completely natural way.
If you compare that to what it’s like to have to pick up a game controller and navigate in a three-dimensional space, kids who do this enough really develop a skill for it, but the average person really has a tough time telegraphing what’s very natural in real life, which is to move yourself around in a 3D environment, into specific actions of a controller. So, you can take people who can’t do that at all, and you stand them in front of this camera, and immediately they have absolutely no problem engaging in things that happen in this three-dimensional space.
So, the question is, how do you take these kinds of technologies, couple them to machine learning and speech recognition and speech synthesis, and the evolving world of new displays, in order to create a completely different model of how people will engage with computers, and that’s, in fact, what we think this natural user interface will be about.
I’ve recently given some talks where I showed what we call the office of the future, and instead of using this kind of technology to play games, we basically put it in a room. Now, in fact, I think that the successor to the desktop computer is the room computer, that in the future you’ll be in the computer, it will see you, it will sense you’re there and other people are there, it will be able to listen and record, and in some sense it will become a lot more of a helper and less of just a reactive tool, and I think that that will be an important evolution.
There are two other things that are going to work to allow this to happen. One is we’re going to see a rapid expansion of the display technologies that are available, up to and including a lot of different types of three-dimensional display capabilities, and the other thing that’s happening now is this evolution toward what people call the cloud.
What I think is going to happen is the cloud emerges by adding programmability to the thing that people have called the Internet. The Internet was largely a publishing medium, the World Wide Web, and yet people clearly recognized that as we build these very large scale facilities in the backbone of the network, it gives us the ability to have high-scale computing and storage capability that would be complementary to what we can do on these devices, the client devices that are near us.
So, the world we see is one where we expect a combination, a hybridization of the cloud computing capability with these evolving client computers, and this client plus cloud system as one thing, not two disjoint things, will essentially be the place where a distributed, concurrent, programming architecture really starts to take hold, because it’s essentially a requirement to make that all work at scale.
So, what happens when you start to think about bringing this cloud plus client architecture to bear on a number of these interesting problems?
So, today, I’ve brought some prototypes that we’ve built. Some of these things are real working code, some of these are essentially simulations or emulations of things that we have working in the lab, but we’ve assembled them into a demonstration to help you look several – you know, five to 10 – years into the future and understand how these things would be used, and how they would help people solve a new class of problems.
So, because it’s a hot topic today around the world, I chose two areas, one which is sort of environment and climate as a question, and the second is energy, and I’ll show you two different demonstrations that relate to how we think scientists and engineers might engage in those things in the future.
So, here I have, let’s say, my workstation of the future, and what you’re actually looking at is a scientist’s workbench, essentially built by the Microsoft Research people in Cambridge, England, and they call this Science Studio.
For many years, we’ve offered a product called Visual Studio that is for programmers to essentially assemble codes and debug them and operate them, and these guys recognized that scientists really want to have a way to assemble large-scale simulations, but they want to write a lot less code. And with the data assets that exist at scale in the Internet, there needs to be more of a way to bring these together and compose them.
So, they built this modeling environment, and let me just explain what you’re seeing.
In the main pane we just have a map of the world, which is sort of duplicated in this upper left pane. Just for this demo I’m going to zoom in on this part of the map, which is the United States. And this other pane is essentially just homed in on South America.
What I asked them to do was to think about how we would use these kind of tools to build a model that would be interesting to both scientists and policy people who have to address some of these questions, for example, what is the linkage in the environmental sense between the activities and policy choices that are being made, for example, in South America regarding the rainforest in the Amazon, and how does that ultimately affect the climate over a period of a century in the United States, but also the rest of the world.
And so to do that, you basically start to build these models. So, you essentially can make a graph that shows logically what the model structure is, and that’s sort of depicted in this diagram where the disks are sort of datasets and the boxes are things that you want to process against that dataset.
Their goal was to ingest a lot of things, like this Hadley climate model that was done in the UK by the Met Office there, and then tie it to a vegetation model, and then add a deforestation model, which had not actually been done before, and then run it through the rest of this system and determine what do you think the CO2 levels are in the atmosphere, and then how is that going to relate to the climate.
So, the way you can take this and draw it up is essentially by not writing a lot of code, but by wiring up a data flow diagram where the boxes each represent either a dataset or some processing element, a sub-model component, and by just literally graphically dragging them around and making the connections, you’re essentially building a large scale computational model of this system.
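The composition model being described here – datasets and processing elements wired into a data-flow graph rather than hand-written code – could be sketched along these lines. The node names, sub-models, and every number below are invented stand-ins for illustration, not the actual Science Studio components:

```python
# A toy data-flow graph: each node is a dataset or a processing step,
# and edges carry a node's output into its successors.
class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def run(self, cache):
        # Evaluate upstream nodes first, memoizing shared results.
        if self.name not in cache:
            args = [n.run(cache) for n in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

# Hypothetical stand-ins for the real sub-models in the demo.
climate = Node("hadley_climate", lambda: {"baseline_temp": 14.0})
vegetation = Node("vegetation",
                  lambda c: {"carbon_gt": 550.0, **c}, [climate])
deforest = Node("deforestation",
                # 0.7%/year forest loss compounded over a century
                lambda v: {**v, "carbon_gt": v["carbon_gt"] * (1 - 0.007) ** 100},
                [vegetation])
co2 = Node("co2_level",
           # toy conversion of lost forest carbon into atmospheric ppm
           lambda d: 380 + (550.0 - d["carbon_gt"]) * 0.47,
           [deforest])

result = co2.run({})  # run the whole wired-up model once
```

Dragging a new box into the diagram corresponds to adding one `Node` and rewiring an edge – which is the point of the workbench: the domain expert changes the graph, not the code inside each module.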
And the idea is that it brings together a way of more naturally allowing people who have domain expertise to link together their understanding of the problem with very, very specific knowledge or understanding that people might have in casting to specific programs that represent the modules that you can bring together or adjust yourself.
So, they started with this sort of visual workbench model about two months ago, went out and worked with some people at Princeton who they knew who were sort of plant biology people, and said, you know, let’s see if we can build this model.
So, while it’s fairly coarse, they actually have a hundred-year simulation that takes in all of these datasets and combines them.
So, down here we have a scale, which is the deforestation rate in the Amazon. Today, that’s actually about 0.7 percent per year, and the question is how big an issue is it if you leave it at that level: should Brazil, or the rest of the world working with Brazil, incent them to perhaps not cut down so many trees, or what would happen if you just let this thing be governed by development interests, and it got to be a much higher level?
Let me just start this up and run it a couple of years so you can start to see what happens.
What you’ve got is a scale, shown in the top right corner, covering every half-degree by half-degree block on the whole planet: an estimate of each block’s average annual temperature relative to the recorded average temperature in the year 2000, where green means a block is near that baseline. So, as things get hotter you tend toward the red, and as things get cooler you tend toward the blue.
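That color scale is just an anomaly mapping over the grid. Here is a minimal sketch of the idea – the grid size, baseline values, warming pattern, and thresholds are all invented for illustration:

```python
# Each half-degree block gets a color from its temperature anomaly
# relative to a year-2000 baseline; all numbers are illustrative.
def anomaly_color(delta_c):
    """Map a temperature anomaly (deg C) to the demo's rough scale."""
    if delta_c > 0.5:
        return "red"    # warmer than the 2000 baseline
    if delta_c < -0.5:
        return "blue"   # cooler than the baseline
    return "green"      # near the baseline

# A tiny 5x5 grid of blocks keyed by (lat_index, lon_index).
baseline = {(lat, lon): 14.0 for lat in range(-2, 3) for lon in range(-2, 3)}
simulated = {cell: temp + 0.3 * abs(cell[0])  # warm more at high latitude
             for cell, temp in baseline.items()}

colors = {cell: anomaly_color(simulated[cell] - baseline[cell])
          for cell in baseline}
```

The real model of course computes the simulated temperatures; the visualization layer only needs the per-block difference from the recorded baseline.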
And, of course, this is a real issue for policy people. In fact, one of the discussions that we’re actually having in Washington as part of this science environment is the question of carbon offsets, and what role do they play, how should they be modeled, can people really understand and predict what their effect is going to be.
Today, whether you’re a policy person around the environment at a domestic or international level or a scientist, these are questions that are actually quite difficult to answer, particularly when you’re trying to look out so many years in time.
What you can see in the global map up here is sort of how things are happening. Even at this intervening few years, Europe has gotten a little warmer than it was in the year 2000, and you can see the white boxes here are chunks of the forest that essentially are basically being eliminated by these development activities.
Your ability to slide this scale around and look at it over any given period of time and at any particular given rate of deforestation becomes quite interesting. If you say, well, if the deforestation rate gets a lot higher, what happens? So, if you move it up to say 1.5 percent, you can see that the center of the United States nominally 40 or 50 years from now is pushing toward 5 degrees warmer on average.
This could be a real issue. If you believe that the crops that we all depend on for food grow in that part of the country and are actually not very happy if the average temperature gets several degrees warmer, then unless you then assume that we’re going to do something else like genetic modification of the crop, what’s likely to be the impact on the food supply, both for the U.S. and for the rest of the world? And, of course, you can look at some of the rest of these maps and realize that northern parts of Western Europe and Eastern Europe and Russia are all warming up, too, and, in fact, the higher you are in the latitudes the more telling the effect becomes.
You can see that a huge chunk of this rainforest would actually have been eliminated, and therefore your chances of deciding to look back and want to fix it later get to be a lot harder.
You know, if you obviously lower the rate to some lesser level, you lose less of the forest, and you have clearly less impact on the overall climate.
And so we have the ability to do this by combining some pre-computed models – this happens to run on an eight-core machine that’s right here – and composite these things together pretty much in real time.
Now, clearly here we’re only linking one thing, the Amazon, into a general model and then computing its effect, and, of course, this problem is going to be far more complicated than that. But even to be able to build a model like this with real datasets and be able to give people a tool to do this in two months from a standing start by people who historically were not experts in this field, you know, I think is a way of thinking about what will happen when we give these much more powerful tools to people.
One of the other things that is kind of interesting is as they built this, they understood that mostly people who do these models assume that forests are actually constant, but, in fact, they knew from talking to the plant biology people that, well, forests really aren’t constant.
So, they decided to go take another model that was in development, which is shown here as a sub-model of this thing, and ask: if you actually implement a function for the mortality of the trees, how does the carbon sequestration that happens within the tree itself vary as a function of the age of the real underlying vegetation?
So, they built a model here where as you run this and look at it year by year you realize that the forests over time actually change their shape and composition quite significantly. And when you have a lot of young growth, it doesn’t actually hold as much carbon, as I understand it, and as you get older growth it tends to hold onto more of it.
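The qualitative behavior described – young growth holding less carbon, older growth holding more – is often modeled as a saturating curve of stand age. This is a toy sketch of that shape; the function form, rate constant, and carbon values are assumptions for illustration, not the Princeton model:

```python
import math

def carbon_per_hectare(age_years, max_carbon=300.0, k=0.03):
    """Saturating carbon stock as a stand ages: young regrowth stores
    little, mature forest approaches a ceiling. Units are illustrative
    tonnes of carbon per hectare, not calibrated values."""
    return max_carbon * (1 - math.exp(-k * age_years))

young = carbon_per_hectare(10)    # recently regrown stand
mature = carbon_per_hectare(150)  # old-growth forest, near the ceiling
```

Swapping a constant-forest assumption for a curve like this is exactly the kind of sub-model replacement the workbench is meant to make cheap, so that the side-by-side comparison that follows becomes a two-month exercise instead of a rewrite.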
So, the question is, well, if this was actually a better and more accurate model, and you went back and put it in the first one, would it actually make a significant difference in the outcomes.
So, they did that, and this is a side-by-side comparison of the two models. In this particular case what you’re comparing is the annual forest carbon – blue is good in this case and red is less stored – and you can essentially do the same thing: run this model over different parts at different deforestation rates – in this case I think they ran it at 0.7 percent – and begin to see year over year how the model actually changes, and whether it’s material.
You can see that there are actually some significant differences in the out years, particularly as the overall size of the remaining forest changes, that one model indicates that it’s going to be not so good and the other model indicates perhaps it’s not as much of a crisis as people might expect.
So, this just shows that there’s a real incentive to get a lot more people thinking about the details of these complex models, and beginning to bring forward the world’s collective knowledge about these things and put them together.
So, creating this kind of workbench where people can share these models, can essentially alter them and play with them I think is a way of accelerating understanding and developing better capabilities in this space. And even if you start with coarse models like this, arguably it may be better to help inform people who have to make policy decisions in the next few years than just telling them, well, all you’ve got is your intuition about whether you think these things are really going to be important for the country or not.
Like so many other things, like giving people spreadsheets, having these tools gives you all kinds of new ways to visualize. Here you can tell it to run a family of curves at different deforestation rates and see what happens. So, as we get more and more computing power, it becomes more and more powerful to look at these things.
On the right you can see a curve that says, well, even though temperatures and CO2 vary year by year, are there trends that you can look at over time, and here you can see there are two major components to the model, the soil component and the vegetation component, and you can see how they start out in relative terms and when they cross over, over the time period of the simulation.
This is one example of creating a new model of programming in a sense where it’s sort of doing for scientists in a large scale what Excel did for businesspeople 20 years ago, which was give you a way of expressing problems that’s more natural to you, and without having to go learn about writing programs in a traditional sense. I’m quite enthused about what this implies, and we’re going to continue to engage with people in the science community around that.
Now, to do this demo I still used a traditional sort of graphical user interface and a point and click model, and I want to show you a little bit about how we think you might move into this space as we are able to add these natural user interface components where we can do a variety of things that include speech recognition and eye tracking and gestures and handwriting recognition, and you bring all these things together in order to give people a new way of interacting with the computer.
So, the second demo that we’ve got is one that is focused on energy. I’m going to start and go through a series of steps where I’m basically doing research and looking for information about the energy topic in question, and then I’ll show you two different kinds of model and how we could play them into this kind of visual working environment in order to make this all come together.
So, if I swap my mouse here for a pen, the desk here is outfitted with some cameras and microphones. The touch surface that is in front of me, which I’ll also show you in a lying-down configuration, is a high-resolution one that is sensitive to the pen and stylus. And the little icons at the top sort of indicate whether it hears me speaking and to some extent when I get within range it may indicate it sees my face.
The whole idea here is to begin to explore what a natural user interface would be like, and also to presume what would happen when the computer begins to have context to have more memory and to bring those things forward to improve the interface.
So, for example, let me just start and write – since I’m pursuing zero-carbon research, I can start to write that, and it says, OK, that’s probably what you’re doing. So, just like today it helps you fill out the names on your e-mails, there’s no reason that given appropriate context it can’t anticipate what you want more, and just finish things for you. So, I’ll just circle that as a gesture to indicate that it’s correct, and I’ll say I want to do a search on that.
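The completion behavior described here can be thought of as prefix matching over what the system already remembers you work on – the same mechanism that fills in e-mail names. A toy sketch, with an invented topic list:

```python
# Context-driven completion: match a partially written phrase against
# topics the system has seen the user work on. The list is invented.
recent_topics = ["zero-carbon research", "zero-day patching",
                 "wind farm siting"]

def complete(prefix, candidates):
    """Return the candidates that start with the written prefix."""
    p = prefix.lower()
    return [c for c in candidates if c.lower().startswith(p)]

matches = complete("zero-c", recent_topics)  # -> ["zero-carbon research"]
```

The interesting part in the demo is not the matching itself but the context: the richer the machine’s memory of your activity, the shorter the prefix it needs before it can finish the thought for you.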
So, I’ll go out and basically use the Internet to collect a set of documents that relate to this or that I’ve been accumulating regarding this. And, of course, at this scale and resolution that’s not very interesting.
But I might say, computer, organize it for me. And we can take the metadata that’s there and cluster these things around different parameters that might make it easier for me to navigate, clustering around research or publications that are relevant, news stories, colleagues’ work.
So, I’ll say, computer, zoom in to research.
So, here you get another map of a lot of these documents, and I can do eye tracking now to basically just sort of look around and look at the screen, and each time I look at a different place, it sort of pops up one, and we’ll end on this one.
So, here’s a document. Happily it’s from Harvard. (Laughter.) And I might basically just click on it with my pen to open it. So, I might read about what’s going on in terms of the views around energy at Harvard, and then I’ll say, OK, well, I want to go in and I’ll use my pen or my eyes and I can look around some more.
But there’s one over here that let’s say I know about. This one we made up hypothetically, but you might call it the Zero Carbon News.
This might be a blog that I track regularly to see what’s happening. In this case there actually is an article here that I’m familiar with, and in particular – and this one is not actually made up – there’s a story here about a Traveling-Wave Reactor. In this case this is actually a real company in Seattle called TerraPower that’s actually funded and put together by Bill Gates and Nathan Myhrvold and some friends of mine, and John Gilleland is the CEO. They actually are designing a new type of nuclear reactor.
Some of the concepts for this reactor were developed by Edward Teller and some people back in the national labs in the early ’90s, but no one really pursued it. In part it wasn’t part of the traditional energy or weapons programs, and at the time startup companies didn’t have access to enough computing power to go and work in this space. But as things have evolved, a small company like TerraPower has been able to get enough computing power together to begin to explore this alternative form of power.
So, I know John has posted a video. I might just make a gesture on the screen to say go get it and play it.
(Video segment.)
CRAIG MUNDIE: So, personally I find this a really exciting piece of work in that they’ve been able to take a concept that no one had really explored adequately and convert it into an idea for how to create nuclear energy that doesn’t have many of the traditional problems around proliferation. There’s no fuel cycle, there’s no maintenance, there’s essentially no refueling requirement, and as input it actually consumes the waste fuel, the waste that comes out of existing reactors.
Today, most reactors, as I understand it, only get a small, single digit percentage of the energy out of the fuel, and it’s why we end up with such a large amount of high energy waste or highly radioactive waste. So, this particular reactor is able to take that as its fuel and burn it, and it basically consumes almost the entire fuel. And that’s why, as John said, they’ve calculated that if you could power the United States with these kind of reactors, the waste that’s sitting around in casks waiting to find a home would basically provide electricity for the United States for the next several thousand years.
So, these are the kinds of technologies that I think are somewhat silver-bullet kind of things, and where scientists and engineers and computer scientists have to come forward, explore them, and if we can make them work, then of course they represent a real discontinuity in the quest for high-scale, zero-carbon energy sources.
So, if I was working in this field, the way we think that things will work in the future is a diagram or a document like this is not just sort of the ink on paper or the images, but in fact it probably links to and has underneath it the actual models that the community perhaps shares.
In this case I’m going to take this model and I’m going to drag it over here to my science workbench, and sort of have it ingest all of the things that the TerraPower people have made available to their collaborators or people that want to work in this space.
So, I might have different outputs or controls on this model, burn up, flux, power output, and if I want to understand how it works, perhaps understand what the wave propagation is in this particular cylindrical form, I can say, OK, I want to run this from zero to 45 years, and go ahead and run that simulation.
So, it does that. I can observe what happens, and it can be checked against a certain set of parameters that I might be interested or might be an expert in understanding.
In this case the model basically says, you know, there’s a certain set of these things that are out of spec relative to the goal, in this case flux. And so I can say, well, what was the flux, and you can look at the parameters.
You can look at essentially the parameters that are variable in a model like this, and I can say, OK, look, I really don’t know what the answers might be, but how can I use these things to make that better.
So, as we have more and more computing capability, our ability to have it do a lot of the heavy lifting or explore these spaces computationally gets better and better.
In this case I might say, computer, run a parameter sweep across five reactor geometries. So, it can go out and load these things up, and create different geometries that we might have indicated look promising, and go out and start to run the same kind of computations.
Now, in this case it could take a long time. Here it says, well, maybe it takes six hours. And you can say, well, I really don’t have that long to wait today. So, I ask, do I have any other computing that I can add? We’ve got these cloud facilities that people are putting up and making available to support this kind of research. So, I might bring another little one online and say, OK, that might reduce it to an hour. I’ve got this really big one. If I bring it online, it gets it down to 11 seconds, conveniently short enough for this speech.
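The elasticity in this demo is simple arithmetic: an embarrassingly parallel sweep divides its total work across however many machines come online. A minimal sketch, with hypothetical run counts and per-run costs chosen to reproduce the six-hours-to-eleven-seconds scaling (none of these are TerraPower figures):

```python
# Minimal sketch of elastic cloud scaling for a parameter sweep whose
# runs are independent, so ideal wall-clock time divides by machine
# count. Geometry names, run counts, and costs are illustrative.

GEOMETRIES = ["cylinder-A", "cylinder-B", "annular", "tapered", "segmented"]
RUNS_PER_GEOMETRY = 1200   # hypothetical points in the sweep
SECONDS_PER_RUN = 18       # hypothetical cost of one simulation run

def sweep_wall_time(machines: int) -> float:
    """Ideal wall-clock seconds for the full sweep on N machines."""
    total_runs = len(GEOMETRIES) * RUNS_PER_GEOMETRY
    return total_runs * SECONDS_PER_RUN / machines

for machines in (5, 30, 10000):
    secs = sweep_wall_time(machines)
    print(f"{machines:>6} machines -> {secs / 3600:.4f} h ({secs:.0f} s)")
```

With these numbers, five machines take six hours, thirty take an hour, and ten thousand take about eleven seconds; the point is that the sweep's shape never changes, only the resource count.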
So, the idea here is to think what if you actually could run models like this at very large scale, and make conclusions from it.
So, in this case what we actually did do is take a model that was the output of a computation like this from the TerraPower people and they loaned it to me. So, here I can actually look at this model and tilt it as a 3D image. In the future, I think you’ll see more and more 3D displays, and I’ll talk more about that in a minute.
But really what’s been loaded into this is the output of a simulation run like the one I just simulated. I’ve got a control that allows me to look at a visualization over the lifetime of the reactor, look at the different parameters in terms of burn-up and flux and power output, and understand how they would operate this particular reactor geometry: whether you light it only in the middle and burn outward, or light it at both ends, and what kinds of things would happen in each case.
So, it really brings visualization to the fore in terms of helping people understand these concepts, and I think the key to doing this is going to be having these very large computational capabilities.
But, of course, people are looking at other models for energy, and so let’s go on and look at a different one, which would be related to wind.
So, I may be a different person, in this case a researcher trying to understand the state of wind farm research. So, I can say, computer, load my recent wind farm research.
Here it goes out and gets a terrain map off the Internet, annotates it relative to the terrain and topography, and then places on it a set of wind turbines that are already there. My job as a mechanical engineer or an aerodynamicist might be to try to understand whether there is something we could do in this particular location that would improve the output of these turbines.
So, what we can assume is that everything is getting instrumented with sensors, and this both gives us the ability to record history, for example, the wind patterns that tend to flow over this particular topography, and how they intersect the wind farm, and it also gives us, at least in theory, the ability to have turbines and blades that actually might alter their aerodynamics dynamically as a function of the wind that’s available.
I might want to look at different models. Computer, load my flow models.
So, here I might have a set of these that I’ve developed myself or gotten from other people. In this case I want to look at the aerodynamics of the blades on this wind turbine and understand what happens as the wind intersects them, and what its impact is on that.
Let’s say I want to take this one turbine and use it as a proxy for the rest, and think about how I can alter that or see if there’s a better combination.
In this case I’m going to take this tablet and lay it down. I think that what will happen in the future is that people’s work surfaces will actually be built where they have a set of displays that are these type of vertical ones. Increasingly I think these will be three-dimensional in terms of their capability to display. And all the surfaces of your workspace and even the walls will increasingly be display surfaces and have input and sensing capability, and we’ve been working on that.
So, I’m going to take this and essentially flick this model up, and put the wind turbine up on this other display, because what I want to do is use this and my body as a control mechanism, and then make some adjustments.
First, I’ll tilt the thing up. So, I’ll make a gesture that rotates it up so I can see it. I can take my hands and essentially say I want to zoom in on one of these blades to understand it, and then see the calculated wind patterns as they go across this blade.
So, here on the model I have the geometry of that blade, and I might want to understand if I change that geometry what it might be.
Here I’ve got a pitch angle of 4.1 degrees. I can say I really want to change this angle, so I want to cross that off and write 10 degrees there, and say, OK, let the model take that input and change the pitch angle of the blade. Now, I can see that it alters the wind flow in some way, but it isn’t exactly what I was hoping for.
One of the things I think is powerful is the ability for people to interact with models and make a judgment, an engineering judgment that may be very hard to describe if you had to write it down parametrically, but if you can observe the whole thing, the visualization may allow you to make a judgment that is otherwise difficult to describe.
So, in this case I’m going to use my hands as a way to essentially alter the geometry of this and see how it affects the laminar flow of wind across the blade, and hence what the lift is, and try to find a better configuration.
If I grab hold of this model, I can put a control on it and lift it up a little bit, and I can see, well, that actually made it worse, and I’ve got more red.
So, I’ll basically say, OK, I want to push it down a little bit, and I’m basically just trying to watch to see when I get it to a level where it’s mostly green. So, I can say, OK, I like that and I can leave it there.
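The loop being acted out here, nudge a parameter, watch the surface, reverse course when it gets worse, and stop when it's mostly green, can be sketched as a tiny feedback loop. The quadratic "green fraction" objective below is a stand-in I invented for illustration; real blade aerodynamics would come from a CFD model:

```python
# Toy sketch of the interact-and-judge loop from the demo: nudge the
# blade pitch, re-score the flow, back off when it gets worse, and stop
# once the surface is "mostly green". The objective is a stand-in.

def green_fraction(pitch_deg: float, best_pitch: float = 7.0) -> float:
    """Hypothetical fraction of the blade surface with good (green) flow."""
    return max(0.0, 1.0 - 0.05 * (pitch_deg - best_pitch) ** 2)

pitch = 10.0            # the value written on the tablet earlier
step = -0.5             # "push it down a little bit"
score = green_fraction(pitch)

while score < 0.9:      # keep nudging until it is mostly green
    trial = green_fraction(pitch + step)
    if trial < score:   # "that actually made it worse"
        step = -step / 2            # reverse and take smaller steps
        trial = green_fraction(pitch + step)
    pitch, score = pitch + step, trial

print(f"settled at pitch {pitch:.2f} deg, green fraction {score:.2f}")
```

The human in the demo is supplying the judgment step, which is exactly the point made next: some judgments are easier to make by looking than to write down parametrically.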
And so what I’ve been able to do is find a way to, just as we were doing in the Xbox game, you’re sort of coupling yourself into the game, here you’re essentially coupling yourself into the simulation in a fairly natural way.
And if I think that that’s OK, that I want to do that, then I can essentially gesture and drag this model back down to the tablet, play the whole wind pattern against it, and decide whether or not I really got the kind of result I was looking for.
These kinds of things, modeling and simulation and the ability to interact and to share, are, I think, really going to be an important part of how science and engineering gets done in the future, and they may, in fact, be a way we can couple enough of the world’s IQ into some of these difficult problems to really yield some answers.
So, with that, that’s the last of the demos I’m going to give. Just in summary, in terms of the formal remarks, I’m extremely optimistic now that we have in front of us in the next, say, 10 years a number of fairly dramatic changes in computing. We’re going to see a dramatic increase in the capacity of a local device, and with that, I think, the arrival of this natural user interface concept.
We’re going to see improved communication capability, both wired and wireless, and the emergence of these very, very high-scale computing facilities, also known as the cloud. And by coupling them together in a new distributed, concurrent programming architecture, I think we’re going to be able to harness them for a set of tasks that so far we haven’t been able to do. I think we’re going to see the arrival of new display technologies, including three-dimensional displays and very large scale displays. Not just like the one that’s up here, which is fairly low resolution, but pretty high-resolution things that you might be able to just paste on your walls, or that will be an integral part of the work surfaces of the future.
And so, the desktop will be replaced by essentially a computer that you’re more in than in front of, and with that I think people will find novel ways to couple their intellect into some of these problems.
And then, I think that there are some research problems in computer science. Many of these problems, I think, can’t be computed even at the scale of these very, very large data centers. And so pursuing things like quantum computation, which is also beginning to look enticingly possible, I think represents a really interesting way for us to develop a capability that is literally a quantum leap beyond what we can do even as we envision these other facilities today.
But how to link those things together, and the implications of how you would program them, feed them data, and take the results back, nobody is really working on these now. So, part of what I want to encourage both faculty and students at these institutions to do is to begin to realize that there is a discontinuity coming, and it isn’t that far away. We can certainly see three or four of these things on the horizon, no question, and I think their disruptive potential is high, both in the sense that what we’ve been training people to do doesn’t adequately prepare them to use these tools, and in the sense that it may give us a set of tools to solve some of society’s toughest challenges that we historically just haven’t had.
And so, I want to encourage all of us to think about that, and collaborate together to try to prepare for this and solve some of those really hard problems.
Thank you very much. (Applause.)