Craig Mundie: Cornell University

“Rethinking Computing”
Cornell University
Craig Mundie College Tour
Phillips Hall, Cornell University – Ithaca, New York
November 2, 2009

CRAIG MUNDIE: Thank you very much. Good afternoon, and thanks for coming to join me this afternoon.

This session is one, actually the first of four I’ll do this week, and each year either Bill Gates or I for many years has gone around and visited some university campuses. Each time we go, try to meet with faculty and some students and give some lecture like this in order to share with you some of the ideas we think are going to happen in the computing world and its applications.

As the dean mentioned, I have a very interesting job and one that I really love doing, because it combines sort of the best of computer science at the bleeding edge with many of the business challenges, just because Microsoft is sort of a big and international company, and then also increasingly the policy and societal issues that I think will be increasingly important on a global basis are certainly the focus of a lot of the policy work that I get engaged with, both with the Obama administration and many of the other governments, in particular China and India.

I think as all of you graduate and have to leave this institution and go out and think about what you’re going to do, I hope to convince you that a lot of the challenges that the global society faces are ultimately going to require a lot more focus on the part of engineering and scientists collaborating in a multidisciplinary way with many people from the other life sciences and other disciplines. So, the demos that I’ve put together today will hopefully let you understand a little bit about how I think that will happen.

So, let me start first by talking a bit about how I think computing itself is going to evolve. All of you in a sense know about computing. You’ve grown up increasingly with it in many different aspects of your life, not just on a laptop or PC but increasingly on phones and cars, game consoles, televisions; all of these things are computers. Fifteen, 20 years ago, when I started working in this space that was a pretty avant-garde idea. Today, for all of you it’s just accepted as the way it is and the way it’s going to be.

But all of the computing that we know today has been built out of the same basic sequential machine architectures that were invented by Von Neuman and others, and built in this sort of exploding volume environment by the arrival of the microprocessor.

Not much has changed in that regard. We’ve changed the abstraction level of programming, making it higher and higher level, we’ve given people better tools, but at the end of the day we still tend to think about the architecture of the machines the way they’ve been for several decades.

But all that is really going to change in some fairly profound ways. In about 2012 the microprocessor industry will make sort of a big right turn, and all the chips will essentially begin to be heterogeneous architectures, high core count microprocessors, and that brings with it a whole host of both challenges and opportunities that the computer science world has not really had to embrace in a wholesale way.

There have been components of the community that have had to do this, particularly the high performance computing or supercomputing cadre, and those who’ve focused on high scale scientific or computationally oriented problems, but by and large we’re entering a phase where the data, our ability to sense and collect data and to organize it, is as much a driver in the world of engineering and science and certainly in the world of business as modeling, computational modeling was in the previous decades.

So, we really face a challenge now of trying to figure out how the challenges of parallel programming and large-scale distributed systems become mainstream, not just a corner of the computing industry and computer science environment.

Another big thing that we’ve all gotten very used to is the graphical user interface, the GUI as we call it. There are really two things that really drive the industry forward in these major cycles. One of them is when we get a fundamental hardware change, like the arrival of the microprocessor. Large scale broadband connectivity environment is sort of a hardware change that has brought us the Internet. And the other thing that drives it is a change in the paradigm of use, the model by which people interact with computers.

So, in the early days of computing it was all a text mode interface, and along came the graphical user interface, and that really drove the world to a level of acceptance of computing as we all know and use it today.

And the question is, what is it that succeeds the graphical user interface, and in part what we think that will be is called the natural user interface. And in this environment the computer becomes a bit more like a person. It develops the ability to see and to listen and to speak and to understand. With those capabilities we expect that we’ll be able to let computing embrace or be embraced by literally billions of people today who don’t get to use it.

So, I just thought I’d start today and help you understand a little bit about what this natural user interface is like, show you a video about one component of it, which is the camera that we announced in May that we’re going to introduce next year some time as part of our Xbox gaming system. This is one step of bringing a different model of human interaction to computing. So, let me run that for you.

(Video segment.)

CRAIG MUNDIE: So, this camera is one that was invented at Microsoft in the combination of efforts between our research groups and our Xbox gaming group.

The difference between these and a traditional camera is when you think about any normal video, it just takes everything it sees and it just flattens it into a 2D image. Of course, when we see with stereoscopic vision through eyes, we can sense depth. Today, computers don’t really have a good way to do that.

So, this is a particular technology that allows the computer system to see and understand things in depth in real time. So, this camera will actually allow in its first incarnation four people simultaneously to be standing in a room, and have all of the major joints, 22 major joints computed into a skeletal model in real time. So, you can then map those activities into anything you want. You can animate a character on the screen, you can essentially drive a car, you can bat balls, you can do pretty much anything that people want to imagine; so people are getting very excited about that, but it’s only one component of trying to think about how will people in general interact with this.

Recently, I’ve given a talk where I talk about the office of the future. In my view, what will happen next to fixed computer, which is sort of the successor to the desktop, is that the computer will be the room and it will be outfitted with cameras like the one that we just showed here on the Xbox, as well as array microphones and many other components. Through bringing these together and these new high performance microprocessors, it will create an environment where we really will move beyond the graphical user interface.

Today, we’ve experimented for quite a few years with handwriting recognition and touch of various forms and speech recognition, but in reality each of these has been either too computationally intense to do really well or too computationally intense to do multiple at a time in real time, and yet that’s what you want if the computer is going to take on the properties more like a person in terms of how you interact with it.

But I think that this movement toward a natural user interface is going to be critical if we’re going to be able to get computers to be much more of a helper for people and less of just a tool.

Today, the computer, even though it’s extremely robust, is still a tool. It’s largely reactive or responsive. You can do your apprenticeship and learn a lot about it and do some amazing things, but at the end of the day it’s still a tool.

So, the question is, how do we use this evolution to the natural user interface and all this computing power to move the computer to be more of a helper, to be something that is more proactive than reactive, and I think that all of those things are in the cards. They do represent a real discontinuity from the world that we know today, but one that I think is really important.

Today, the world faces a great many difficult challenges in energy, environment, health care, education, and that’s true whether you look at the richest countries like the United States or some of the poorest countries in Africa, for example. So, no matter what government you go to, they all want to know, how are we going to solve these problems.

Today, the planet has about 6.5 billion people. It’s pretty easy to forecast that over the next 30 to 50 years that number will probably go asymptotic at something around 9 billion people. And the question is, with all those people arriving, there will be more stress on the global ecosystem, and certainly more of a requirement to find a way to not only feed these people but to educate them, to deal with their health care, and ultimately to make them productive within their own local environment.

And my own belief is that there is no way to do that unless we can find a way to harness high volume consumer electronics and computing together in order to be able to address these problems. So, that allows us to think about bringing together robotics and other types of systems, and a model of global connectivity to begin to address these challenges.

The other big change, of course, that we all see is the arrival of the cloud. Everybody talks about it; actually very few people know what it really means. So, I’ll give you my definition.

What happens when you add essentially a programming model to the Internet, which was largely a publishing medium? In simple terms you get the cloud. What we’ve been lacking for the last decade is the ability to really think holistically about how the computing you can do at the edge in this emerging class of smart devices will be complemented by the computing you can do at scale in the cloud, and so we’re now beginning to solve that problem.

The other prediction that I have is that over the next few years we’ll increasingly begin to look for new ways to program the clients plus the cloud as one single, large scale, distributed computing environment, and that we’ll want to compose solutions across those capabilities that will use the local computing and low latency characteristics of the devices sort of in your hand or near you, in conjunction with the data storage capabilities and large scale processing capabilities that are in the Internet, in order to come together to solve those problems.

So, in order to show you a little bit about what I think that’s like, I’ve got a couple of demos that I brought, and these are prototypes that people who work with me in research and a prototyping group assemble in order to be able to help people, both inside and outside of Microsoft, understand a little bit about what we think the future might be like.

So, this first one that I’m going to show you is essentially a computational science laboratory. It’s actually been built by Microsoft Research in Cambridge. And it represents a way for people who have large scale data assets and large scale modeling problems to be able to compose these models without having to write code in the traditional sense or at least not to the same degree. Somebody has to write code, but they can be very specialized talents, and the people who have an understanding of the ecosystem that’s being studied can use it more directly.

So, let me just give you a little bit of a guided tour of how this works.

So, what you basically see here is a map of the world. It will be continually reflected in the top part. This particular model I’m going to show you is going to be focused on the climate interaction between the rain forests in the Amazon down in Brazil and the rest of the world, but with a particular focus on North America. So, we’ll go over here and zoom in on the United States.

Now, the way that this is actually built is it’s a bit like having Visual Studio, which is a toolkit for people writing programs — these guys call this the Science Studio, because the goal is to allow people not to write programs in the traditional sense but to compose large scale models together for scientific purposes.

So, this is essentially a circular system, starts in the top left corner. This HAD stands for the Hadley model. It’s an environment model that’s currently computed in the United Kingdom.

What this does is it looks up data from this large scale thing, it produces a climate analysis. This model introduces a vegetation and then a deforestation model, which are depicted on the right side here.

The model understands the interaction between climate, vegetation and CO2 either sequestration or release into the atmosphere, and then it essentially computes how much of that CO2 is captured in the forest, taken up in the oceans, updates the model, and goes around and around.

Now, the way that people would have had to do this in the past is they would write a lot of programs, and they would build this typically as a very large, monolithic model. Much of the climate modeling that has been done in the past, for example, for weather forecasting builds up these very large monolithic systems, and we run them on big computers.

Here we want to allow much more freedom of expression, if you will, or the direct incorporation of the knowledge of knowledge of more people, and so what you see here is essentially a data flow diagram that’s essentially a visual programming environment where you can essentially functionally compose these models.

So, you have different types of building blocks. The one on the left there are essentially different data sets that have either been brought in from some other place by virtue of the Internet or computed locally or pre-cached. You have different types of modeling components. And if you wanted to change this model, which I’ll show you in a minute, from including one particular model of deforestation to a different model of deforestation, someone could just essentially unplug one component here and plug it in and rewire it up, and the whole thing would essentially run again.

So, let me show you what it might be like, and why I think these kind of tools are going to be important.

As the dean mentioned, I’m actually on the PCAST for President Obama, and one of the actual projects that we’ve been asked to look at is this question of carbon offsets. So, this program I had the team put together, because I think it does speak not just to the science question but to the policy questions that many people have to address.

So, the question about carbon offsets is one that said, you know, whether it’s within a country or on a global basis, should one set of carbon producing activities be paid to reduce that activity by people who either produce less and have a right to produce more or broadly if they have a bigger concern, like, for example, the policy people who say I’m worried about food production in the United States over the next 50 to 100 years, what should the United States do about the actions being taken in other countries.

So, as everybody goes off to the Copenhagen discussions in a few weeks, many people are sitting around at the policy level wringing their hands, because, in fact, there is no good way to answer these questions, and there’s not really very many good ways to bring people who understand these issues together and in any quick way determine whether they should or shouldn’t take a particular policy position.

So, the way this works in this model, if you look down here at the bottom left there’s a deforestation rate. And, in fact, if you look at this rate today, 0.7 percent annual deforestation is actually about what’s happening in the Amazon today. In this model, which I’ll just let run just a year or two, so here we’re probably out now to almost about where we are now, approaching 2010.

The way this model was constructed, we took all the average annual temperatures for the whole planet as they’ve been recorded, and the year 2000, and that’s essentially green in this color code in the middle.

So, if it turns out — and this model runs in one year increments, so for 100 years. You can basically look at any year and within a half degree square in terms of latitude and longitude, and it offers an approximation, albeit fairly coarse, of what the temperature delta would be from the year 2000 to that time.

In this box on the lower left you can actually see the little white boxes appearing, and those are essentially estimates of the actual deforestation. There’s that part of the forest has essentially been cut down in the Amazon basin. And if you’re curious, you can see the whole world map up here and see what’s happening.

You can already see that even a few years into this we’ve started to see an increase in temperature in parts of the United States and certainly in Western Europe, and as we go forward a few more years — I can either run it or just drag this along — you start to see that there are, in fact, annual changes as a function of how this computes. But if you’re out here now to maybe 2050, you can start to see that just the deforestation of the Amazon has maybe the potential to convert the U.S. to a country where Cornell of the future in 2050 will basically have the climate of South Carolina.

That sounds good here — (laughter) — but it might be problematic for the farmers farther West. Most crops have evolved over a very long period of time, and they don’t actually have a lot of latitude in terms of what the optimal growing conditions are. So, without presuming some other scientific breakthrough, perhaps genetic modification of the crop or some other strategy, if we actually started to see this level of elevation, it could be really problematic.

If we take a look at this farther and farther out, it just gets redder and redder over time, and so it really gives you some cause to worry.

Now, because you can change this model, you can say, well, what happens if I change it to .3 percent or if I actually say development is completely unconstrained and it just gets more and more of the forest, and you can see it doesn’t matter what your policy is for protecting the Amazon if there isn’t one left. So, these things really make a big, big difference to people who are doing policy work.

Now, of course, one of the big challenges when you build a model like this is to know whether or not it’s really accurate, and I think one of the things I wanted to encourage people to think about is the fact that so little real work has been done to harness all the things that people know in the life sciences, in the biology of plants and forests, and to have it directly representable in models like this.

So, another thing that the guys did — in this case they were working with some people at Princeton — is they actually built a different model.

Most people, when they build a climate model like this, they just kind of assume, well, forests are forests, and that they don’t change over time. What we actually know is that in large scale forests they actually change quite a bit over a relatively interesting period of years.

So, here they built a different model of a forest that implements a mortality model for the trees themselves. So, instead of just solving one set of partial differential equations for the traditional model, here they can actually develop a more sophisticated one and then plug that into this simulation.

I’ll just run this for a little bit, and you can see over a period of 100 years this sort of depiction of how the forest actually changes year by year. And sometimes you can see that the canopy gets very tall, and when it’s tall it actually has a significant effect on the different kind of trees that live underneath it.

Interestingly, as the forest gets older and the wood gets bigger, it actually stores more carbon. So, what you really like is for the forest to get big and stay healthy as long as it can. But it’s very hard to understand on an annual basis how do these things interplay.

So, what these guys did is they actually took this model and went back into this diagram, and they took out the deforestation one that assumes a constant forest, and they plugged in the one that assumes you’ve got this dynamic forest model, and we have a different model.

So, here’s just a side by side look at the Amazon. In this case you want to be blue, because blue means that the forest has stored a lot more carbon, and therefore it doesn’t get into the atmosphere.

So, again you can essentially start to run this model, which is essentially at a level of I think assuming a .7 percent deforestation rate, and it turns out that it makes a substantial difference year by year in how much of the forest is releasing carbon — that’s red — versus absorbing carbon, which is essentially the blue and green area. So, everything else was the same in the model. All we did was change how we model the growth and death of trees within the forest.

So, this just shows that not only do we have to get these models to be more robust, but we are also going to have to find a way to take all the new understanding we have about the new biology and apply it in these environments. And if you’re a policy person trying to make investments, do you decide to go down there and pay people in the Amazon not to cut down the trees or would you be better off to spend the money in planning for genetically modified crops? So, in a resource constrained environment, policy people are now being asked to essentially make these questions, and they really do play for big stakes in these things.

To me the interesting thing about the work that I am doing with the PCAST is that here it’s recognized that you really can’t guess at these things. Absent some scientific analysis that has to become increasingly sophisticated, it’s just too long a cycle with too complex a system to believe that you could just intuit the right answer out of these things. So I think these are very important things for us to consider.

If you go back, there are other analytics that can be built in like these graphics where you can run a whole family of curves and understand what they might be like year by year. You can look at the one on the right, which actually recognizes that there are two things acting here. One is soil and one is vegetation. And even though they change a lot from year to year, you can actually look at the relative slopes of these things and understand when they cross over or when you want to depend on one more than the other.

So, this is actually a real model that was built. It’s a coarse model at this point, but the goal was to create a tool where people who understood the problem were able to work in this space directly, and didn’t have to essentially become big time computer programmers in order to be able to get these problems solved.

We’ll go ahead and give them a minute and they’ll set up the next demo, and I’ll tell you a little bit about what that one is going to be.

This model is one where I was still interacting with it in a traditional graphical user interface sense. I had a mouse and cursor and drove it that way.

The next one I’m going to show you I decided to take energy as another topic that’s hot these days, and talk about how a natural user interface might be brought to bear in conjunction with the traditional man-machine interaction models in order to give me a more robust way of doing that.

So, in the next system the way this desk is set up, even though it’s all sort of prototype gear, we have some cameras that actually observe me locally and perform an array microphone capability to listen. There’s a tablet here which gives me a very high resolution pointing capability, because I’m actually going to write things and interact with handwriting with the computer. Because it has a vision capability, it can kind of detect my position and know where I am within this environment.

The big curved glass screen you’ll see later in the demo is actually a prototype of a new type of display technology where we think people are going to want to be able to have sort of a surface that they can interact with from a control plane point of view, and then a display surface that they can use to observe models much like when you look at the big screen and I’m driving it from the desk over there.

But it also anticipates the arrival of three-dimensional displays. Today, if you go to some of the new movies, they give you the glasses to wear in the theatre, but you actually see 3D display. That’s going to become very, very commonplace over the next five years.

So, another big question then is, how do you have a multi-display environment integrated into your work environment where part of the time you want to look at something in a 3D display environment, and the rest of the time you may want to deal with it using more conventional approaches.

So, as we start this, what I’ll show you is we’ve chosen two things to do. One is to just start to take the Internet and explore using both handwriting recognition and voice input commands to control this, and the second thing I’ll do is then I’ll select a particular model. Much as we did with the climate model, I’ll take that and put it into a workbench, a modeling workbench.

In this particular case what I’m going to be focused on is a model of a nuclear reactor. I think that’s one of the promising technologies. In this case the one I’m going to show you a little bit about is actually being done by a company in Seattle called TerraPower. It’s actually started by some friends of mine, and financed by Bill Gates and the guy that hired me at Microsoft originally. They basically have looked out and said, you know, the world is clearly going to need a lot of power, a lot more than we have today. We face this simultaneous challenge of trying to reduce the carbon footprint, and so it gives us a real question about how do we solve for those things simultaneously, and I think that nuclear power may be one way to do it.

The second thing I’ll show you then is a little bit of modeling about wind power. But what I hope you’ll take away from this demo is that it will be a very different world when I can interact with the computer in a much more natural way.

So, for this one I’m going to get rid of the mouse and take a pen. So, what you see on the screen is just a blank screen, and I’ll essentially start to write.

What I want to explore, what my area of interest has been is zero carbon sources, and so I can expect in the future that since it’s known by everything I’ve been doing that that’s true, just like today when you try to address a piece of e-mail that offers you the names of the people as you start to type them, here I think the computer will be able to offer you suggestions of what it is you probably are intending to do or ask questions about. So, I’ll just use a gesture like a circle to say, yeah, that is right, and I want you to do a search. So, it goes out and does a search.

So, here we’ve got about 5,000 Web pages that actually relate to things related to zero carbon energy. And, of course, at that level it’s not very usable and it’s a lot of stuff.

So, here I might use a voice command and say, computer, organize it for me. So, here based on the metadata we can start to do some macroscopic groupings of these different Web pages. So, I could have it organized by my colleagues I work with, places it relates to, publication, research activity.

I’ll say, ‘Computer, zoom in to research.’ So now I’m going to use some eye tracking, because there’s a lot of things. So I’m just going to start to look around at different places on the screen, and each place I look I can essentially have it track along. So, I’ll get up to here, and there’s old Cornell. Amazing how that happened. (Laughter.)

So, I’ll basically take and select this page and look at it, and I can understand what’s going on here today, and whether there’s anything related to it.

But maybe I’ll take the pen and I’ll actually just scroll around to a few more, because I know there’s a blog over here which is actually related to energy called Zero Carbon News, and it’s something that I frequent.

So, here there’s actually a story about in this case it’s hypothetical, I made this up, the Nuclear Research Review, and it talks about the innovators, and I’ll zoom in on that particular one.

Here it talks about traveling wave reactor updates, and this is, in fact, a real story about John Gilleland who is the president of this company, and this traveling wave reactor is actually the real deal. Their goal is to come up with a new source of nuclear power that would run for 40 to 60 years without refueling, doesn’t have any of the issues associated with traditional nuclear power because there’s no fuel cycle, no refueling, and mostly no waste. So, if this could be built, it would really be a tremendous achievement.

So, here I know that John’s posted a video that updates it, and so I might just make a gesture on the screen to go search for and bring it up and play it.

(Video segment.)

CRAIG MUNDIE: So, if I was working on this and I was tracking it, I think one of the things that will happen more and more in collaborative scientific research is that we won’t just publish papers or journal articles the way we did, or even publish them online in the archives, as we’re frequently doing with the pre-publication stuff today. I think more and more the actual models, the underlying data, and if you think back to the prior demonstration, the large-scale data sets will essentially also be made available to people, and you’ll be able to essentially build on each other’s work in a more direct way.

So, here what I want to do is essentially take the model that is implied behind this diagram and all of its related metadata, and I’ll just drag it over here sort of like I do to OneNote today, but I’m really dragging it into my scientific workbench.

So, now what I’ve done is basically brought in the same model that Gilleland and his people are using in order to explore different elements of this reactor. If I want to understand different elements, I can essentially — there’s four key parameters: the burn up of the fuel, how we control it, what the flux is that’s generated, and how much power output I get.

So, if I want to see how this might work or how the wave would progress through this particular structure of the thing, I can actually just, for example, say, OK, I want to run from zero to 45 years, and show me what that’s like. So, let’s make it go.

You can see that as the video implied, you actually get a wave front that propagates through the fuel in this cylinder, and what it’s essentially doing is the front edge of the wave actually converts the material into fissionable material, and the back edge essentially is consuming it.

So, in the end it’s a bit like a traditional nuclear power system in that it produces intense heat and you make steam, but what’s different in this particular design is that it consumes almost the entire fuel.

Today, nuclear power reactors only get about 2 to 4 percent of the energy out of the nuclear fuel, and so one of the reasons there’s so much waste, nuclear waste around, and we have all these policy issues like do we have Yucca Mountain for storage or not, is simply because we get such a low yield out of the total critical material, both in the manufacturing process and in what comes out of the reactor.

Reactors like this hold the promise of getting that initially up to maybe 60 percent, and at least theoretically it seems possible to get it up into the 90-plus percent range, and that would be obviously compelling.

The other thing that would be quite interesting about it is that this thing can take the spent fuel from traditional reactors and use it as fuel. And as Gilleland’s folks have computed, just what we have laying around in waste containers today would provide power for the entire United States for several thousand years if you basically could use this kind of reactor to do it.

But, of course, this thing is still in development, and, in fact, it wouldn’t have been possible, even though Edward Teller and some other people at the national labs came up with the basic geometry and nuclear physics of this in the early 1990s. No one really could contemplate how you would begin to pursue the design of such a thing because the computational resources weren’t there. But courtesy of Moore’s Law and these very large-scale computing and cluster capabilities, even small companies like TerraPower now have been able to assemble their own computing resources sufficient to be able to do this modeling.

Now, in this case the model has been run and indicates that the flux level is not what’s deemed desirable, and so what I might want to do is explore this. So, I can look at what the flux level is. I can take some of the parameters on this side, and identify the ones that I want to run a parameter sweep on.

In this case I can just say I want to look at different configurations, so let’s go run them all in parallel. ‘Computer, run a parameter sweep across five reactor geometries.’

So, it loads up the models and the data, and it goes off and says I’m going to start doing this.

The challenge is as it begins, it estimates it’s going to take six hours. I get bored easily, so I say, hey, is there any way to speed this up? So, I can say, how about some cloud computing facilities? Have you got any of those lying around?

So, let’s assume we do. I’ve got a little one here that gets it down to one hour, and this really big one here gets it down to 11 seconds, just enough for this speech.

So, I think this is, in fact, a lot of how this will happen in the future. And while this is obviously just simulated, what I think will happen is you will be able to on-demand bring in these very elastic computing capabilities in order to get these problems sped up.

Now, what we actually have here is a model that’s actually a model that has come out of a real simulation that the TerraPower people have done, and here I have a 3D display. If I really had a 3D display, I’d be able to look at this thing in three dimensions, but here it’s a traditional 3D into 2D mapping.

At the bottom I’ve got the ability to drag a slider around over this sort of 45 years worth of burn, and understand how this particular configuration behaved.

So, if I actually take this control and drag it around, you can actually see that the way that this configuration would work is they actually light the fuel in the center of this particular design, and over time they can essentially either move rods or start and move some of the rods to the outside.

So, this would imply a mechanical design that cycles things around. It might not be as desirable but at least you could understand what the pattern of burn-up would be for each one of these particular things.

This is what these folks are doing now is they’re taking computer models like this, running lots and lots of different configurations, and trying to figure out what is the physics of this, not just the physics of the nuclear process but what is the physics of the container and all the material science problems that live behind that.

So, let’s leave nuclear behind and say I want to also go out and explore some things about wind power. So, I’ll just move this over to a different model, and say, ‘Computer, load my recent wind farm research.’

So, here I go out to the Internet, I’ve loaded up a bunch of terrain data, and I take the models I’ve been working on and I basically get them placed on this hilltop. In this case we have a wind farm that’s been there.

As we talk to people now who are in the business of designing some of these wind turbines, they’re getting more and more sophisticated, even to the point where people are putting sensors on the blades, changing the geometry of these things in real time in order to deal with the wind patterns.

So, here we can assume that we’ve been recording the wind patterns over a period of time, and we know how they tend to flow through this valley and impact on each of these different wind turbines at different times.

What I really am trying to understand is, is there a better way or a different geometry that would produce more power from the typical wind we get in this place.

‘Computer, load my flow models.’

So, here I’ve been working with some different people on different models for rotor flow and other aspects, and so I’ll basically look at this rotor model. Here I’ve got some. I can see how the wind actually impinges on the blades and what happens.

What I want to do now is work on refining this. One of the unknown questions is what’s the optimal angle for the pitch of the blade in this particular configuration.

So, here I’m going to actually tap on this to select it, and I’m going to chip this down to what I think will be a typical kind of configuration in the future where your desk will actually be an active surface. It will be display capable, it will be touch sensitive, and probably have at least some part of that you’ll be able to use a stylus on for high resolution capability.

Here on this piece of today special kind of Plexiglas with a coating on it I have the ability to project images on it. So, I’m going to basically flick this thing up onto that model, and then I’m going to essentially start to use some gestures in order to interact with this model.

The first thing I want to do is sort of tip it back so I can push on it and sort of tip it back to what I want, and then I want to zoom in on one of these blades. We’ll kind of use a gesture to do that.

So, now I can see the model and I can see an animation of the wind going over it. If you get nice laminar flow, it would be all green. Clearly it’s not all green, so there’s some losses in this model.

So, what I can do is essentially take the model and say I want to change some of the parameters. This was a 4.1 degree pitch angle, and I want to make it a 10 degree pitch angle. So, I’ll substitute that, and the model changes accordingly.

Now, what it’s giving me is essentially a control capability so that I can now do minor refinements of this more directly.

One of the things that people are really good at is actually looking at simulations like this and trying to make sort of integrating it all with the visual cortex and their understanding of the problem to get it right, even though it would be very hard to describe parametrically exactly what you might consider to be the optimal flow condition.

So, here I’ll actually use the gesture system and sort of come and say I want to take control of this parameter, and I want to actually be able to bend it a little bit directly. Clearly if I bend it up that far, the thing gets too red. So, I say, OK, that’s too much, so let’s essentially sort of lower it back down a little bit until it gets all green, and then say OK, well, that looks like a good flow.

So, I say now I’ve been able to make macroscopic changes in the model and then fine tune it by computer visualization and direct coupling of my own gestures into the execution of the model.

So, I say, OK, that’s OK, so let’s take that whole model and I’ll just bring it back down off the desktop and onto the wind farm, and then I could apply that to the whole model and rerun it and see what the new power output would be in this environment.

So, that’s that demo, so we’ll go on from there.

So, that’s the kind of things that I wanted to show you today to give you an idea of what I think computing and man-machine interaction will be like. It’s to me an exciting and important time. There’s never been as much flux in computing itself as I think we’re going to see in the next 10 to 15 years, and that’s even before you get into the questions like are we going to move to a completely different model of computation, so, for example, quantum computers.

We actually have a research project at Microsoft, have had now for almost five years, in quantum computation, and it’s something I’m particularly interested in, because when I look at some of these super scale data and computing problems, it isn’t clear to me that even if you dream about scaling these things up to the level that we have in some of these cloud capabilities, that even then we get solutions to some of these problems.

If you want to think about a world where we need to design new drugs or new materials to solve problems like battery storage for an all-electric society, I think that the answer to these things may, in fact, come through synthetic materials, and this is where you get this interesting combination of nanotechnology and new fabrication ideas, with the ability to compute what these material properties should be.

These are things we can’t do today. Despite the progress on Moore’s Law, even with the dream of these very, very high scale computing facilities, it’s still not plausible to think about finding solutions to some of those problems. And yet I think if perhaps we can invent things and build things like quantum machines, we might, in fact, find a solution to some of these problems.

So I love working in this space, and I think that the combination of the multidisciplinary approach to these problems and the ability to influence very large-scale societal problems and the policies that will govern what our lives are like and the lives of our children and grandchildren, it’s a very interesting time. I don’t think we’ve ever had such an opportunity as a society and certainly not as an engineering community, and I really look forward to continuing to participate in it.

So, that’s the end of the formal remarks. We have about half an hour left, and I’d like to use that time for just an open question and answer session. You can ask anything about anything, and if I have an answer, I’ll offer it. Thanks a lot. (Applause.)

Related Posts