University of Illinois at Urbana-Champaign
Craig Mundie College Tour
November 4, 2009
CRAIG MUNDIE: Thank you. Let me begin by thanking you for coming and spending a little time with me this afternoon.
My job at Microsoft is an interesting one. I’ve been there 17 years, and originally was hired by Bill Gates to work on non-PC computing. So, in 1992, that was something that not very many people were thinking about, but Bill was quite convinced that the day would come where we would put microprocessors and software into a great many things, and indeed we all live in a world today where that has largely started to happen.
You know, I really enjoy what I get to do, because it’s a very diverse job. I worked for the last 10 years with Bill Gates on thinking about the long-term evolution of technology and in particular the company’s strategy as it diversified beyond the traditional personal computer business, and when Bill decided to retire about two and a half to three years ago, we divided his job between Ray Ozzie and me, and I inherited the part that was responsible for the company’s global research operations.
At Microsoft we actually do research in a centralized way for the whole company, and then we do development within the different business groups.
So, today, we have I think the world’s largest computer science research operation, about 950 Ph.D.s around the world doing basic research in computing and a few related areas.
I also have enjoyed the job in terms of its role in technology policy. Today, I think there’s a general agreement by economists around the world that the single biggest driver of economic expansion globally for the last 40 years has been information technology. And as a result, governments everywhere in the world now have come to appreciate the importance it plays in their own economies in the evolution of their societies.
And indeed as we look to have computing involved in every aspect of our life, we begin to encounter a whole new set of challenges for those of us who are involved with it at a technical level. For example, today, computing really has become critical infrastructure for our society. It’s not really any different than electricity or running water: we depend on it. And yet the evolution of computing up to the present time really started with us just thinking we were building tools, originally for scientists or engineers, and even though it’s expanded at a dramatic rate, we haven’t gone back and done much rethinking about computing and how we do it and how we build software and how we ultimately make it meet the needs of a global society that is dependent on it.
And so in the time we have together today I want to try to explain to you a little bit about how I think computing is going to evolve. In fact, I think some of the biggest changes that we’ve seen in computing in the last 40 years are likely to take place in the next five to 10. And some of those trends are very clear today, and we know that they’ll begin to be evidenced in products in the marketplace in as little as two or three years.
And yet if you look around, as I get a chance to do on a global basis, and talk to people in universities and elsewhere, you find that by and large they still tend to think about computing the way it’s always been.
So, my visits to this campus and several others this week are part of a longstanding activity that Bill Gates and I have always enjoyed and undertaken, which was to visit with faculties and students whenever we get a chance. Bill had a very specific program where he would dedicate a week to doing that. I always did it on an ad hoc basis as I traveled around the country and the world. And when Bill retired, I decided to at least take up that same mantle and dedicate a week in the United States each year to visiting college campuses in a more intensive way.
So, today, I did meet with people from the administration of the school on the research side at a roundtable with some of the faculty in a multidisciplinary group, had a small group meeting with some students, and then this meeting.
In this one we’ll spend, I don’t know, maybe an hour and a half together, and it will be two parts. I’ll explain a little bit of how I think computing will actually change, and I’ll talk by virtue of some demonstrations that I’ve arranged about how I think people are actually going to use computing in the future.
So, let me tell you a little bit about some of the big changes that are underway, first the microprocessor itself, which is at the heart of computers and in many places now that are not things we call computers, like our cars and our cell phones and our televisions.
The computer itself is going to go through a radical transformation in the next few years. The microprocessor will become a high core count, heterogeneous architecture machine, and with it brings a great many challenges in terms of how we program.
My own relationship to the University of Illinois goes back to when I started a supercomputer company in 1983. It was done in Boston, and it was called Alliant. And we set out to build highly parallel machines for technical computing, and we decided to focus on automatic parallelization, and some of the key work that was done there was done here by Dr. Dave Kuck.
So, Dave actually became an advisor to the company, we ended up licensing some of his technology, and the machines that we ultimately built and marketed came back to Illinois as building blocks as part of the big Cedar project that was done here in the late ’80s.
So, throughout that time, we’ve had a chance to look at many of the challenges that exist in writing parallel programs, but in the past it was reserved mostly for people in high-performance computing. But if every microprocessor is now going to take on the architectures that used to be supercomputers or specialty machines 10 and 20 years ago, then, in fact, we’re going to have to have parallelism go mainstream. So, that’s going to start happening in a big way probably around 2012.
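A minimal sketch, in Python, of what that mainstream parallelism looks like from the programmer's side: the same independent-work decomposition the old supercomputers used, spread across a pool of workers. The worker function here is a hypothetical stand-in for real per-cell work, not anything from an actual product.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_cell(x: int) -> int:
    # Hypothetical stand-in for one independent unit of work
    # (one grid cell, one video frame, one particle, ...).
    return x * x

def parallel_map(values, workers: int = 4):
    # Fan the work out across a pool of workers; results come back in
    # input order. (With processes instead of threads, each worker
    # would run on its own core.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_cell, values))
```

The hard part, of course, is not the fan-out but finding decompositions where the units really are independent.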
Another big change that’s coming is display technologies. If you go to the movies frequently these days, you can put on glasses and watch stereoscopic movie presentations. And I think young people certainly, because it’s most easily done with cartoons and animation, are the first ones that are going to grow up expecting that more and more they’ll see things in actual three-dimensional display, and the technology there is evolving in interesting ways, and I think we’ll see that become quite economical.
In addition, the ability to have large display surfaces and those that essentially have some type of vision or sensing capability will also happen in this next three- to five-year period of time, and we can envision a time not too far in the future where it may not cost significantly more to put up a big wall panel that is essentially an interactive touch display surface than it does today to hang a piece of sheetrock and tape it and mud it and sand it and paint it.
And if that’s true, then, in fact, what you may find is that many of the computers that you’ll deal with in the future will be rooms, not devices that you sit in front of. So, in both the home environment and certainly the office environment, the computer may, in fact, be the office itself, and it will be integrated into the entire space in which you work.
Another big issue that we face is giving people a new way to deal with computers. There are really just a couple things that drive fundamental changes in the computing platform and its widespread adoption, and one of them is, in fact, changes in the underlying computer hardware; that becomes an enabler for software people.
When you get one of these big performance changes, as with the parallelism that we expect in the next few years, it could give us a performance enhancement on the order of a factor of 10 to 100. And that’s so big a change, as a step function, that if we can harness it, we should clearly attempt to solve a different class of problems, or at least do the old problems a new way, and so we have to think about what will be possible.
One of the things that we think will be possible is to move beyond the graphical user interface. That has been the driving force in man-machine interaction basically for almost 20 years, and it’s time to not leave it behind completely but add something to it that will expand the market for computing and make it approachable and useful to people in a much bigger way.
One of the things that we think allows this to happen is that the computer, through this processing capability and the arrival of more and more sensing technologies, will take on gradually more and more of the human sensory capabilities, the ability to see, hear, speak. And we’ve certainly experimented with those for the last few years, quite a few years, but mostly we’ve done it one at a time, and our success has been in part limited by the amount of computing we could throw at that problem.
So, when we get this big improvement of a factor of 10 or 100, then our hope is that we’ll be able to do both a good job on many of these human interaction-like technologies, but we’ll also be able to do them in an integrated way.
Up to this point, we’ve tended to use handwriting recognition or touch as an alternative way to operate the graphical user interface that we already know, and I think when we can integrate them together in what we call NUI as a successor to GUI, where NUI stands for the natural user interface, that we will be able to expand the market quite dramatically.
So, let me start by showing you a little video to give you an idea of what it might be like when you, in fact, have a completely alternative way, in this case a video camera, a special video camera, to interact with computers.
In May of this year we disclosed that next year we hope to bring to market as part of our Xbox gaming system a new type of camera. And this camera has a microphone array built into it, but also a special type of video capability that allows the computer to have the ability to see in three dimensions.
All video cameras up to this point tend to just flatten whatever is out in front of them into a 2D image, and so unlike people who have stereo vision, it’s been kind of hard to give computers stereo vision, certainly in an economical way.
And yet we’ve developed this technology, and in conjunction with Microsoft Research and our Xbox business group, we’ve had this project called Natal.
So, we introduced it at the game show in May, and took a video of some of the people there, and I’ll share that with you to kind of give you an idea of what it might be like to have these technologies in the future.
CRAIG MUNDIE: So, when we show this to people, they get obviously pretty excited, and the question is not whether we can use these technologies strictly within a gaming context, that’s clearly going to be possible; the question is, can we bring it together with voice and speech capabilities to give people a completely different, a holistically different way of interacting with computers.
And so I decided to try to show you what I think some of these higher-order ways of using computing will be. But rather than just focus on the computer science itself, because I want to encourage people to think broadly about the role that computing will play in solving really tough problems, I decided to take a couple of the issues that are hot ones on the planet today, pardon the pun, namely energy and climate, and show you a little bit about how we think the engineering and science community may use these advanced computing and modeling capabilities in the future.
So, let me show you the first demo. What we’ve built sort of is a prototype of a workstation in the future. In the first demo I’m going to use a more traditional interface, because here I want to highlight a different way that we think scientists will work together and build models to help solve very large-scale problems. So, this one is still sort of a mouse-based interface.
Microsoft Research in Cambridge, England started a year or two ago to build a tool that would allow scientists to assemble large-scale models and to do this in a collaborative way, and without writing a lot of programs in the traditional sense.
And so I asked them to put together an example problem that scientists might solve this way, and so they went out and built this model, literally over the last two months, working with a plant biologist from Princeton, and what you see here in the big map is the sort of whole world map, and I want to focus at least a little bit on the United States. So, we’ll zoom in here, just as we would on a Web map, and let me put Illinois right here in the middle.
On the top left you have the original world map, so you can kind of see what’s happening globally. In the bottom we can zoom in on South America, because the problem that we’ve chosen to simulate here is what is the coupling between the changes in the rainforest in the Amazon and the temperature change in the rest of the world. In other words, what role does the rainforest play in sequestering carbon and as a function of how much it holds, how much it gets into the atmosphere and therefore how much continued greenhouse heating do we get.
So, the way this works is down at the bottom we have a deforestation rate, and they run this model at several different rates. The typical rate, or what we believe it to be today, is about 0.7 percent per year: that much of the Amazon is essentially being cut down and permanently destroyed as a function of development activities.
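That rate compounds, which is easy to sketch: at 0.7 percent per year roughly half the forest survives a century, while at the 3 percent rate considered later in the demo almost none of it does.

```python
def forest_remaining(annual_loss: float, years: int) -> float:
    # Fraction of the original forest left after a constant annual
    # deforestation rate compounds for the given number of years.
    return (1.0 - annual_loss) ** years
```

At 0.7 percent per year about half the forest remains after 100 years; at 3 percent per year, under 5 percent remains.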
The way that this works as I show you is they want to do this by being able to model things in a much higher-level way. So, this is a schematic diagram of this climate model, and it starts in the top left corner with a dataset called Had, which is from the Hadley simulation in the UK, which is a big, fine-grained computer simulation of the general climatic evolution.
So, from this at any given point in a half-degree by half-degree latitude and longitude square you can look up a climate prediction over the next hundred years.
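One way such a gridded lookup might work is sketched below. This is an illustration of the half-degree indexing idea only; the Hadley dataset's real layout and indexing scheme may differ.

```python
import math

def cell_index(lat: float, lon: float) -> tuple:
    # Map a coordinate to its half-degree by half-degree grid cell.
    # Rows run from latitude -90, columns from longitude -180.
    row = int(math.floor((lat + 90.0) / 0.5))
    col = int(math.floor((lon + 180.0) / 0.5))
    return (row, col)

def lookup(predictions: dict, lat: float, lon: float, year: int) -> float:
    # predictions maps (row, col, year) -> a predicted value, e.g.
    # mean surface temperature for that cell in that year.
    return predictions[(*cell_index(lat, lon), year)]
```

So a scientist asking "what does this model say about Urbana in 2050?" is really doing one keyed read against a precomputed table.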
And so you take that as a basic climate starting point, and then you start to make specific changes. So, you take a vegetation model and determine what its effect is on CO2 production and consumption.
In this case what they wanted to add was a deforestation model, and from that you can go on and decide, well, whatever the forest doesn’t take up, how much does the ocean take up, and then from what’s left over you update the CO2 model, and from that you can then calculate what you think the climate looks like.
So, historically you’d put a team of people together, different scientists, maybe some programmers, maybe the scientists become programmers, and they have to try to figure out how you write this as one big monolithic model.
But increasingly we think that the way you deal with these large-scale systems is that they have to be composed in a more formal way, and so in this system we call Science Studio as sort of a play on the idea of Visual Studio, which is a tool for program development, here in Science Studio you have the ability to wire up these models. So, you can take predetermined datasets like these shown on the left side, and they have known inputs and outputs, and you can hook them up to different computational elements and basically you can wire up a system that implements that logical diagram I just showed you.
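A toy sketch of that wiring idea: components declare named inputs and outputs, and a scheduler runs whichever component's inputs are ready. This illustrates the composition pattern being described, not Science Studio's actual API; the component names in the test are made up.

```python
class Component:
    # A model fragment with declared inputs and outputs, so it can be
    # wired to others without knowing anything about them.
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

def run_pipeline(components, values):
    # Repeatedly run any component whose inputs are all available,
    # publishing its outputs for downstream components to consume.
    pending = list(components)
    while pending:
        ready = [c for c in pending if all(i in values for i in c.inputs)]
        if not ready:
            raise ValueError("unsatisfied inputs: %s" % [c.name for c in pending])
        for c in ready:
            results = c.fn(*(values[i] for i in c.inputs))
            values.update(zip(c.outputs, results))
            pending.remove(c)
    return values
```

The point of the pattern is that a plant biologist can swap in a better vegetation component without touching the ocean or CO2 pieces.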
And so this simulation was built by taking in many cases things that were already extant within the science community, and then just adding a few things that represented the unique work that the team wanted to do.
So, let me show you how this might work, and, in fact, why it’s important not just to a climate scientist or a computer scientist.
In April, I was appointed by President Obama to the PCAST, which is a group of science advisors, and one of the things that they’ve been asked to look at is this question of carbon offsets, because, in fact, as a policy matter it is a real question as to how much should any given country spend to buy different types of carbon offsets in an attempt to control the aggregate climate on a global basis. And so there’s a lot of people trying to figure these questions out.
Arguably, the policy people don’t have any type of great interactive tools that would allow them to ask and answer questions and weigh these policy alternatives.
So, these are real problems, not imaginary problems. And while this model you could say is still fairly coarse, and only couples in a relatively small number of things, it’s my belief that we’re going to need more and more evolution of this kind of complex modeling capability in order to allow us to solve many of these difficult policy, economic, geopolitical and, of course, technical questions.
So, let me just run this for a few years, and you start to see it kind of lights up, and you can see up in the top how the temperatures are starting to change. And the way this works, the scale up here shows it starts with the average temperature recorded in the year 2000 across the globe. If that is time zero, then if it gets cooler it moves to the blue side of the scale, and if it gets hotter it moves toward the red side of the scale. And you can see even in a few years’ time that there’s been at least a local temperature increase in the United States in the south and middle south, but also northern Europe and up toward Greenland and Iceland are also warming. And the little white boxes down here show the chunks of the Amazon that are basically being consumed under the current development activities.
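The color mapping described here is simple to sketch: compare each cell's temperature to its year-2000 baseline and bucket the anomaly. The 0.25-degree threshold below is an arbitrary choice for illustration, not the demo's actual scale.

```python
def anomaly_color(temp_c: float, baseline_c: float) -> str:
    # Bucket a temperature relative to the year-2000 baseline onto the
    # blue (cooler) / white (unchanged) / red (warmer) scale.
    delta = temp_c - baseline_c
    if delta < -0.25:
        return "blue"
    if delta > 0.25:
        return "red"
    return "white"
```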
And so if you’re a policy person or a climate scientist and you’re really trying to understand what happens over time, you can either let this run sort of year by year and look at it, or you could essentially sort of jump around to different time periods and say, you know, what’s really happening.
And you can see, as a function of the different climate stages and how much of the forest has actually been eliminated (and here we’re just talking at the margin about the Amazon), that we’re starting to see quite a bit of change in the climate in the United States, at least as predicted by this model.
And this is actually potentially a serious issue, because up here in Illinois, if it turns out the average temperature is 4 degrees C hotter than it was in 2000, that could have a significant effect on crops. And so, to the extent that societies depend on that food production, it’s hard to know whether the crop yields would be substantially damaged by this type of elevated temperature on a sustained basis.
And so if you’re a policy person, you might want to know, well, how much could you tolerate? If I didn’t have any constraints on this, and development went crazy, and it moved out to say 3 percent per year, then it might be substantially warmer. So, if I said, well, what about just 2050; what’s it look like?
You could say I’m going to take out the big checkbook and try to figure out some economic incentives to allow Brazil to do less, or perhaps stop completely. And you can see that how the Amazon is developed, or not, has a material impact on the crop-producing areas, even in the United States and certainly in other parts of the world, in terms of whether or not there’s a big change and how serious it is.
As we have more and more computing power, our ability to look at these things and run different parametric sweeps becomes interesting.
So, here you can look at a graph where on the left it says, well, for different rates of deforestation plotted against time, you know, what’s the CO2 in the atmosphere in parts per million.
On the right, you can start to get some long-cycle intuition about the different roles soil and vegetation play in either sequestering or releasing carbon into the atmosphere, and you can understand that the soil’s contribution may be low now, but that it may cross over some time, say, 30 or 40 years into the future.
The other thing that they thought about as they built this particular model was that, of course, this type of modeling is in its infancy. And so one of the things that the plant biology people pointed out was that people who built and run these models in the past make a simplifying assumption, which is that forests don’t change. And, of course, we know that they do change.
So, as an experiment, they went and changed the model of the forest to one that reflected that it’s a living environment, that there’s a mortality associated with the trees, and as a function of how that plays out when you solve these equations, you can see that over time the forest itself changes shape significantly. So, I’ll run this model a little bit.
What you can see is essentially, as we go across here, that the height of the canopy, the average size of the trees, how much of the growth is low growth versus tall growth, these things all turn out to be a material factor in how much carbon you can sequester in that forest where the older-growth trees actually are able to hold more. And so, by putting this model in, you can try to make a more accurate prediction of what it might be, and that might change your policy choice.
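A toy sketch of why mortality matters: when a tree dies it is replaced by a seedling, and the stand's stored carbon drops because older trees hold more. Both the mortality rate and the age-to-carbon curve below are placeholders for illustration, not the biologists' actual model.

```python
import random

def step_year(tree_ages, mortality=0.02, rng=None):
    # Age every tree by one year; a random fraction dies and is
    # replaced by an age-0 seedling, resetting its stored carbon.
    rng = rng or random.Random(0)
    return [0 if rng.random() < mortality else age + 1 for age in tree_ages]

def stand_carbon(tree_ages, carbon_at_age=lambda a: min(a, 100) * 1.5):
    # Total carbon held by the stand, under a placeholder curve where
    # older trees hold more, saturating at age 100.
    return sum(carbon_at_age(a) for a in tree_ages)
```

Run forward, a stand with turnover settles at a lower carbon stock than the "forests never change" assumption predicts, which is exactly the correction the demo's second model makes.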
The last thing I’ll show you on this one is, they took and did a side-by-side comparison at the .7 percent deforestation rate, and ran it similarly over this 100-year period. In this case, the colors are a little bit different. You’re looking at what the forest carbon is, so blue is good in that it holds more carbon, and red is bad. And you can see that if you start with a model that the forest changes, that over time this period of the Amazon, even though it’s losing the same amount of trees, that it tends to stay bluer, or at least greener, than it does if you use the other model. And so, this gives you more ideas about where you should make the investments in modeling things more accurately.
I think this is going to be a driver for a whole new class of science where people in each of these disciplines need to be less and less a computer scientist or programmer, and are going to be able to couple their own intuitions more directly in to the work of other people to continue to refine these things. And I think that’s going to be important for science, and ultimately important for the policy people who have to make a lot of these difficult choices.
If you really believed in these models, you might find that a choice that you have to make is, do I spend a lot of money and send it down to buy carbon offsets in the Amazon or some other part of the world, maybe the rainforest in Indonesia, or would I be better off to take that money and invest it in research for genetically modified crops under the belief that over this same 50-year time period, I could make them happy to grow in 5 degree warmer climates.
And so, I think there’s a lot of these kind of tradeoffs that the society is going to have to make, and I think that today more and more we want these things to be governed by science, and not governed by intuition or political, or geopolitical whim, and for that to happen, we’re going to have to get into a new way of thinking about this.
The next demo I want to show you is one where this moves a bit more towards the idea of using natural user interface techniques as a way to do it. And so what you see here is a touch screen. In this case, it’s sensitive to pen input, which of course requires some high resolution. So, I might get rid of the mouse, and this one I’ll do with a pen plus, as you’ll see, voice and gestures as well.
So, the little icons at the top sort of flash. We’ve got cameras here that are observing me and listening to me. There’s some sort of face and voice recognition going on, and handwriting recognition. Another thing that I believe will happen as we approach this time where natural user interface is present is that the computer will also be more anticipatory. We do this in simple ways today, but I think it will increasingly have more context, and use the computing cycles to do that. So, let me show you a demo we put together about how this might be used to do research, to collaborate, to navigate around within an Internet kind of environment, and then drill in and do some modeling based on that.
Here I’m going to use handwriting recognition. And so I’ve been studying zero carbon energy sources. So that’s what I want to pursue here. So, if I write zero carbon in, much like when I fill out an e-mail address today, I type in a little bit of it, and it kind of guesses who I want to write to and fills it in. Increasingly, this thing will be able to finish your sentences or words more and more. So, here I can just make a gesture. For example, I’ll circle it to say, that’s what I wanted, and I want to basically do a search.
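The completion behavior being described is essentially prefix matching against your own history, preferring what you used most recently. A minimal sketch, with a made-up history list:

```python
def complete(prefix, history):
    # Guess the intended phrase from a typed prefix, preferring the
    # most recently used entry (front of the history list).
    prefix = prefix.lower()
    for phrase in history:
        if phrase.lower().startswith(prefix):
            return phrase
    return None
```

Real systems layer frequency, context, and language models on top of this, but the basic contract, partial input in, best full guess out, is the same.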
And so, it goes out and brings in a huge amount of data that is interesting to me. Here I may have miniature thumbnails of 5,000 things. That’s obviously not a completely interesting way to learn. So, I’ll say, “Computer organize it for me.”
And so here, because we assume more and more these documents all contain meta data, we can begin to have the computer do more things to cluster information and present it in more interesting ways. So, it might organize it by place, or publications, news, colleagues. In this case, I’m trying to do research in particular around different energy sources. And I’ll say, “Zoom into research.”
And so, we get a little higher resolution view of this. And now, what I’m going to do is use eye-tracking to basically look around the screen at different parts. And as I look it just sort of pops open views of each of these things. And in this case, happily, we’ll stop here at the University of Illinois, how convenient. And so I might touch to zoom in on this page. I could read things that were going on here. But what I really want to do is go and navigate around, and I may use in this case my pen to do it, and I want to go back to a blog that I frequent.
In this case, we actually made this one up hypothetically based on an actual one that’s there, but I couldn’t use. But the article on the right, this one called Nuclear Research Review: The Innovators, that article is one that I actually am interested in. And, in fact, I’m really tracking the work of these people who are doing a project around what’s called traveling wave reactors. In fact, this is actually a company called TerraPower. It’s a new startup in Seattle. I’m friends with the people doing it and, in fact, it was funded by Bill Gates and Nathan Myhrvold and some people that I know quite well.
They did a bunch of thinking about this challenge of power going forward, and concluded that there was an opportunity now to take some historical work that had been done on an alternative form of nuclear reactor, and because computers had become so powerful and inexpensive that even a small startup company could harness enough computing power to this task to try to explore whether these new types of reactors were, in fact, commercially feasible or not.
So, they’ve been working on that for the last few years, and the man pictured here is John Gilleland, and he’s the CEO of this new company. And I know he’s posted a video, so I may just put a gesture on here to say, go get that video and play it.
So, at a personal level, I’m quite excited about this kind of work, because unlike traditional reactors, this one would actually consume the waste output of the world’s existing reactors. Today, nuclear power systems typically only claim, I’ll say, low single-digit percentages of the power that’s actually in the fuel. And so that’s why we get such a huge amount of highly radioactive waste. This reactor, completely different concept, takes that as fuel and burns it to very high percentages, and therefore it never has a fuel cycle. As John indicated, the burn cycle in one of these things would be maybe 60 years long. And so the life cycle of one power plant is just basically one burn of the fuel.
If this kind of technology can actually be produced, it’s like a silver bullet for some of the energy and environment problems, because the existing waste fuel and byproducts of manufacturing fuel for the existing nuclear power plants would provide electricity in this type of reactor that would power the entire United States for thousands of years. So, without ever mining any new fuel, and of course it would be a zero-carbon source. So, if you could move the society in this direction it would obviously be a powerful thing.
So, the question is, how does a project like this, or how do a group of people in the future collaborate on projects like this. So, I think the way that happens is that journals as we know them today evolve in a way where when you publish papers, for example like this particular article, and you may watch the video, what’s actually embedded as meta-data behind it is, in fact, the models that were used to produce this, and even perhaps their data sets, or test data sets.
So, if you go back to this idea of the Science Studio, what I’m going to do is essentially take this model and drag it and drop it essentially into this Science Studio, and what I’ve really done is to just get the model and the metadata, put it in there, and let me play with that model. So, I think this is how we’re going to see increasing collaboration and people building on each other’s work.
So, in this environment the model has a few basic parameters: burn-up, control, flux, and the power that’s created. And if I’m new to this concept of traveling wave reactors, and I want to understand how the wave actually propagates, I could say, I want to see from zero to 45 years what this actually looks like.
So, I’ll say, “Go do that simulation.” And so I can watch the model propagate, understand how the wave travels, and then look at, in this particular reactor configuration, how well does that particular reactor work. Here I might have some parameters that determine whether it was within the desired design specification or not. And here the red one would sort of indicate that in this case the flux parameter was not what we wanted.
And so the question is, as you’re exploring new things like this and you have a lot of computing power, is it possible to get the computer to help you more? So there’s a set of parameters that affect geometries and the structure of the fuel, and I’m sort of out of ideas, so I may circle these things and say, “Computer, run a parameter sweep across five reactor geometries.” So, it will load up different configurations, and then try to start doing a simulation of these things.
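The sweep itself is conceptually simple: run the same simulation over each candidate geometry, in parallel given the cores, and check the resulting figure of merit against the design envelope. `simulate` here is a hypothetical stand-in for the real reactor model.

```python
from concurrent.futures import ThreadPoolExecutor

def sweep(simulate, geometries, spec):
    # Run the simulation for each candidate geometry in parallel and
    # report which ones land inside the desired design envelope.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(simulate, geometries))
    lo, hi = spec
    return {g: (lo <= r <= hi) for g, r in zip(geometries, results)}
```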
And what we’re really doing is just exploring a space where intuition doesn’t really work that well. And of course, this may take a long, long time if I’m sitting here doing it on this one eight-processor machine. And so what I really want to think about is a future where we’ve got a lot of these very high-scale cloud-computing resources. So, as part of my university, or my research consortium, I might have access to some of these other facilities. And if I want to use them maybe I could accelerate this.
So you can sort of click and bring these things online, and reduce the time from maybe six hours to one hour, or if you use a really big one maybe it becomes literally seconds. Happily, I’m going to get this thing done in seconds. And what we really are trying to do is show that as we get more and more computer power, our ability to explore things that historically were very difficult becomes quite practical.
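The arithmetic behind those numbers is just idealized strong scaling: a fixed budget of core-hours divided across however many cores you can rent, discounted by a parallel-efficiency factor.

```python
def wall_clock_hours(core_hours: float, cores: int, efficiency: float = 1.0) -> float:
    # Idealized strong scaling: elapsed time = total work / usable cores.
    # Real jobs fall short of this (Amdahl's law, communication costs).
    return core_hours / (cores * efficiency)
```

Six core-hours of work takes six hours on one core, one hour on six cores, and effectively seconds on tens of thousands, which is the trade the cloud makes available to even a small team.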
This concept of the nuclear traveling wave reactor was actually invented by Edward Teller in his late years at Lawrence Livermore Labs. And he actually thought this could work, but it really wasn’t possible to pursue it until it became economical to do a lot of this kind of computer simulation and modeling.
So, John Gilleland was kind enough to loan us some of the models and data for this, and in this demo I’ll show you that we have the ability to actually look at this data. In fact, I can tip around a 3D model of one of the reactor configurations and, as time goes on and the burn changes, compare starting the burn in the middle versus starting it at the edge: what are the relative temperatures of these two as time goes on? And in the future I think that more and more these things won’t just be this type of modeling and simulation, but will have three-dimensional display capabilities. So I’ll be able to look at this model in three dimensions, and not just examine it in a flat, 2D environment.
So these become powerful tools to help people’s intuition in picking the optimal strategy for the pursuit of this kind of power system. Let’s say you also want to track the evolution of wind power. So, we’ll sort of set aside our modeling on the reactors, and I’ll say, “Computer, load my recent wind farm research.” So, here we’ll go out and get a different set of data. Here’s a terrain map, which we might get off of Bing Maps, or some other large-scale database. We’ll overlay the terrain and topography data, and then place the actual wind turbines on here that we’ve been tasked with studying.
And we expect that in this environment sensors are going to become very cheap, and if we’ve had this array there, we probably have been able to record the wind patterns in this particular geography over some period of time. And the question is, could we do a better job, either statically or even dynamically, in getting more power yield out of these turbines?
So, if these vectors indicate the historical average flows, we want to figure out whether we could make this turbine better. Today people are obviously looking at different fixed geometries, but they’re even putting sensors into the blades, and having mechanisms to adjust their aerodynamic properties as a function of the actual wind that’s impinging on them. And so, I might say, “Computer, load my flow models.”
So here, either by myself, or in conjunction with other people, I might have a set of models that I can use to understand how this particular configuration performs. So, let’s just pick one of these. Now what I want to do is zoom in on one of these turbines, and use it as a proxy for the others in understanding how this works.
So, now what I’m going to do is I’m going to lay this tablet down to simulate what I actually think will be people’s workstations of the future, where your desk surface is both a display and an interaction environment. But I also will have other displays that are in front of me. So, in fact, here I’ve got a curved piece of glass. And today it’s going to be a two-dimensional display, but I’m quite confident that in the relatively near future this itself could also be a real three-dimensional display. So when you’re trying to understand these problems, that might be even more powerful.
So, what I’m going to do is take one of these things and sort of move it up onto that display, and then I want to use gestures to basically couple myself into this particular simulation.
So, first I’ll tilt it back a little bit, so that I can see the whole thing, and then I’ll zoom in on one of the blades where I want to make the adjustments, and play the simulation so I can see what the laminar flow of the wind across this particular blade is. Now, here the parameters of this model currently show that the pitch angle of the blade is 4.1 degrees. Clearly, the red indicates that it’s not an optimal flow. And so I’ll basically circle this parameter in the model, cross off the 4.1, and put 10 degrees in here. And it will then go ahead and change the model. And as that changes you can see the pitch angle of the blade will change, and that has some obvious impact on the flows.
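The parameter edit in the demo amounts to re-evaluating the same model with a new pitch angle. Here is a toy sketch of that step; the quadratic `power_coefficient` curve peaking near 10 degrees is an invented illustration, not real blade aerodynamics.

```python
# Toy model of blade performance as a function of pitch angle: an
# invented quadratic whose optimum sits at 10 degrees, capped at zero.
def power_coefficient(pitch_deg, optimum=10.0):
    return max(0.0, 0.45 - 0.002 * (pitch_deg - optimum) ** 2)

# Re-run the "simulation" before and after the edit in the demo:
# the pitch angle changes from 4.1 degrees to 10 degrees.
before = power_coefficient(4.1)
after = power_coefficient(10.0)
```

Under this stand-in model the edited pitch scores higher, which is exactly the kind of feedback the red-to-green visualization in the demo is conveying.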
But, in this case it’s still perhaps not really what I want, and it becomes very difficult to describe parametrically what optimal flow across a wing like this is. But if I’m trained in the art, human intuition and visualization might give me a way of homing in on what I think is the right parameter set, in a way that’s difficult to do by parametric specification alone.
So, here I might want to actually couple myself into this model again. So, what I want to do now is just use gestures to essentially take hold of the model, and then I’ll just change some of the structure of the wing. If I make it fatter or tilt it a different way, I can see that actually didn’t help, so I’m going to push it back down a little bit. And I can stop here, because I can see, look, that’s about as green as I’ve been able to get, and so I’ll just stop there and say, all right, this is the level I want.
So, this ability is much like playing an Xbox game, where it just becomes sort of natural and intuitive for you to couple yourself into what you see. And this, I think, can be applied in some of the scientific arenas as well.
And so in that environment, when I’m done, I can take this and bring it back down onto the tablet surface again, transfer those model parameters to all the other wind turbine blades, and then run the calculation again and see what I actually managed to deliver in terms of improved energy efficiency.
So, with that I’ll stop the demonstrations, and hopefully I have shared with you some of the ideas about how computing itself has changed in its underlying structure, and why I think that’s important in creating a new way for people to interact with problems. Increasingly, to do that, we’re going to need multidisciplinary collaboration, and I think one of our challenges is to figure out how to make that more efficient.
And I think the ability to move from programming to modeling, and to couple together expert systems and many people’s intuition in trying to solve these problems or make correct policy choices, can be improved using these new computer technologies, with this kind of interaction in the local environment supplemented by very high-scale, data-driven facilities in the cloud and correspondingly large computational assets.
So, for us it’s an exciting time, one that I hope will encourage you to think about how you can operate in a world of the future, where by the time many of you actually get out of the university, or finish graduate school, it’s conceivable that these are the kind of things that you’ll be using every day, and not just the traditional notions of PCs and cell phones.
So, with that, thank you very much.