Rick Rashid: Day 3 Keynote, Professional Developers Conference 2008 (PDC2008)

Remarks by Rick Rashid, Senior Vice President, Microsoft Research
Microsoft Professional Developers Conference 2008 (PDC2008)
October 29, 2008

ANNOUNCER: Ladies and gentlemen, please welcome Senior Vice President, Microsoft Research, Rick Rashid. (Applause.)

RICK RASHID: Well, this is fun. You know, it’s always a little unusual to come out on stage right after they play a video that has you in it. It’s a little narcissistic. But, you know, it’s fun to be here and to give you a chance to see some of the things that we’re doing in Microsoft Research.

Now, I know over the last few days you’ve been hearing a lot about products, and in the field of computer science the products that we have today are really the result of long years of investment in basic research by government and industry.

You know, when you talk about things like cloud computing, for example, you know, if you’re my age or maybe even a little bit older, you might even remember some of the early DARPA (Defense Advanced Research Projects Agency) funded projects in the 1970s that focused on what today we would call cloud computing, things like the National Software Works.

But the investments by government and industry didn’t just lead to the underlying chips and software and things like that that we have today. It was really an investment that led to the notions we have today of computing. I mean, the fundamental concepts of what a computer is, what computation is, go back to people like Turing and Church. The same is true of notions like type theory; you know, today we talk a lot about types in our programming languages. When I was in school back in the 1970s, first learning a computer programming language, we had types.

Interestingly enough in those days we didn’t really understand types very well. Over the last 25 or 30 years, we’ve actually built a theory of types, a type theory that today allows us to analyze and understand programs in ways we were never able to do before.

So, it’s those kinds of investments that really allow us to move forward, and some of what I’ll be talking about today is really how the investments we’re making in basic research today create an environment where not only the field of computer science will prosper in the future, but all fields of science and engineering are going to be affected by what we’re doing today.

Now, ironically Microsoft Research was actually envisioned during a period when a lot of other organizations, a lot of companies were actually cutting back on their investments in basic research in the field of technology. The Microsoft board actually considered a memo back in 1990 that proposed that Microsoft start a basic research organization. And what was unusual about that is that Microsoft in those days was really a very small company. We had just crossed over that year a billion dollars in sales. We really were not the size company that people traditionally associate with long term basic research.

Now, I came into Microsoft in 1991. Microsoft went out and searched for a director for their new lab. They contacted me I think partly because I was someone that they were familiar with. I did a lot of work in operating systems. Microsoft was sort of an operating systems company in those days. So, that made a lot of sense.

My own background has been in developing operating systems and networking technology and programming languages over the years. I’ve got a few things on here just for fun. If you’ve ever heard of NUMA, Non-Uniform Memory Access machines, a type of computer architecture, multiprocessor architecture, that’s a term I invented back in the early 1980s for a paper that I wrote.

I popularized the term micro-kernel for work I did on a microprogrammed machine back in the early 1980s, an early personal computer.

I did one of the first networked computer games, which was Alto Trek for the Xerox Alto.

And then interestingly enough, I took literally the code for that game, rewrote it, redid it, and eventually we built a game called Allegiance, that Microsoft released in 2000, which was Microsoft’s first online only game. So, that was a lot of fun, too.

Now, if you use a Macintosh or an iPhone, which honestly I would not recommend — (laughter) — but were you to use one, you’d be using code that I wrote more than 25 years ago. The Mach operating system that I developed at Carnegie Mellon became the core OS for the NeXT computer, and then was taken by Steve Jobs and Avie Tevanian from NeXT to Apple, and became the core of the Mac OS and now, of course, the iPhone.

And honestly if you had asked me 25 years ago if I thought the code I was writing back then and systems I was designing were going to be running today on a cell phone, you know, my reaction would have been, what’s a cell phone. (Laughter.) This just goes to show that things really do survive and get used in funny ways.

And, of course, the work that we’ve done over the years, the work I’ve done at Microsoft had a huge impact on Microsoft’s products, Windows and, of course, many versions of UNIX as well.

Now, as I mentioned, basic research I think is core to progress in the field of technology, and when I got to Microsoft, I created an organization that has had a single mission for 17 years. And, in fact, I’m actually the Microsoft executive who’s been doing his job the longest, the same job for 17 years. We’ve always had exactly the same mission statement, I’ve never changed the way I’ve run the organization during that time. It’s been a constant throughout the years.

The Goal of Microsoft Research: Expanding the State of the Art

The key thing for us, I think the key thing for any basic research organization is you’re expanding the state of the art in the fields that you do research.

What’s important about that statement is it has nothing to do with Microsoft. It really has to do with the field of technology. It’s really saying that when we do research, we expect to be at the state of the art. We expect to be pushing the envelope. Because really that’s the only way we’re going to be valuable to Microsoft or for that matter to the broad field of technology. We need to go where the technology takes us.

Now, when we have great ideas, when things do work out, then the second part of that mission statement comes into effect, and that really is when we’ve got great ideas, let’s move them into products as fast as we possibly can. And you’ll hear a little bit later today about some of the things that are actually making their way into our products. And you heard in the video earlier that pretty much every product you see from Microsoft today has something that has come out of the research organization or was built on technology that the research organization created.

Ultimately the goal of Microsoft Research and really the goal of basic research in a society is to make sure that we have a future, to make sure that Microsoft has a future, to make sure that the field of technology has a future.

And again I think the key point to recognize there is a lot of companies were peers of Microsoft in 1991 when Microsoft Research got started. A lot of those companies don’t exist today, and generally speaking they did not make those kinds of long term investments in basic research.

Now, we’re run and organized a lot like a university computer science department. I think again that’s one of the things that distinguishes Microsoft Research from a lot of what has gone before in basic research in computer science and industry, and what’s going on today.

We look like a university in many respects, we act like a university in many respects, we work with the academic community in a significant way. We’re very open in what we do. You can go out to the research.microsoft.com Web site and pretty much find out everything that we do and anybody who works for us.

We’re aggressive about publishing in the open literature. Peer review publication is critical, we believe, to the success of our efforts.

And we invest a huge amount in working with universities. And, in fact, over the 17 years that I’ve been running Microsoft Research, more than 15 percent of the monies that I’ve been able to manage for Microsoft to do basic research have gone directly to universities in the form of research grants, fellowships, technology laboratories, pretty much anything you might imagine.

We also have hundreds of visitors and interns each year, and I’ll talk a little bit more about that later, but again it’s very much like an academic style environment.

Now, the video I think earlier said 800 PhD researchers; we’re now about 850. I filmed the video earlier this year, so we’ve grown in the meantime. And we’ve grown very steadily.

And to put that number of 850 PhD researchers into perspective, that’s a larger faculty, if you want to think of this as faculty, than the entire faculty of Carnegie Mellon University or Brown University. That gives you a sense of how large we are.

Building a group of that size over the course of 17 years, it’s equivalent to having created a Berkeley computer science department faculty every year for 17 years. That’s sort of what we’ve had to do.

And we’ve really built that organization around the world. Redmond is certainly our largest single location, but our second largest facility is actually in Beijing. In fact, I’ll be heading to China just after this event for our 10th anniversary celebration in Beijing.

We have a large research lab in Cambridge, England. We have a new lab, for example, now in Cambridge, U.S., Cambridge, Mass, but we don’t call it Microsoft Cambridge, because the other Cambridge got the name first, so we call it Microsoft New England. We have a new facility in the last three years in Bangalore. We have a large group and growing group now in the Bay Area centered around the Mountain View and Silicon Valley campus. So, it gives you a sense of how we’ve grown and what we do.

We have, I think, easily what people would say is the most distinguished research staff in the world now in the field of computer science. Here are just some of the medals and awards we’ve gotten this year. It’s been very gratifying to see the amount of recognition that our people are getting. Here is another slide.

To put some of these numbers in perspective, we have more members of the National Academy of Engineering than IBM, than the University of Washington, the whole university, than Bell Labs. I mean, we really are, I think, the single strongest organization in the field of computer science.

And as I said, our key goal is pushing forward the state of the art, and we measure ourselves much like a university might measure itself, in the impact we’re having in the field, in particular the impact we’re having on the ideas and the technologies in the field of computer science.

The Growth of Microsoft Research

We’ve published more than 4,000 papers over the last 17 years, and if you go to a conference today, a major conference in the field of computer science, the chances are awfully good that Microsoft will have anywhere between 10 and 30 percent of all the papers at that conference, that those papers will have Microsoft authors. I mean, that’s just a huge impact in the field. We crossed over IBM research a number of years ago in terms of the publication rate, and our rate of publication keeps going up every year.

We’re also participating in the academic community in a really significant way. Our people are on program committees. We work with universities. We support all sorts of programs, educational programs, research programs around the world.

We run the largest PhD internship program in the field of technology in the world. Each year we’ll have over a thousand PhD interns working in some part of Microsoft Research. And to put that number in perspective, there are as many or more graduate student work hours spent in Microsoft Research each year as there are in virtually any university, even the largest universities in the world. So, in some sense you could argue maybe we’re even more like a university than you might imagine.

Within the United States alone probably 20 percent of the graduates that come out of PhD programs have worked at Microsoft Research at some point in their career. So, again it gives you a sense of what’s going on there.

Not only do we have our own labs, which are shown on this slide, but we’re also working in concert with governments and universities and organizations around the world. You can see some of our collaborative institutions. We have a joint research center with INRIA in France. We have a computational systems biology center in Trento, Italy with the EU and the Trento government and the Italian government. We have joint research laboratories in China. In fact, we actually run a joint PhD program with Shanghai Jiao Tong University in China, and we do a number of educational programs there. We have a joint institute, what we call a virtual institute, in Latin America.

So, we’ve expanded around the world. We’re having an impact in almost every geography.

Now, we also drive technologies into products. A lot of the things that you think about as Microsoft today have come out of Microsoft Research at some point. I was running the DirectX team in the very early days, because a lot of those underlying 3D technologies came out of Microsoft Research and many of the people did.

I ran what became the Windows media division in its very earliest days, because all of that work started as research projects inside Microsoft Research in 1992 that really led to both our work in interactive TV but also our work in streaming Internet media and media more broadly.

And things like Windows Media Audio and the Windows Media Video codec, those all come out of the work we’ve been doing within Microsoft Research.

Our first e-commerce group in the company, I was one of the people running that.

All the work we have done over the years in data mining and SQL has been joint work with the SQL product team.

So, just a huge amount of impact in terms of the products that we generate.

The Tablet PC was originally conceived in our research lab in Cambridge, England, and Chuck Thacker, who helped to found that lab, did the reference design for the Tablet PC that then became used by a lot of companies.

And then new products, things like Robotics Studio, some of you have probably heard about that in the last couple of days, those are now coming out of Microsoft Research.

Tons of technology transfers, I’m not going to go into these in detail, but I’ll mention I think it was yesterday they announced the availability of the CCR and DSS technologies, and we’ve been partnering with NASA. This is work going on with Robotics Studio to create an environment that people can go out to and actually program their own Mars Lander, and use the Robotics Studio technologies as a way of seeing what it would be like to run a robot out in space and to be able to get access to the imagery and the models and so forth. There’s a contest associated with that called RoboChamps. There’s a RoboChamps Web site associated with that. And if you’re interested in downloading these technologies and playing with this and getting involved with it, it would be a lot of fun.

And, in fact, the core technologies in Robotics Studio are the things called CCR and DSS. These are our Concurrency and Coordination Runtime and our Decentralized Software Services runtime. Those are now, as of yesterday I think, out and available. There’s a reference to the Web site.

And these technologies, although they were developed originally for our robotics work and are really allowing us to push into the field of robotics, turn out to be just broadly valuable and interesting to a lot of people. In fact, they’re now being used by a number of companies like Siemens for some of the processing work that they’re doing.

So, these are not only being used for sort of traditional robotics applications, but any kind of .NET or embedded application where you want to be able to manage concurrency and manage a lot of things going on at the same time in a large scale distributed environment.
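The port-and-receiver style the CCR encourages can be sketched roughly in plain Python. This is only an illustration of the coordination pattern, not the real .NET library; the names `port`, `receiver`, and the doubling "handler" are invented for this example.

```python
# Rough sketch of a CCR-style port/receiver pattern: messages are posted
# to a port, and a single receiver coordinates handling them, so user
# code never takes explicit locks.

import queue
import threading

def receiver(port, results, n_items):
    """Handle n_items messages posted to the port, one at a time."""
    for _ in range(n_items):
        item = port.get()          # block until a message arrives
        results.append(item * 2)   # the "handler" for each message
        port.task_done()

port = queue.Queue()
results = []
t = threading.Thread(target=receiver, args=(port, results, 3))
t.start()
for msg in [1, 2, 3]:
    port.put(msg)                  # post messages to the port
t.join()
print(results)  # [2, 4, 6]
```

The single receiver drains the port in FIFO order, which is why the result is deterministic even though a second thread is involved.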

Now, obviously we’re continuing to push. I mean, those are some examples of things that we’ve already transferred into our product teams. We’re continuing to push in our research in a number of different areas.

The Value of Basic Research

But it’s worth talking for a second — I mean, I started out by saying basic research is incredibly valuable. In some sense what we see today in the field of technology is really the result of investments in basic research.

But we’re also in a period right now much like I think was true in the early 1990s when we started Microsoft Research where a lot of people are sort of questioning what kind of investments should be made in basic research and how should we do it.

In fact, over the last eight years or so, during the current administration, there have been significant changes in the way the federal government funds and interacts with research in the field of technology. The role of DARPA has changed considerably, the role of the NSF in funding research has changed, and it’s really caused people within the broad research community to question sort of where we’re going as a country, where we’re going as a world in these areas. And there are reports from the National Academy of Engineering, for example, that talk about that, that talk about the need for renewal.

I think part of the problem that you run into when people talk about basic research is that they often misunderstand why we do it. What is the value of basic research? Why should you be investing in it? Why should a company like Microsoft be investing in basic research and looking out toward the future?

When you look at basic research, it’s easy to see the products that basic research produces. I mean, clearly basic research is a source of IT, is a source of technology. I just talked about some of the technologies that have gone into Microsoft products.

And that’s important, I mean, that’s really valuable, but that’s an output of research, it’s not I think the reason you should do it.

Similarly, you know, basic research groups, people often say, boy, you’ve got lots of smart people, you should be really great at solving problems. And we are great at solving problems. I love solving problems. When a product group comes to me — or anybody, frankly, comes to me and they’ve got a hard problem to solve, I get really excited about it, I get jazzed, I want to work on it; it’s fun, right? But it’s an outcome of having a basic research group; it’s not the reason you should do it.

Basic research groups are early warning systems. But again that’s a great thing, we do see the future a little bit, we know what’s going on in the labs, we know what some of the impacts might be, but again I wouldn’t think of that as a reason why you should do it.

I think the reason to invest in basic research, whether this is true for a country or whether it’s true for a company like Microsoft, is that basic research gives you an ability to survive when things go wrong. It’s about agility.

If you go back to the writings of Vannevar Bush that really were the basis of what became the U.S. investment in the National Science Foundation and the research and development infrastructure that we have today, what he wrote about wasn’t the technologies that would come out of the research you invested in; it was having a basic research infrastructure so that if something went seriously wrong, if you had a new war or a famine or a disease, you would have the infrastructure of smart people and the wealth of technology you had built up over the years that would let you address those issues, let you deal with them, let you survive.

And remember Vannevar Bush and people in that era had just come out of World War II where research was critically important in helping to win that war for the allies.

So, for a company like Microsoft or for the field today, basic research gives us an ability to change quickly. Let’s say we have a new competitor, let’s say we have a new technology that comes to the fore, let’s say the business climate changes; it’s important that we have that reservoir of investment that we can draw upon in difficult times, even difficult economic times, to allow us to innovate out of our problem.

Now, looking forward, there is huge opportunity for research in the field of computer science to have an incredible impact in many, many, many different areas. I’m not going to talk about everything, but I am going to talk about a few areas that I think are very important and interesting that impact not just Microsoft but the world more broadly, and give you a chance to see some of the technologies we’re creating.

The Impact of Research on Software and Software Engineering

One of the areas I wanted to highlight is the impact that research in the field of computer science is having on software and software engineering, partly because it’s the PDC and you’re all developers, and I’m a developer, too; you know, I’ve written maybe 600,000 lines of code across four operating systems at this point, not to mention the games and a few other things.

Now, going back to — well, really since the beginning of Microsoft Research, we’ve made significant investments in research in program analysis and software engineering technologies.

From the late 1990s through the early 2000s, we made incredible strides in being able to prove properties of large-scale programs. We developed something called SLAM, and some of you may have heard about that. It was productized as part of the Vista wave as the Static Driver Verifier that’s part of the Vista DDK, the Driver Development Kit.

And what was interesting about SLAM is that it was the first major use of pure proof technology for analyzing large-scale software components, in this case drivers. The idea behind SLAM is that you could take C, C++, or C# kinds of code and transform a program into a Boolean program, given a set of things you wanted to prove. So, if you had a set of mathematical rules or formulas or models that you wanted to be able to prove true or false of that particular piece of code, the technology we developed with SLAM allowed you to take the original program and the set of things you wanted to prove, written in a mathematical specification language, and from those two produce another program, called a Boolean program, a program with only true and false variables, that could then be analyzed and proved using modern theorem-proving techniques.

What this allowed us to do was to create a process by which we could take important properties — today I think it’s over 100 properties of programs that we want to be able to demonstrate or prove, and be able to prove that by simply examining the source code. So, it’s an exciting new way of thinking about the software engineering process.
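The Boolean-abstraction idea can be sketched very loosely like this. The real SLAM works on driver source code with a theorem prover; here the rule (locks must alternate acquire/release), the state names, and the example paths are all invented for illustration.

```python
# Sketch of the SLAM idea: abstract a program down to the Boolean facts a
# rule cares about, then exhaustively check the resulting "Boolean
# program" for rule violations.

RULE_OK, RULE_LOCKED, RULE_ERROR = "ok", "locked", "error"

def step(state, event):
    """Transition of a simple locking rule: acquire/release must alternate."""
    if state == RULE_OK and event == "acquire":
        return RULE_LOCKED
    if state == RULE_LOCKED and event == "release":
        return RULE_OK
    return RULE_ERROR  # double acquire, or release without acquire

def check(paths):
    """Explore every abstracted path; report whether all obey the rule."""
    for path in paths:
        state = RULE_OK
        for event in path:
            state = step(state, event)
        if state == RULE_ERROR:
            return False
    return True

# Two abstracted driver behaviors: one correct, one with a double acquire.
good = [["acquire", "release"], ["acquire", "release", "acquire", "release"]]
bad = [["acquire", "acquire"]]

print(check(good))  # True
print(check(bad))   # False
```

The point of the abstraction is that the checker only ever sees a tiny finite-state program, no matter how large the original source is.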

And we’ve been able to carry this forward to today, being able to prove properties of programs of millions of lines. In fact, there are some properties we’ve now proven of the entire Windows kernel.

Now, SLAM was productized as the Vista Static Driver Verifier and we’ve enhanced that and now we’ll continue on with Windows 7.

What was interesting about SLAM was it was a system for proving what are called safety properties of programs, things that you might think of as associated with types. And I mentioned earlier this notion of type theory and the importance of type theory in the field.

Well, even more exciting, additionally exciting, is the fact that we can now also prove what are called “liveness properties.” Byron Cook, one of our researchers in our research lab in Cambridge, England, has developed a new set of techniques that basically let us prove whether programs halt, right. So, you’ve probably heard the notion that you can’t prove this; you know, there’s the halting problem, you can’t really prove that a program halts. Well, you can. You can’t always do it, right, but what Byron has shown is that for very large classes of programs, most of the ones you actually write, we can now actually prove termination, or what are broadly called liveness properties. So, not only can we say does this piece of code terminate, but we can also say does this thing happen, right. If you take a lock, will you eventually free it? That’s a liveness property.
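One classic way such termination proofs work is via a "ranking function" that is bounded below and strictly decreases on every iteration. The toy loop, the ranking function, and the checker below are invented for illustration; the real tools find and verify such functions symbolically rather than by running the loop.

```python
# Illustration of a termination argument: if some ranking function is
# bounded below by 0 and strictly decreases every iteration, the loop
# must halt.

def loop_body(x, y):
    """One iteration of a toy loop: while x > 0: y += x; x -= 1."""
    return x - 1, y + x

def ranking(x, y):
    return x  # candidate ranking function: the loop counter itself

def check_termination(x, y):
    """Run the loop, asserting the ranking function decreases each step."""
    while x > 0:
        r_before = ranking(x, y)
        x, y = loop_body(x, y)
        assert ranking(x, y) < r_before, "ranking did not decrease"
    return True  # decreasing and bounded below, so the loop halted

print(check_termination(5, 0))  # True
```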

So, we can now prove those things, and we’re working to get those technologies into our products, to use our products for analysis purposes, but also to get the technologies out to developers in the next few years.

But it’s super exciting, it’s a completely new area. It’s opened really a new field of logic, and Byron is really getting huge accolades for his work.

We’ve also been expanding on the work we’re doing in software engineering, and for those of you here at the PDC, there are a number of sessions now talking about some of these new research projects. They look at things like concurrency analysis, with a system called CHESS that we refer to here; code contracts, the idea being to have very well-defined code contracts in .NET programs; and Pex, which is really a tool for analyzing and figuring out what you should be testing in a program, what the key areas to analyze in a piece of software are, that gives you information about it and lets you find interesting things that need to be looked at.

But we’re also just expanding our knowledge of the field. One of our researchers, Yuri Gurevich, just recently proved Church’s thesis. That had been outstanding for 40, 50 years now. It’s a super exciting way of thinking about this notion of what computability actually means, especially what you can compute with recursive functions, which is what Church’s thesis is all about.

Pex, CHESS, managed contracts: Those are all technologies available for download. There are talks about that. We have a booth here associated with it so you can get a sense of what’s going on there.
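The code-contracts idea can be sketched in a few lines. The real feature is a .NET library; the `requires`/`ensures` helpers and the `divide` function here are invented stand-ins to show the precondition/postcondition style.

```python
# Sketch of the code-contracts idea: callers must satisfy preconditions,
# and the implementation promises postconditions that tools (or runtime
# checks) can verify.

def requires(cond, msg="precondition failed"):
    if not cond:
        raise ValueError(msg)

def ensures(cond, msg="postcondition failed"):
    assert cond, msg

def divide(a, b):
    requires(b != 0, "divisor must be nonzero")  # precondition on callers
    q = a // b
    ensures(a == q * b + a % b)                  # postcondition we promise
    return q

print(divide(7, 2))  # 3
```

A static checker can then prove, at compile time, that no call site ever violates the precondition, which is exactly the kind of property the tools above look for.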

Now, there’s been a lot talked about in the sessions over the last couple of days about software in the cloud and managed software environments, the services transformation.

Research to Program New Kinds of Cloud Infrastructures

One of the issues that we’ve been looking at on the research side is, okay, how are we going to program these new kinds of cloud infrastructures; and by that I mean, what are the new languages, what are the new technologies, that would let us harness the power of these cloud infrastructures in much simpler and easier-to-use ways?

One of the technologies we’ve developed at our research lab in Silicon Valley is something called Dryad, along with DryadLINQ, which really harness the power of cluster computing. These are technologies that basically fit in with LINQ, the .NET language-integrated query technology, and allow you to think about forming computations. Basically you can think of what Dryad is doing as creating a kind of very sophisticated query, input to a query engine, that can then be managed across thousands or tens of thousands of machines. It gives you tremendously more power than you might get with something like a MapReduce-like mechanism, because it gives you a much more arbitrary computing graph or processing graph that you can work with.

Plus because it’s built against these sort of standard programming languages, .NET programming languages like F# or LINQ, it gives you a tremendous amount of ease in order to be able to build these applications.

And this just gives you a sense of sort of what’s going on. You can find papers about Dryad LINQ out on the Net. We’ve begun to get this technology out to universities and university partners so that they can begin to play with it. I think you should expect to see this more generally available out of Microsoft Research in the coming new year.
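The programming style can be sketched loosely in Python: you write a declarative query out of composable operators, and a runtime like Dryad would turn that into a dataflow graph scheduled across many machines (here everything just runs locally, and the operator names and sample data are invented for illustration).

```python
# Loose sketch of the DryadLINQ style: a query built from Select/Where/
# GroupBy-like stages forms a dataflow graph a cluster runtime could
# distribute; locally it is just lazy iterators.

def source(data):
    return iter(data)

def select(stream, fn):           # like LINQ's Select (map)
    return (fn(x) for x in stream)

def where(stream, pred):          # like LINQ's Where (filter)
    return (x for x in stream if pred(x))

def group_sum(stream, key_fn, val_fn):   # an aggregation stage
    out = {}
    for x in stream:
        out[key_fn(x)] = out.get(key_fn(x), 0) + val_fn(x)
    return out

logs = [("a", 2), ("b", 5), ("a", 3), ("c", 1)]
query = where(select(source(logs), lambda kv: kv), lambda kv: kv[1] > 1)
totals = group_sum(query, key_fn=lambda kv: kv[0], val_fn=lambda kv: kv[1])
print(totals)  # {'a': 5, 'b': 5}
```

Because each stage only consumes the previous stage's stream, the pipeline forms an arbitrary directed graph rather than the fixed map-then-reduce shape.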

Now let’s go on from there.

Feng Zhao: Using Sensor Networks to Address Energy and Environmental Issues

Now I mentioned the impact that computing research is having in a variety of different areas. I think one of the areas that people are particularly concerned about these days is energy and the environment, not least because computers use a lot of the world’s energy these days, so we’re also sinners, and we need to, in some sense, redress our sins and do what we can to improve things. So I’m going to invite out now Feng Zhao, who is one of our principal researchers in our research lab in Redmond, to talk about research that he’s doing using sensor networks to really address these new areas of energy and environment. (Applause.)

FENG ZHAO: Thank you, Rick.

I’m Feng Zhao from Microsoft Research. And I’m here to tell you about some of the exciting research we are doing on sensors and the computing technology that helps us to understand energy use as well as the impact of human activities on the environment. I’m going to talk about three specific uses of sensing and computing technology here today. I’m going to tell you a bit about how we use sensors and sensing to understand how energy is used in buildings like this. I’m going to tell you about sensing in our cloud computing infrastructure, which you have been hearing about in the past few days. And I’m going to tell you about sensing in the natural environment.

Now energy is a scarce resource, as we all know, and we can improve energy efficiency on two separate fronts. We can make our computers much more energy efficient, and we can actually use computing technology to improve many of the things that we do in our lives.

Now servers, desktops, many computing devices consume a large amount of power, electric power that is. It’s about 1.5 percent of total U.S. electricity use, according to a 2006 EPA report. That’s quite a lot of energy being consumed by those devices, and obviously we can do better and make those devices much more energy efficient. The other front is the many things we have in our daily lives, such as understanding how this convention center and this hall are being cooled and heated, and how the energy is being used.

Now all of this raises the question of how we get visibility into the operations of infrastructures like this. We think energy-efficient computing is a very exciting new research frontier for computer science, and this is an area where Microsoft Research has very active work going on. Think about the devices that we all carry and use. Think about embedded devices, such as cell phones, that run on very small batteries, tens or hundreds of milliwatts, and think about servers in data centers, where an entire data center typically runs off, say, tens or hundreds of megawatts. So you see the power difference in terms of the amount of energy used. And there are many interesting tradeoffs between power, application performance, and a number of other system metrics.

Now when I went to school years ago, I learned how to tell how long my program was going to run on a particular computer and how much memory it was going to use, but I didn’t learn how much energy, how much power, it was going to consume. I think for the next decade, the research community needs to look at how much power, how much energy, these applications are going to consume. I think that’s critical to helping us understand how to make these programs and those devices much more energy efficient.

And so these are some of the interesting research questions and tradeoffs that we’re working on in Microsoft Research. Now I want to show you a sensor device that we have designed and prototyped at Microsoft Research. I’m holding in my right hand a sensor device that has sensors that collect temperature, humidity, and other environmental parameters. It’s a pretty small device, and it runs off batteries, and therefore it actually uses a very small processor, a small amount of memory, and a very low-power radio in order to last for a while; changing batteries on those devices is a lot of work. Now the guts of the device is, in fact, a very small circuit board, about the size of my thumb. It has a 16-bit processor and a little bit of memory: 10K of RAM and 40K of ROM. Think about the Vista desktop that you may have, with probably 4 gig of memory; that’s actually five more zeros than the amount of memory that I have in my hand. So think about how these devices are going to be programmed and used to gather the information that we’re going to use to improve energy efficiency.
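The "five more zeros" comparison checks out with a quick back-of-the-envelope calculation (the specific sizes are the ones quoted above):

```python
# Compare the sensor node's RAM against a Vista-era desktop's memory.

sensor_ram = 10 * 1024        # 10K of RAM on the sensor board
desktop_ram = 4 * 1024**3     # 4 gig on the desktop

ratio = desktop_ram / sensor_ram
print(ratio)  # 419430.4, i.e. roughly five orders of magnitude
```

That factor of about 400,000 is why programming models for these nodes look nothing like desktop programming.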

Let's actually look at some of these sensors in action. We have instrumented this hall with about 90 sensors. These sensors are high up near the ceiling, about 30 feet above the ground, and you probably can't see them because of the lighting. If you look at this image, this is an aerial view of the LA Convention Center, and the square that you see is the area of this particular hall. You can see that there are eight rows of sensors collecting temperature and humidity, and the stage where I'm standing is at the top.

Now, what I'm going to show you is the actual sensor data that's been collected over the course of the last few days as the PDC has been going on, and I'm going to show you yesterday's data, starting at around 5:00 a.m. As you can tell, the room was quite cool during the night, and things started to heat up a little bit when the lights were turned on just before the keynote. And as the audience, as you guys, come into the hall, you can see that the temperature rises a bit. The keynote starts at 8:30, the air conditioning system kicks into high gear, and you can see those blue spots of the air vents blasting out cool air in order to regulate the temperature.

You can see some of those hot spots near the front, near the stage. These are probably due to some of those exciting announcements that Microsoft made yesterday about our cloud computing and various other things; obviously, that excitement is pretty contagious. So visibility like this, into how the temperature actually varies within the room, is very useful for understanding how the HVAC system, the cooling system, works. When we first started putting up the sensors a couple of days back, Ray Keshall, the engineer who manages the facilities and the cooling system here in the convention center, said, gee, I've been working here for quite a number of years and this is the first time I'm going to get my hands on such a detailed heat map of how this hall is cooled. So I'm really looking forward to sharing this data with him, and I hope that information like this can really help improve the energy efficiency of things like building cooling.

Now, a little bit of the technical detail on these sensors. As you see, these are arrays of sensors, and the way it works is that each sensor collects information from its surroundings and passes it from one device to another, just like a bucket brigade. Then there are a couple of gateway servers that collect the sensor data from all these devices and send it to the cloud. And through Microsoft technology, through Virtual Earth, we match up the sensor data and the devices with the aerial image; that's how the devices you see on the screen are overlaid on the actual layout of this room.
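The bucket-brigade idea can be sketched in a few lines: as the "bucket" passes along the chain of nodes toward the gateway, each node adds its own reading, so the gateway ends up holding everything. A toy illustration in Python (node names and readings are invented):

```python
# Toy sketch of the "bucket brigade" relay: each node adds its local
# reading to the bucket and forwards it; the last hop is the gateway,
# which ends up holding every reading for upload to the cloud.

def bucket_brigade(nodes):
    """Pass a bucket along the chain; each node contributes its reading."""
    bucket = []
    for name, reading in nodes:
        bucket.append((name, reading))   # add local data, forward the rest
    return bucket

# Three nodes in hop order, each with a temperature reading (invented):
nodes = [("node-1", 20.9), ("node-2", 21.3), ("node-3", 22.0)]
gateway_data = bucket_brigade(nodes)
print(gateway_data)
```

Real low-power radio routing involves retransmission and duty cycling, but the data flow is this simple hop-by-hop accumulation.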

Now, these devices produce a fair amount of data. In fact, each day over the past few days they've collected about 100 megabytes of sensor data. You can tell that as we deploy more and more of these devices, there's a huge amount of data that needs to be stored in the cloud, made sense of, and then put to use.
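A quick back-of-envelope calculation shows why storage becomes a cloud-scale problem: about 90 sensors in this hall produce roughly 100 megabytes a day, and the talk later mentions some 10,000 sensors across Microsoft's data centers. Scaling linearly (an illustrative extrapolation, not a figure from the talk):

```python
# Back-of-envelope: per-sensor data volume from the hall deployment,
# extrapolated linearly to a 10,000-sensor fleet. The extrapolation
# is illustrative; only the 90-sensor / 100 MB figures are from the talk.

sensors_in_hall = 90
mb_per_day_hall = 100

mb_per_sensor_day = mb_per_day_hall / sensors_in_hall

mb_per_day_fleet = mb_per_sensor_day * 10_000
gb_per_year_fleet = mb_per_day_fleet * 365 / 1024

print(round(mb_per_day_fleet))      # roughly 11,111 MB per day
print(round(gb_per_year_fleet))     # roughly 3,961 GB per year
```

Several terabytes a year from environmental sensing alone, before adding server and power telemetry, which is why the data lands in the cloud.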

Now let's look a bit into the details of the sensor data. On the left I'm showing you plots of the sensor data from a few of the devices instrumented here. The red curve represents the temperature readings of a sensor near the front of the stage; this is, in fact, one of the sensors right there. As time goes on from midnight to around 7-ish, there are some spikes in the temperature; as I mentioned, that's due to effects from the lighting and the heat from the lamps. The blue curve represents the temperature readings in the back of the room, and in particular that sensor is near the air conditioning vents. You can clearly see that that reading is much better regulated.

Now information like this probably gives you a sense about which part of the room is more comfortable if you want to actually be cool, or where you want to sit if you want to feel the excitement of the keynote.

Now I'm going to move back to talking about the sensing technology and how we can use the same technology to help us improve the energy efficiency of our cloud-computing infrastructure. As we know, as the infrastructure scales up, higher power density and higher cooling requirements give rise to much higher energy bills. And at Microsoft Research, we've been designing these sensors and putting them in our data centers to monitor where the hot spots are and how well the machine rooms are cooled.

Now, this is important for Microsoft, to make those data centers the most energy-efficient operations in the world. It's also good for our customers, and it's good for the world.

And here is an example of the heat map that's been collected from sensors currently deployed within Microsoft's data centers. In fact, there are 10,000 of these sensors deployed across the data centers that Microsoft runs. The top of the image shows the front of a server rack. As many of you know, in computer server rooms the machines are arranged into racks, and the front of the rack is typically the cold side: the cool air comes into the server, takes the heat out of the server, and comes out the back.

The bottom part shows you the temperature distribution at the back of the same rack of servers. Now, you see some of the temperature variance, and sometimes we see an even larger variance depending on how hard the machines are working and how well the cooling system is working to take the heat out. Information like this gives us various opportunities to optimize, for example, where to place computing-intensive workloads, and how to move these things around dynamically as resources change. So this is just one example of how the sensor information can be used to optimize, to make data centers more energy efficient.
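The placement optimization mentioned here can be sketched in its simplest form: given the latest rack temperatures from the sensors, put a compute-intensive job on the rack with the most cooling headroom. Real placement weighs many more metrics; the rack names and temperatures below are invented:

```python
# Simplified sketch of temperature-aware workload placement: choose the
# rack with the lowest current temperature reading. Data is invented;
# a real scheduler would also weigh power, network, and capacity.

rack_temps_c = {
    "rack-A": 27.5,
    "rack-B": 24.1,
    "rack-C": 31.0,
}

def place_workload(temps):
    """Return the rack with the most cooling headroom (lowest temp)."""
    return min(temps, key=temps.get)

print(place_workload(rack_temps_c))   # rack-B
```

Rerunning this as sensor readings change is the "move things around dynamically" part of the idea.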

Now, the Data Center Genome Project at Microsoft Research is working on precisely that: mapping out the genome of a data center. We aggregate information from the environment, the temperature, humidity, and so on, together with information from the servers: how the processor, the memory, and other components are working, the network traffic, the power use, the cooling system.

Once you aggregate all this information, you have much better visibility, sometimes even real-time information, about how the entire data center operates: how many more data centers do I need to build, do I have enough power, do I have enough cooling? This is some of the really exciting work that's going on to make these huge, warehouse-scale computers much more energy efficient.

Now I'm going to switch gears a little bit and bring you back to the world where we're using sensing technology to gather information about how our environment is doing. On this map you see a number of dots; each dot represents a particular sensor deployment. This one is from the Swiss Alps. Scientists in Switzerland have deployed a large number of these sensors to try to understand the mountains, the watershed, and how human activities affect those natural environments.

Marc Parlange is a well-known hydrologist from EPFL in Switzerland. He and his research team have set up a number of these deployments high up in the mountains. And Nick from the Swiss Snow and Avalanche Research Institute and his institute have also set up a large number of deployments. What they'd really like to do is to be able to share that instrumentation, and share the data they have collected from their separate deployments, among themselves and also with their research colleagues around the world. What Microsoft Research provides is a way to bring all this data together and make it available on a cloud-computing platform.

So here is the underlying technology that we're providing to the research community. It's the SenseWeb platform, and it has SensorMap as the front end, a Web-based portal to browse and interact with the data. The platform allows you to share sensors. So if you are an individual researcher or an individual team, and you have some data and some sensors, you can register with this platform and push data to it, and other people can see the information. Now, imagine that I can aggregate all these individual research data streams; I can do very interesting analysis across a much larger spatial and temporal scale. So this is really the power of the Web: you bring all the data into the cloud, and then feed it out to the users.
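The sharing model described here can be sketched as a registry that teams register sensors with and push readings to, and that anyone can query across deployments. This is an illustrative toy in Python, not the platform's actual API; the sensor names are invented:

```python
# Hypothetical sketch of the sharing model -- NOT the real platform API.
# Teams register sensors and push readings; queries aggregate streams
# from separate deployments into one time-ordered view.

class SensorRegistry:
    def __init__(self):
        self.streams = {}   # sensor id -> list of (timestamp, value)

    def register(self, sensor_id):
        self.streams.setdefault(sensor_id, [])

    def push(self, sensor_id, timestamp, value):
        self.streams[sensor_id].append((timestamp, value))

    def query(self, sensor_ids):
        """Aggregate readings across deployments into one sorted stream."""
        merged = []
        for sid in sensor_ids:
            merged.extend(self.streams.get(sid, []))
        return sorted(merged)

registry = SensorRegistry()
registry.register("epfl/alps-1")        # one team's deployment (invented id)
registry.register("slf/davos-3")        # another team's deployment (invented id)
registry.push("epfl/alps-1", 100, 4.2)
registry.push("slf/davos-3", 101, -1.5)

combined = registry.query(["epfl/alps-1", "slf/davos-3"])
print(combined)
```

The point of the design is in `query`: once separate teams push into one registry, analysis across a larger spatial and temporal scale is just a merge.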

We released this platform a couple of years ago, and it has now been adopted by a number of research teams at universities across the world. I mentioned the Swiss experiment in Switzerland; we also have a university in Singapore, and at Harvard University researchers are using it to gather pollution data on the streets of the City of Cambridge.

Now, let's take a look at some of the data coming from those sensor deployments. This is an aerial view of one of the sensor deployments I showed on the earlier map. You can see some of the blue sensor stations near the town of Davos in Switzerland, which, by the way, is a beautiful place I visited a couple of weeks ago. And you can see that each sensor station is tracking information about its surroundings. Let me just mouse over one of the sensors here. It returns about a dozen variables, such as the surface temperature, the air temperature, the relative humidity, and the solar radiation at that particular spot.

Now, being able to collect all these different data streams, to see the trends, and to understand how they correlate with other variables that other groups have been collecting is enormously helpful for understanding how these things have changed over the course of time. So, in fact, we can bring up some of those sensor data streams; on the left we see some of the data plots. I plotted the air temperature and the surface temperature against the solar radiation, and look at how well they correlate, how the peaks are aligned.
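The "how well they correlate" in the plots can be quantified with a Pearson correlation coefficient. A small sketch in Python, with invented sample values standing in for a day of temperature and solar radiation readings:

```python
# Sketch: Pearson correlation between two sensor streams. The sample
# values are invented for illustration, not data from the deployment.

from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

air_temp = [2.0, 4.5, 7.0, 9.5, 7.5, 4.0]      # degrees C over a day
radiation = [50, 300, 600, 800, 550, 100]      # W/m^2, same hours

r = pearson(air_temp, radiation)
print(round(r, 2))   # close to 1: the peaks line up
```

A coefficient near 1 is the numeric version of "the peaks are aligned"; comparing coefficients across deployments is one way the combined data streams become more than the sum of their parts.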

In another deployment, scientists are collaborating with NASA and using SensorMap as a way to visualize the sensors deployed around and on top of a glacier near Juneau in Alaska. What you see here is a three-dimensional view in Virtual Earth, with the sensors, and we can also overlay the data on this three-dimensional terrain. So you can do things such as pan and tilt, and you can zoom in to see where these sensors are placed. You can bring up the data and plot the temperature distributions and the other variables over the three-dimensional terrain. This is quite exciting; it gives us much greater visibility into how things are changing in the environment.

Now, let me wrap up. I talked about three things: sensing in buildings like this one, sensing in our cloud-computing infrastructure, and sensing the environment. The goal is to really understand how we use energy and how our activities affect the environment. And the research that we're doing at Microsoft Research has already led to greater visibility into many of these things, as you have just seen.

Thank you very much for your time. Rick? (Applause.)

RICK RASHID: Thanks, Feng.

What Feng didn’t really tell you is that these sensors are also keeping track of whether you’re paying attention, and if you’re starting to drift off or something like that your body gets a little colder. So that’s what’s going on there.

We're doing a lot of other things. Feng didn't really go through all of these in detail, but we're working with many different groups around the world, looking at weather, looking at oceanography, looking at hydrology. It's sort of ironic: we're using the cloud to keep track of clouds. So it's an interesting environment. We're even working with groups that are looking at the health of the Great Barrier Reef.

Healthcare is another area where computer science, and computing technology, and research are having a big impact in an interesting way. People have come to realize that the underlying structure of life is information technology, right? Your DNA looks a lot like a string, and can be managed and manipulated that way. And when you think about how to analyze it, and how to look at it, the technology that we’ve developed within the computer science community for thinking about how to manipulate and manage strings, and match strings, and do database analysis, can be applied in this area.

One of the sort of Holy Grails in the medical community is this notion that ultimately we will figure out how to treat individuals. Today you're largely getting a one-size-fits-all kind of healthcare. You've got a problem, they give you a drug, but they don't really know whether the drug is going to work right for you or not. This is why they always list all the potential side effects. What they're really saying is that on people with certain genetic makeups it works great, and on other people it can kill them or cause them to be seriously ill. So you want to be able to understand what's going on in the genetics, to be able to design medicines and treatments that really work.

Now we're beginning to be able to do that. Computing technology is allowing us to decode the human genome at incredible speed now, with relatively little cost. For $300 now we can measure 100 points of variation in the nucleotides of an individual. There is actually an X Prize: if someday we can sequence the entire genome of a single person for $1,000, you can win it. The interesting part is that people now believe we'll be able to do this, that this prize will be claimed within the next two to three years, and some people believe it might be done next year. So we're making huge strides in this area.

Once we have this information, it gives us the opportunity to do a huge amount of data mining and data analysis, and really tailor our medications to the appropriate disease. The problem is that it's a non-traditional kind of data-analysis problem. One of the big issues is that there's a lot of noise, there are lots of individuals, and it's not really clear how to get out the information we need for looking at any particular disease or disease complex.

One of the ways we've been looking at doing that in our research in Redmond is to use what are called graphical models for doing statistical analysis of information. Basically, the idea is that you take advantage of the underlying structure of the data, in this case the structure of these DNA sequences, in order to do a principled statistical analysis. That gives us more information than we would be able to get through statistical methods applied to the data in an unstructured form. We're making strong progress here. We're doing a lot of collaboration, and these graphical models, once you build them, are parallelizable. We have them running today in our environment. We're working with many researchers around the world.

We're looking at things like ALS and diabetes, and aging, asthma, and HIV. In particular, I'm going to highlight some of the work we've been doing in HIV, because that's probably the work we've been doing the longest in this area. We've been taking technology that, frankly, was originally developed for finding spam and applying it to understanding what the HIV virus is doing in a single individual. And rather than have me talk a lot about this, I'm going to run a quick video of David Heckerman, who will tell you a little bit more about what we're doing in this space.

(Video segment and applause.)

So what's interesting here is, this isn't just using computers to help doctors, or biologists, or epidemiologists. This is actually using the underlying theories in the field of computer science, the mathematics and the theory of our field, to tackle problems in this space in a completely new way. We're beginning to have a better understanding of how the cell works because of our understanding of how programs work. And, likewise, we're beginning to understand how programs work better by understanding what happens in the cell. Really super exciting work, and honestly, if you'd asked me when I started Microsoft Research 17 years ago whether we would be publishing papers in medical journals, in Nature, and Science, and places like that, I would have said, no way. That's not what computer scientists do. Well, that's what computer scientists do today. And, you know, we're Microsoft, right? We don't just develop technologies and come up with new ideas; we build platforms and we make them available to people. So you can go out onto CodePlex and download some of the software, and people in these research communities are doing that; they're taking advantage of what we're doing, and that accelerates our work in this area.

Technology Research in Education

Another area that people talk about a lot is education. And I think in particular science and engineering education has really become an issue, certainly in the United States, but in a number of other countries around the world. Again, there’s a study by the National Academy of Engineering that really highlights the concerns in this area, the fact that we’re really not training our kids with the right kinds of skills to be able to be successful in a technologically based world.

We've been working – meaning Microsoft Research has been working – in this area of technology for education for a long time. We've created things like the Center for Collaborative Technologies at the University of Washington. We've built platforms that people are using for doing distributed classroom work, like ConferenceXP. We've built a set of technologies for the Tablet PC, and we're getting those technologies incorporated into Microsoft Office so that academics and researchers around the world can use tablets in their instructional environments.

The robotics work that we've done is now informing computer science education in universities. We have the Institute for Personal Robots in Education at Georgia Tech and Bryn Mawr. We're building curricula around robotics. The particular program at Georgia Tech, for example, gives students a robot at the beginning of their studies; they then have a chance to program it during the course of their computer science education at the university. This motivates students in ways that other technologies don't. And it motivates a broad spectrum of people, so you get a greater diversity of students as a result.

WorldWide Telescope

One of the things that has been super exciting about reaching out into the education community has been the release earlier this year of the WorldWide Telescope. Now, the WorldWide Telescope is a follow-on to something called SkyServer that we did a number of years ago. SkyServer had a huge impact on the astronomy community by linking together all the data from the great telescopes around the world and creating what amounts to a 24/7 virtual observatory that people can access from anywhere, and that even amateurs can use to find new astronomical phenomena. In fact, people have done that: amateurs have found things in the sky that professional astronomers had not, simply because they had access to that database. Now, we released the WorldWide Telescope earlier this year, and we have 1.5 million active users at this point. It's really gotten a lot of attention from the scientific community and from the educational community. The astronomers really love it.

And what I want to — and here are some great pictures, and things from the WorldWide Telescope.

What I want to do today is just say that as of today, more or less as of the time I'm speaking right now, we're releasing a new version of the WorldWide Telescope. It's called the Equinox release, the Autumnal Equinox in the Northern Hemisphere, and I'm going to give you a quick demonstration. This is Curtis Wong, who's headed up the WorldWide Telescope project, and he's going to be narrating. This is running live on the WorldWide Telescope as we speak, so can you run the demo, please?

(Video segment.)

RICK RASHID: (Applause.) All right, this last sequence is a lot of fun. We're pulling out your viewpoint, and in just a moment we'll be exiting the Milky Way Galaxy. That's the Milky Way right there. We're pulling out even farther, and now we're getting a view of the entire visible universe. This is a 21-gigaparsec view of the universe. And, by the way, the reason this is not a full sphere is that we can't see properly through the dust cloud of the Milky Way.

There are half a million galaxies there, 21 gigaparsecs worth of visual information. And for those of you who are Star Wars fans, remember, the Millennium Falcon did the Kessel Run in less than 12 parsecs; a gigaparsec is a lot. So that's really exciting. You know, what's even more exciting for me is that we're getting kids involved with this.

You know, you can go out on the WorldWide Telescope site and see tours through the galaxy produced by children. There's a six-year-old who did this marvelous tour of the Ring Nebula; I'd recommend you go out and see that. Jonathan Fay, the WorldWide Telescope developer, is giving a presentation on this work. His session is in 403AB at lunch. So put that on your schedule. (Applause.)

OK, let me talk a little bit more about education. Now, everybody should have a good reason to put pictures of their children in their slides, OK? These are my two youngest boys; they're now eight and nine. And the reason I've got them here — this, by the way, shows that they're normal children with normal lives — is because of this picture. My wife took off a semester earlier this year to home-school the kids, and among other things, she decided to teach them how to program in C# in Visual Studio 2008. (Laughter, applause.)

So this is my nine-year-old. He understands how to use generics. He knows how to do exception handlers and timers. He can do console programming and WPF. What he's working on in that picture is a WPF game that he's doing for his brother called Fairy Table; that's what he wants to call it. Now, honestly, there really aren't a lot of educational prerequisites to programming: basic arithmetic, basic reading and writing, and enough logic to say true and true is true, true and false is false, and false and false is false. That's about it. There's not a lot else kids need to know to be able to do it.

But the reality is that not very many children have two parents who have taught computer science at the university level and have the time to teach these kids how to program. So one wants to look for new ways of getting kids excited at very young ages about developing what is, frankly, an empowering skill: being able to program. And I'm going to bring out now Matt MacLaurin, who's going to talk about one of the projects we've done at Microsoft Research in Redmond called Boku. It's really a system for allowing kids to program and learn on their own. Matt? (Music, applause.)

Boku Demo

MATT MacLAURIN: Well, I got my first computer in 1980, when I was 13 years old. It was a Commodore PET, 8K of memory. Anyone have one of those? And there were two really cool things about this machine. One was that it had BASIC built in: you turned it on, you had a BASIC prompt right there, and you could write code. The other really cool thing about it is that it didn't have anything else built in, so if you wanted to make something fun happen, you really had to do it yourself. There was really this message, in that moment, that programming is a fun activity all on its own. I think a lot of us who came into it at that time really felt like this was an amazing new tool we could use to express our own ideas.

So Boku is a system that we're building for introducing kids to programming in much that same spirit. If you look back at those machines running BASIC, when you were playing a game you could always break the game, change it a little bit, and then go back and play a little more. We want that kind of seamlessness and flow.

So I'm going to get into the demo pretty quickly here. But first, Rick mentioned some of the motivations, and really there's increasing interest at a lot of universities in looking at programming as a fundamental life skill. It's not just for people who want to go into a career of operating-system kernel design; if you want to go into managing nonprofits, understanding how to put together a complex system, how to predict how it's going to operate, and how to correct it when it's not operating correctly are very, very good skills.

So I'm going to go over to a demo now and just show you what it looks like to program in Boku. This is sort of our own nod to the flashing BASIC prompt: you just boot Boku up, he's jumping around, he really can't wait for you to get in and start doing some programming. So we've got this little level browser here. I'm just going to open up a basic level and show you the basics of the programming model for Boku.

The little donut here, that's my cursor that I'm moving around. And by the way, all the programming is done with the Xbox 360 controller, which was a really fun design challenge for us; there's no keyboard involved at all.

So with the left stick I'm moving the cursor around; with the right stick I can pan the camera, I can zoom in and out, and we can shape new worlds as well. But I'm going to get right to the interesting part, which is, of course, programming. The tools here — this is basically your compile-and-run button. This is your character tool, which is really where all the action happens, because you're mainly programming these little characters to do stuff. And then there are tools for reshaping the world and making new worlds of your own design.

So I’m going to pick the character tool. I’m going to zoom in a little bit so we can see what’s going on, and I’m going to drop a new character into the world. There are a bunch of different characters. I like the saucer because he’s kind of fast and fun to fly around. And so I’m going to give him sort of a little Boku hello world program.

So if I press the Y button here, I open up the program editor for Boku. This is an entirely visual, icon-based language. It looks very simple, but in the demo I'm going to show you how it can be used for some more complex operations as well. So what I'm building here is a rule, and on the left-hand side we have what we call the WHEN condition for the rule. And this is all expressed in physical terminology, because we really want to be able to work with young kids. We've been working with kids as young as seven, and doing a lot of work with seven- to 12-year-old girls with UCSB in Santa Barbara.

And we found that, you know, you really want to keep the programming constructs relatable. So by doing that, everyone understands what vision is, they know what it means to look at something, and so you’re really starting from a natural base of understanding.

So I want to tell this guy to go and find apples in this world. I’m going to tell him that when he sees an apple — these are all the different other kinds of things he can see. He can see other robots, he can see things of a certain color, et cetera. But I’m going to say if you see fruit, then I want you to do something. So over here we have what we call the verb clause. And so here are a bunch of things that the characters can do in this world. And there’s a pretty broad array of actions, but “move” is what we’re interested in.

And once you've said what verb you're talking about, you get a bunch of different options for how you want to modify that verb. Since the first part of our rule has a clause that detects apples in the scene, we can now make verbs that refer to that apple: here, I'm going to move toward it. Now, I haven't put in an apple yet, so I don't have enough data to run my program. So I'll drop one in right here. I'll even drop in a couple.

And so let’s go ahead and bring up the tool set again and run our program and see what happens. Okay, so you see he moves towards the apple and he kind of lands on it and it squirts out from under him and he goes chasing it off the edge of the cliff, you know, which is always fun to watch. (Laughter.) And so now to spice it up a little bit, I want to make him an apple-collecting machine. So I’m just going to toss in another rule. I’m going to say if you actually bump into it, if you get that close, if you bump into what, again, if you bump into the fruit, then we want you to go ahead and eat it.

So now I can go back to my tools, run the program again, and he not only eats that one, but he immediately then sees the second one and keeps cruising around the world picking up apples.

Let's make it a little more fun. One of the things we really like about doing usability studies with kids is that they always have a little story about what's going on. So my little story here is going to be that the saucer is kind of greedy, and when he eats too many apples, he's going to turn green. So let's spice it up a little bit. I'm going to toss in another rule, and give him another thing to do when he bumps into a fruit.

Now, the C++ programmers, and that's pretty much all of us, are going to say, wow, first he's going to bump into the apple and eat it, so it's not going to be there for that second rule to fire. But it's actually a concurrent rule system: all the rules are evaluated at the same time, which, as it turns out, is a much more natural model for kids to understand.
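The concurrent rule semantics being described can be sketched as a two-phase tick: evaluate every condition against the same snapshot of the world, then apply the actions. A toy illustration in Python (the rule encoding and world representation are invented, not how Boku is implemented):

```python
# Toy sketch of a concurrent rule system: every WHEN/DO rule is
# evaluated against the same snapshot of the world before any actions
# run, so "bump fruit -> eat" and "bump fruit -> score" both fire on
# the same bump. The encoding here is invented for illustration.

def run_tick(world, rules):
    # Phase 1: evaluate every condition against the unchanged world.
    fired = [action for condition, action in rules if condition(world)]
    # Phase 2: only now apply the actions of the rules that fired.
    for action in fired:
        action(world)
    return world

rules = [
    (lambda w: w["bumped_fruit"], lambda w: w.update(fruit_eaten=True)),
    (lambda w: w["bumped_fruit"], lambda w: w.update(score=w["score"] + 1)),
]

world = {"bumped_fruit": True, "fruit_eaten": False, "score": 0}
run_tick(world, rules)
print(world["fruit_eaten"], world["score"])   # True 1 -- both rules fired
```

Under sequential semantics the second rule would see the fruit already gone; evaluating conditions before any action runs is what makes both rules fire on the same bump.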

So I’m going to tell him that every time he bumps into the fruit, I want him to score a point. Notice how nowhere have I talked about variables and constants and sort of the common objects of computer programming, but we have all the same functionality, we just express it in terms that are more intuitive for someone who’s coming from another discipline.

So I'll tell him that I want him to score one point when he bumps into the fruit, and now I'm going to introduce a little more complexity and show you some of the more abstract features of the language. He can actually detect with his senses that the score has changed. So when he's scored two points, we want him to switch to another program, and we just say, basically, switch to page two. We're trying to keep things as intuitive as possible.

Now on page two, he’s not running that other program anymore, so he’s going to stop. And we’re going to have him just sit here for two seconds for dramatic effect, and then I’m going to have him change color by coloring himself green. I know I’m kind of ripping through these menus pretty fast, but you’ll have a chance early next year to play with this for yourself.

So let's go ahead and run this program now. You see he eats those apples, and then after two seconds he turns green. Now, I could have another character watching him turn green and shooting him when that happens, and that's a really interesting way that complexity starts to emerge in these little worlds. (Laughter.) You can take the simplest little construct, and then as you build these up, with each object having its own programming and many actors in the world, you can get some very sophisticated behaviors happening.

Now, since we promised, you know, poor little Johnny that he does get to come and make a game, he’s really coming from a base where he wants to make the kind of games he plays already. And it’s important to us that people have a lot of flexibility to really work on their own ideas. You know, we wrestled with the idea of whether we should allow you to shoot in the games, and the kids very quickly let us know that it was absolutely mandatory that you were able to shoot. (Laughter.)

So now what I’m doing is using a different sense, which is that this character can also listen to this controller in my hand. So I said when the game pad left stick moves, then I want this character to move. And the left stick naturally outputs a vector, the movement grid naturally consumes the vector, and so it all sort of plugs together the way I want it to, and I don’t really have to be worried about the underpinnings of the model.

Similarly, I’m going to use the right stick, because I’m a big fan of Robotron, to shoot. And then I can also add all kinds of variations on that. And I’m going to say I’m going to shoot red missiles, because they’re kind of cool. And just to skip ahead a little bit, I’ve opened up our advanced settings because there’s always much more detail. We tried to keep the opening part of the program very simple, but then we want to let people make more complex stuff as they want. So I just made it so that this guy can shoot a heck of a lot of missiles really fast. Okay?

Now, the trees aren’t providing much of a challenge for me, so I’m going to go ahead and put in an AI character and we’ll just look at what our non-player characters’ behavior looks like. So I’m just going to program this guy really simply, and I’m going to move kind of fast now because we’re running a little late.

I’m going to tell him that when he sees the saucer, he’s going to move towards it. And then because he’s a maniacal little blimp here, when he sees the saucer, he’s also going to begin shooting immediately, and he should shoot — oops, not that — he should shoot some other color of missile. So he can shoot these same missiles at me. So he’s not much of a challenge there. (Laughter.)

We’re going to try to spice it up a little bit. And I’m going to show you probably my favorite sort of magical feature of the program here, which is we have this little thing called creatables. Many of you have used David Ungar’s Self programming language — this is sort of prototypes for kids. So I’m going to program this cloud here — because you can program pretty much every object in the world — and I’m just going to tell him that every two seconds I want him to create one of those little fiendish blimps.

And so there’s actually a create verb over here, and the basic parameters of the create verb are these stock objects, which are kind of boring. But you want to be able to create fully dynamic objects. And since I marked that blimp as creatable, it’s now available in my create verb. And so rather than just one blimp as I run now — actually, there’s one other thing I have to change here, which is, because these guys are shooting, I have to make the cloud a little tougher so that he can withstand them.
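The "creatables" idea — mark any fully-programmed object as a prototype, then stamp out live copies of it with a create verb — is the prototype-based model from languages like Self. A minimal sketch, with illustrative names that are not Boku's actual API:

```python
import copy
import dataclasses

# Hypothetical sketch of "creatables": an object in the world can be marked
# as a prototype, and a create verb then clones it, rules and all. The names
# here (WorldObject, create) are illustrative, not Boku's real API.

@dataclasses.dataclass
class WorldObject:
    name: str
    rules: list
    creatable: bool = False

def create(prototype):
    if not prototype.creatable:
        raise ValueError(f"{prototype.name} is not marked creatable")
    return copy.deepcopy(prototype)           # the clone carries its programming

blimp = WorldObject("blimp",
                    rules=["see saucer: move toward it",
                           "see saucer: shoot missiles"])
blimp.creatable = True                        # now it shows up in the create verb

spawned = [create(blimp) for _ in range(3)]   # the cloud spawns one every 2 s
print(len(spawned), spawned[0].rules == blimp.rules)  # 3 True
```

Deep-copying the prototype is what makes each spawned blimp an independent actor with the same behavior, rather than a reference back to the original.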

Now let’s go ahead and run this. And if we just sit here for two seconds, he starts dropping these missiles at me which actually overwhelmed me already. Okay. So that’s hello world for Boku. (Applause.) Thank you. So now since we really do think that what we’re seeing here is that programming is a creative tool, it’s very important that people are able to make a bunch of different creations with it. So we have a really short, just one-minute video showing you some of the diversity of experiences that people can create with Boku. Can we roll that video?

(Video segment.)

RICK RASHID: (Applause.) You can tell Matt really doesn’t enjoy his job at all. (Laughter.)

SecondLight Demo

Now, I know you guys have seen a bunch of Microsoft Surface here. Can we just play a quick video? I just want to show you a little bit about when we were building the research prototypes; this is work done by Andy Wilson on Surface. You know, what Andy was doing — this is a number of years ago — is kind of exploring this idea of what it would be like if any surface could be a virtual surface and you could interact with it. And so what you see is he took a short-throw video projector, an ordinary table, some cameras, you know, and built basically an interactive computing environment and used this for a lot of the experiments. This is really the early work that Andy did that ultimately led to the creation of Microsoft Surface and the tables that you see outside in the lounge and so forth.

What I’m going to do now is give you a chance to see a technology sort of in that stage of development called SecondLight. It’s a new way of thinking about interacting with displays and surfaces and computers. And for this, I’m going to bring out Steve Hodges and Sharam Izadi from our research lab in Cambridge and they’re going to talk to you about SecondLight and what it does. (Music, applause.) Hey, guys.

PARTICIPANT: All right. Thank you. Thank you, everyone. So there’s one more demo we want to show you this morning. We realize we’re running a little bit short on time, but we’re really excited by this one. So we hope you enjoy it.

PARTICIPANT: Now, Rick mentioned Surface computing. You’ve seen a lot about Surface computing probably in your time here at PDC. There are a number of Microsoft Surfaces around the venue here, and if you haven’t had a chance to look at one yet, we really encourage you to do that. And not just look at it, but actually get your hand on and interact with the technology because it’s a very different user experience. We’ve been developing a new type of Surface computer in our research lab over in Cambridge in the U.K., and we’ve got our prototype SecondLight unit up here on stage now.

What we’re trying to do with this project is not just do surface computing, but actually bring the interaction so that it works in the area above the surface as well, so you can interact in the space above the surface, and really we’re talking about bringing the user experience and the user interface out into the real world.

So we’re going to dim the lights now and go straight into the demo.

PARTICIPANT: Okay, so we’ve seen a lot of exciting new multi-touch technologies at PDC this year, and here we have another multi-touch technology that we’ve developed called SecondLight. Now what SecondLight allows you to do — what we’re seeing here is a projection screen, and we’ve got a projector underneath that’s projecting onto this surface, and there’s also an infrared camera underneath that can sense as I’m interacting on the surface here.

And we can do all the cool things that you’ve seen in systems like Surface where we can zoom into objects, for example, or rotate objects. Now, one of the things that we’re exploring with SecondLight is a mechanism for actually projecting through the display surface. That’s actually quite a difficult thing to do, and normally when you place — this is actually just a regular piece of tracing paper. If we place a piece of tracing paper above the surface, we don’t see any information. So in this image here, we have an image of the night sky. But wouldn’t it be cool as I put this piece of tracing paper above the surface if we can see the stars or even the constellations?

So here I’m actually going to enable that part of the technology. And we see here, up at the top we see Orion, let me zoom in a little bit. And we see the rest of the constellation down here. Notice that we’re not actually seeing an image on the surface itself, it’s actually just appearing on this tracing paper as I’m holding it above the surface. And we can zoom out and actually get the full overview of the constellation. (Applause.)

Likewise, this is a map of our hometown in Cambridge in the U.K. And if I want to reveal the street names. (Applause.) Or even go up close to where I live — actually look at the street name. One final example just to really demonstrate some of the capabilities of SecondLight. Here we have a cow. Can anyone guess what’s going to happen when we place this over the cow? No, we’re not going to reveal the innards of the cow, we’re just going to reveal some text about the cow. But this really demonstrates that we can project a completely independent image onto this secondary surface up above, which is completely different to the one that’s below.

And it is just a regular piece of tracing paper here that we’re placing on the surface, and we can use sort of cheap, plastic, diffuse surface to kind of reveal this hidden information. (Applause.)

PARTICIPANT: We’ve got one more example.

PARTICIPANT: This is actually a favorite, so I think I should really show this one as well. This is a scroll. Again, an everyday object that you can just pull out, and we can reveal the text about the cow.

PARTICIPANT: Okay, so let’s just take a moment to understand a little bit more what’s going on there. This is a picture of our SecondLight prototype. Inside the unit, we have a projection system that’s shining light onto the underside of the display surface. And it’s really that display surface where we’re adding the innovation. We use a special liquid crystal material for the display, and in its natural state, this liquid crystal material is sort of frosted or milky or diffuse in appearance.

I’ve got a picture here of some of that material set up in our lab in Cambridge. So if I project an image onto the underside of that material, you get to see the image. It’s rendered very clearly on the material. But if I apply a voltage to that material, I can make it transparent, so I can see right through it. And if I project an image from below when it’s transparent, it’s just like a sheet of glass, the image passes straight through, it’s invisible to the user.

So what we do in SecondLight is we’re actually switching the display between these two states all the time. And if you switch it quickly enough, as you can see, you can make it so that you don’t even see the switching. And that’s what we’re doing: this display is continually switching between those two states. And we synchronize the switching with the projection system and actually project two different images. Whenever the display is diffuse, we’re projecting the first image, and that’s the image that you see when you walk up and start using the unit.

Whenever the display is in its transparent state, we’re projecting a second image, a different image, and that image passes straight through. You don’t see it unless, that is, you’ve got a second diffuser above the first — like a piece of tracing paper or a cheap bit of plastic with a diffuse finish. And that’s how we reveal that extra information. (Applause.)
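The time-multiplexing just described can be captured in a few lines — a simplified sketch, not the actual SecondLight control code: the display alternates between diffuse and transparent states, the projector shows a different image in each state, and above the switching rate viewers perceive two stable, independent images.

```python
# Simplified sketch of SecondLight's time-multiplexed projection: the
# display toggles between diffuse and transparent every frame, and the
# projector is synchronized to show a different image in each state.
# The camera is synchronized too: it can only image *through* the
# display on transparent frames.

surface_image = "night_sky"        # seen on the display itself (diffuse state)
through_image = "constellations"   # passes through; visible only on a second
                                   # diffuser held above the display

def frame(n):
    diffuse = (n % 2 == 0)         # state toggles every frame, fast enough
                                   # that the eye never sees the switching
    projected = surface_image if diffuse else through_image
    camera_sees_above = not diffuse
    return projected, camera_sees_above

print(frame(0))  # ('night_sky', False)
print(frame(1))  # ('constellations', True)
```

The key design point is that one projector and one camera serve both "displays": synchronization with the switching diffuser is what splits them into two independent channels.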

So inside our unit we also have an infrared camera. That’s how we do the touch detection, that’s how we do multi-touch. The infrared camera can actually be synchronized with the switching of this diffuser. So we’re not only detecting, say, fingertips in contact with the display, but we can also look through the display when it’s in its transparent state, and then we can get images of the area above the display, so we can see hands, arms, we can see users’ faces. In fact, we can see anything that either reflects infrared light or emits infrared light.

So the second demonstration we want to show you leverages that capability.

PARTICIPANT: OK. So switch to the screen. So here we have a magic lens. It’s similar to the ones before, it’s slightly bigger, and it just has some batteries that are powering some infrared LEDs. And that’s used for tracking this surface. So what we can do using that infrared camera that can see through the surface, we can actually make this man follow the surface as we’re moving it around. Or we can actually make the man start running and we can pick the man up, and again, he will follow us as we’re moving the surface around.

Now, you’ll see there’s actually an image of the man down below, and what we’re actually doing as I’m tilting this man over is actually pre-distorting the image that’s coming through the surface, and you can actually see it on the primary surface down here. And what that pre-distortion does is ensure that it’s corrected even when I’m tilting the surface towards myself.
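The pre-distortion being described can be illustrated with a one-axis version — the real system would apply a full projective warp computed from the tracked pose of the lens, but the principle is the same: stretch the projected image by the inverse of the foreshortening the tilt will introduce, so the two effects cancel on the tilted surface. This is an assumed simplification for illustration, not SecondLight's actual math.

```python
import math

# Illustrative one-axis sketch of pre-distortion: if the tracked secondary
# surface is tilted by theta toward the viewer, a point projected at height
# y appears foreshortened by cos(theta) on that surface. Stretching the
# image by 1/cos(theta) *before* projection cancels the foreshortening.
# (The real system would use a full homography from the tracked pose.)

def predistort_y(y, theta_rad):
    """Stretch the projected coordinate before it leaves the projector."""
    return y / math.cos(theta_rad)

def observed_on_tilted_surface(y_projected, theta_rad):
    """Foreshortening the tilted surface applies to whatever lands on it."""
    return y_projected * math.cos(theta_rad)

theta = math.radians(60)            # surface tilted 60 degrees, almost upright
y = 100.0
seen = observed_on_tilted_surface(predistort_y(y, theta), theta)
print(seen)                         # 100.0 -- the distortion cancels out
```

This is why the image on the primary surface looks warped as the lens tilts: you are seeing the pre-distorted version that only looks correct once it lands on the tilted secondary surface.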

So I’m holding this surface almost vertically at the moment. If I switch to video, you can actually see this in better detail. So here we see Ray’s opening keynote from a couple of days ago. And, again, we have this secondary surface that we can use in conjunction with the primary surface. So down here, we still have an image that we can display on the main surface, and up here we have a tracked image that can be always corrected as we’re tilting the secondary surface.

So imagine being able to pick up windows off your Surface computer and actually use this secondary surface to actually view them. Here I’m just going to tilt this a little bit just to give you a sense of the predistortion that’s happening down there. And you can see that we can go — this is pretty much vertical, this display surface.

Now, one final aspect that I’d like to demonstrate is, again, using the infrared camera below, what we can do is actually touch enable this secondary surface up here. So as I’m touching the surface, we get a bubble that appears under my finger. So this is like almost a mini surface. And, again, all the smarts are inside the SecondLight unit, the projection and the camera that’s doing the sensing, this is just a cheap secondary surface that we can create that also has multi-touch capability. (Applause.)

PARTICIPANT: OK. So that’s the end of the demo. But just before we move off stage, I just want to spend a moment reflecting on what it is you’ve seen here. So for the first time, what we’ve done here is integrated some new technologies into the Surface computer so we can think about bringing the interface out of the display, interacting in the real world above the Surface computer.

You can imagine applications such as looking at three-dimensional data and then taking different slices through that data. You can think of gaming applications where we’re using gestures and projecting into the space in front of the display to deliver compelling new user experiences. We really think this is going to change the nature of surface computing, it has that potential. And that’s why we’re so excited to be working on this project.

Thank you very much.

PARTICIPANT: Thank you. (Applause.)

RICK RASHID: Thanks, guys. Well, with technology like this, now when you look at your computer, you’ll know it’s looking back at you. I hope all of you have really enjoyed getting a chance to see some of these technologies, hearing about some of the opportunities that research is going to bring to many different areas, not just in the field of computer science, but much more broadly. I’ve enjoyed having a chance to show you some of these things, some of which were shown here for the very first time. Thank you very much. (Applause.)
