Speech Transcript – Rick Rashid, Microsoft Professional Developers Conference 2003

Remarks by Dr. Richard Rashid
Senior Vice President, Microsoft Research
Microsoft Professional Developers Conference 2003
October 29, 2003
Los Angeles, California

RICK RASHID: Good morning. Yeah, I didn’t realize they were going to run that thing in front of my presentation — that was a list of demands. (Laughter.) So I don’t know if I’ll be able to satisfy all of those during this talk, but we’ll see what we can do.

You know, when I was here yesterday, just during the rehearsal, one of the things I noticed was these enormous collections of chairs, and it looks more like it’s been reproduced with computer graphics after about the first ten rows. I mean, it’s just amazing the collection of chairs and people that are out there.

One of the things I’m going to do in my talk today is really deal with something which frankly has been kind of a frustration for me for the last couple of years. I keep getting questions from reporters, and really from a lot of different kinds of people. Following the dot-com boom and bust, a lot of people started to say, ‘Well, I guess we had the boom and now technology is over and we can go on to do something else.’ And so I wanted to really talk about this topic, which is: Are we really done yet with technology?

You get this feeling that people have sort of this apocalyptic view of what’s happened in the computer industry. In fact, even yesterday I was doing a radio interview, and one of the reporters that was interviewing me said, ‘Well, but are computers really going to be able to do anything else that we care about? We can already do our spreadsheets, we can do our documents; what else do we really need to do?’

And there’s been a lot of things in the press, articles written, reports done where people are talking about, ‘Is this really the end of computer science? Does IT really matter anymore? Have we reached the end of IT history?’

So basically, what I want to do is take the view which says we’re really at the beginning. I mean, we’ve barely scratched the surface of what we can accomplish, and I’m going to show you a bunch of things today, some new technologies that are being developed, and really talk about where the future is going in our field.

Now, my job at Microsoft is really to do the basic research work for the company. So, we don’t do the development part, but we do the ‘R’ part of R & D. We’ve grown from just me back in 1991 — a group of one — to now having over 700 researchers working in five labs, really all around the world. Redmond was our first. Then we started a lab in San Francisco. Cambridge, England was our first outside the U.S. I’m just about to fly to Beijing to celebrate the fifth anniversary of our lab in China. And just in the last two years, we started a lab in Silicon Valley. So we’ve been growing very rapidly and growing our research capabilities.

Now, the approach we’ve been taking is a pure research approach. We publish in the open literature just like a university. In fact, my model for how I put my organization together is really the Carnegie Mellon Computer Science Department model; that was the department I came from. I was a professor there for 12 years. And so really that’s the approach that we’re taking. We publish our work in the open literature. We’re a very flat structure, with critical mass. We’re very focused on moving the state-of-the-art forward and we’re very open to visitors and people from the outside. In fact, just this summer, we had 179 Ph.D. interns in our lab just in Redmond. And to put that in perspective, there are only about 1,300 Ph.D.s produced every year in the United States. So we’re bringing in a huge number of the people from the Ph.D. programs to work with us each year.

Now, this is what we try to do. We’re first and foremost about moving the state-of-the-art forward, not about Microsoft, just about moving the state-of-the-art forward. Because, if we can’t do that, we’re not going to be really valuable to either Microsoft or the industry in general.

Our second mission is, if we get something that looks like it makes sense, then we try to work very hard to move it into our products and a lot of the things you’ve seen over the last few days have been technologies that started out in our basic research organization.

So we’ve been very good at doing that. In fact, things like our Digital Media Division really came out of a group I started in 1993 within the research group, and then we spun them out in 1996. Tablet PC came out of work that we did at our research lab in Cambridge. In fact, I remember when Chuck Thacker, who did the reference design for the Tablet, first cut a Sony laptop in half and flipped it over and put a digitizer on it and started doing the original development. So a lot of what you see in Microsoft today is really coming from original work done in our basic research labs.

Ultimately the goal is to make sure that Microsoft has a future. If you think about ten years ago, that’s when the browser was first created. I know that; I was on the panel discussion for the 10th anniversary of the browser at the University of Illinois in April. So it’s only been ten years since we’ve had the browser, and things have changed dramatically. If Microsoft and the industry can’t change to go along with the changes around us, then we will become irrelevant, and there really won’t be a future for us.

Or another way of saying it: My organization is here to make sure that we’re not done, whether that’s Microsoft or whether that’s the industry as a whole; that we’re helping with the academic community and with the research labs and other companies to move the state-of-the-art forward and continually generate the technology that’s going to make computing exciting in the future.

Now, the good news in all of this is that there’s just a tremendous wave of innovation that we’re riding. In the hardware space, the innovation that’s going on in processors — putting more and more CPU cores on a single die, making the CPU cores faster, adding new capabilities, widening the buses — there’s just a tremendous amount of change that’s happening in the underlying general-purpose processors. But there’s also a tremendous amount of change happening with the GPUs and the memory components and the storage components in the system.

What this really does is it creates an environment in which we can create a whole new category of applications, new things that people haven’t seen before that solve problems that they have in their lives and make their lives more satisfying.

I put here sort of what you might expect of a personal computer around the year 2006. One thing to keep in mind is that you’ve got a terabyte of disk; I’m speculating that I’ll actually have a terabyte of disk in my machine at home by then. You could, in fact, store every conversation you’ve ever had from the time you’re born to the time you die. I mean, that’s how much we’re able to do, or will be able to do, as we move forward.
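As a rough sanity check on that storage claim, here is the arithmetic. The codec rate and hours of conversation per day are my illustrative assumptions, not figures from the talk:

```python
# Back-of-envelope check: does a lifetime of conversation fit on 1 TB?
# Assumed figures (illustrative, not from the talk): a 16 kbps speech
# codec, 2 hours of conversation per day, an 80-year lifespan.
BYTES_PER_SECOND = 16_000 // 8              # 16 kbps -> 2,000 bytes/s
SECONDS_OF_SPEECH = 80 * 365 * 2 * 3600     # 80 years x 2 h/day x 3600 s/h

total_bytes = SECONDS_OF_SPEECH * BYTES_PER_SECOND
print(total_bytes / 1e12)                   # about 0.42 TB, under one terabyte
```

Under those assumptions a lifetime of conversation takes well under half the terabyte, so the claim holds with room to spare.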

Now, this is my own little history here. Sometimes we forget how far we’ve come. Now, I just wanted to point out how far I’ve personally come over the last 26 years or so, literally about a factor of 40,000 in CPU performance. I worked on the original Xerox Alto when I was a graduate student at the University of Rochester, so I was sort of at the beginning of the personal computer revolution, and the changes have just been incredibly dramatic in that period of time.

Now, on the software side, sometimes people talk a lot about hardware innovation and they miss the software innovation component. There is just a ton of stuff going on there as well, and again this is just really exciting. I think one of the things that I’m most excited about is the technologies being created that really begin to convert the act of programming, what used to be called the art of programming, more and more into a science of programming. More and more now, we are developing tools that allow us to automate the testing process: to be able, in fact, to order the tests so that those tests that are most likely to fail for a given change in your application are run first.

We’re developing tools to do static analysis. In fact, one of the most exciting things we’re doing internally now within our research group is we’ve developed tools that let us prove properties of programs that are several hundred thousand lines of C or C++ or C#. So this has nothing to do with testing; this is saying if I can mathematically define a property I can now prove whether that property is true or false for a particular program.

We’re beginning to use that now for device drivers, something we call our Static Driver Verifier. Early versions of this are starting to make it into our DDKs. And in the long term, I see this as a trend to allow us to really be able to demonstrate that our software does or doesn’t do certain things without having to resort to testing, which is always going to be a probabilistic process.

Also we’re moving technology down, and I’ll talk more about that later in my presentation. We’re really starting to take the kinds of programming tools we’ve historically used for our largest systems and putting them on devices as small as refrigerator magnets and wrist watches, so we can really begin to move the same kinds of computing environments down to the smallest devices.

Now, I know you’ve seen this slide before. This is sort of one of the theme slides for the conference. What I’m going to do is I’m going to take three of those themes — presentation, storage and communication — and talk about some specific technologies that we’re developing in our research organization that attack each of these different areas and really talk about some of the opportunities that innovation is creating in those areas.

So, presentation: This year at SIGGRAPH, Microsoft Research had 11 papers out of the 81 papers accepted, so a tremendous showing. And it really covered a broad range of topics within the area of graphics and presentation. But the interesting thing is that you’re beginning to see a tremendous shift in what people think of using graphics processors for. You saw that this year at SIGGRAPH in a significant way. In fact, a number of papers that we had at SIGGRAPH talked about using the GPU not just for the traditional graphics-pipeline kinds of computations, but rather to do much more general-purpose kinds of computations, simulations and analysis.

Part of that is because GPUs themselves are changing. They’re not just fixed-function boxes anymore. In fact, if you’ve seen some of the “Avalon” demos earlier in the conference, you’re beginning to see the power of the GPU being used not just for traditional 3D graphics, but for doing user interface work: scaling, rotating, sub-pixel fonts and shading, being able to create a much higher quality visual experience for the user, and being able to support a much wider selection of display types.

You can begin to use these GPUs for even more, because they’re really general purpose parallel pipeline processors now. The pixel shaders and the vertex shaders are beginning to be the kinds of processing units that you can do a broader range of computations on. So, things like nontraditional graphics rendering, simulations, physics, being able to do things I just mentioned, font rendering, and display management and manipulation. So there is a lot we’re beginning to use our GPUs for. We’re really changing our view of what that other processing unit is doing in our system.

Here’s an example of one of the papers that we did for SIGGRAPH this year. This is an example of using the GPU to do a nontraditional form of rendering. What you’re seeing there in the top is the underlying CPU is doing a very coarse grain version of the 3-dimensional object, and simulating its behavior. What you then see is that it’s sending that coarse grain information to the GPU, but along with that it’s sending a surface displacement map, which is really an image that provides the geometry of the object in much finer detail. And I think I can back up here and get the little video thing to show up a little bit better. You can see that surface displacement map is really a 2-dimensional rendering of the 3-dimensional geometry of the object.

What we’re able to do then is create on the GPU in real-time very fine-grained versions of these 3-dimensional objects, which is not the version that the underlying CPU is operating on. So, we’re able to do much finer grain, much more detailed 3-dimensional objects as a result.
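The mechanism can be sketched in a few lines of plain Python: the coarse mesh carries a 2-D height image, and each vertex (or, on real hardware, each shader sample) is pushed out along its normal by the sampled height. This is an illustrative CPU-side sketch, not the GPU implementation from the paper:

```python
def displace(vertices, normals, uvs, heightmap, scale=1.0):
    """Push each vertex along its normal by a height sampled from a 2-D grid.

    vertices/normals are lists of (x, y, z); uvs are (u, v) pairs in [0, 1];
    heightmap is a plain 2-D list standing in for the displacement image.
    """
    h, w = len(heightmap), len(heightmap[0])
    out = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # nearest-neighbour sample of the displacement map at this UV
        px = min(w - 1, max(0, round(u * (w - 1))))
        py = min(h - 1, max(0, round(v * (h - 1))))
        d = heightmap[py][px] * scale
        out.append((x + nx * d, y + ny * d, z + nz * d))
    return out

# A flat quad with +Z normals and a constant-height map rises uniformly.
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
normals = [(0.0, 0.0, 1.0)] * 4
uvs = [(x, y) for x, y, _ in quad]
displaced = displace(quad, normals, uvs, [[0.5, 0.5], [0.5, 0.5]])
```

The point of doing this on the GPU is that the CPU only ever touches the coarse mesh; the fine geometry exists only in the displacement image and the shader.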

Here’s another use of the GPU for nontraditional graphics rendering. Here what you’re seeing is something that is called precomputed radiance transfer. What you’re really seeing here is objects with all sorts of interesting optical properties: there’s a glossy object, and objects that are translucent. They have interesting luminescent properties. They self-shadow. You get interesting kinds of internal shadows and transparencies in the object. What you’re seeing is that we can now compute that radiance transfer and display in real-time these kinds of objects that are translucent, that have these internal reflections and self-shadows. That’s never been done before in real-time. So this is exciting. This, again, was part of our SIGGRAPH showing this year.

We can also do global effects, global illumination effects. What you’re seeing with the bunny and the little rubber ducky there is that we can do the fine-grained bump and weave and get the self-occlusion and shadowing associated with that on the surface, but we can also do the coarser-grained shadowing, like the bunny’s head shadowing its body. And, again, this can be done in real time at 60 frames per second.

Here’s another nontraditional use of the GPU. In this case, we’re doing view-dependent displacement mapping. The idea behind this is that the underlying mesh is actually a fairly flat mesh. All the detail you’re seeing is really a displacement map that’s been created for the object. And what we’re able to do now, in real-time at 60 frames per second on a current GPU, is create the illusion of a very textured surface: in this case bark, with all the occlusion and lighting making it look very real in this context.

Another problem that we showed off at SIGGRAPH this year is water rendering. Now, water rendering is something which has started making its way into video games and PC games. It’s a relatively difficult problem, but it’s mostly been solved in its simplest form. Basically, with water rendering, what you have is both a reflection and a refraction. So you’re able to see underneath the surface of the water, but you’re also able to see things that reflect on that surface. Now, in the normal way that this is rendered, you get these big waves, and they try to compute what looks like a fairly smooth surface on the water. But water actually doesn’t have a smooth surface; that’s a cheat. Water actually has a lot of little wavelets, and there’s something called the Fresnel effect that really defines how reflection and refraction occur. The complicated equation you see there depends on the angle against the normal to the surface of the water, and that normal is really the normal to the microfacet in the surface of the water. So there’s a much more complex effect that goes on.
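In practice, game and demo code usually stands in for the full Fresnel equations with Schlick's approximation, which blends reflection and refraction by the angle between the view direction and the (per-wavelet) surface normal. A minimal sketch; the normal-incidence reflectance of about 0.02 for an air/water interface is an assumed constant, derived from water's refractive index of roughly 1.33:

```python
def schlick_fresnel(cos_theta, f0=0.02):
    """Schlick's approximation to Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
               surface normal (per-microfacet, for wavy water).
    f0: reflectance at normal incidence; ~0.02 for air/water.
    Returns the fraction of light reflected; the rest is refracted.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

head_on = schlick_fresnel(1.0)   # looking straight down: almost all refraction
grazing = schlick_fresnel(0.0)   # grazing angle: almost all reflection
```

Looking straight down into the water you see mostly through the surface; near grazing angles the surface becomes an almost perfect mirror, which is exactly the intensity change the demo shows as the camera moves up and down.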

And now for this one I wanted to show you in real-time and live. So, I’m going to bring out Behrooz Chitsaz, who is a program manager in Microsoft Research. He’s going to give you a real-time demo of the water texture effect.

BEHROOZ CHITSAZ: Good morning, everyone. So this demo is going to be all about generating reality on consumer-grade hardware. The demo that I’m going to show you is actually running on a 3-gigahertz P4 machine with 1 GB of RAM and a high-end graphics card. So, we’re going to use the power of the GPU that Rick was just talking about in order to render this graphic here.

Let’s go ahead and get started. OK, so what you see here is essentially a virtual space that I can interact with. I can go around and look around here. You can see some pretty interesting water effects here. You can see a reflection of the sun, and you see some ripples around, which is quite nice. But let’s take a look at some of the other places. You can see that as we go away from the reflection of the water, the water looks pretty still. This is essentially what you get today with this kind of hardware that I’m running on.

I can sort of move around here, let’s go and turn around. You can see that I can interact with this. Let’s move back. I wanted to show you something else. If you look at the reflection of the figure in the water, you’ll see that as I move up and down, the intensity doesn’t change. While in reality, when you move up and down, and your distance from the water decreases and increases, the intensity actually changes. Let’s put some of that formula into action and make use of our GPU. I’m going to turn on the effect right now, and for those of you in the front seats, you might get wet. No, I’m just kidding. OK. Three, two, one. Now what you see is you can actually see a lot of ripples in the water, you can see if I turn around, previously all this was all completely solid, it looked solid. Now, you can see ripples in the water. There’s a lot more definition here.

Let me go closer here, and I’ll show you the effects of the reflection in the water, as well. If I move up and down you’ll notice that the reflection of the figure in the water actually changes in intensity.

(Applause.)

You like that. All right. OK. Let’s go around, let’s just go inside that structure there, that wooden structure, and I’ll show you some more interesting effects here. OK. I’ll just turn around, and what you’ll see, look at the reflection of the wooden structure, and the light coming through it. Just move out, and let’s just go up, and look around. So what we’re doing is we’re actually using the power of the GPU to generate a much more realistic effect. We’re actually working on a lot of other things, as well, such as generating grass, fur, different types of terrain, so it’s not just water that we’re working on. That’s what I wanted to show you today.

Thank you very much.

RICK RASHID: Thanks, Behrooz.

I think the key thing here is that we’re just scratching the surface of what we can begin to do with graphics processors. Internally now in our research group we’re beginning to look at how we can provide the power of these GPUs to traditional programmers, through more traditional programming languages. So there are some really exciting things that are going to come out of that space in the future.

Let me turn my attention to storage. I mean, we talked about the changes that have been going on in the GPUs; there’s also been a tremendous amount of change going on in storage. We are now at the point where, literally for under US$1,000, I can go out and buy a terabyte of disk. It wasn’t that long ago that a terabyte was all there was on the Internet. So we’ve moved what was the whole world down to what now can be held by a single person. And there’s just a tremendous amount of change that’s going on there. There’s a report from Berkeley, just in the last couple of weeks, which estimates that in 2002 (the year for which this was computed) roughly 5 exabytes, or 5 million terabytes, of data was created and stored on paper, optical media, hard disk drives, and other kinds of storage devices. To put that in perspective, about three years before that they estimated it was about 2 exabytes, or 2 million terabytes. And of virtually all the new data generated between 1999 and 2002, 90 percent of it was being stored on hard drives. So the hard drive has really changed the way we think about storing and managing information as time goes on.

Now, back in 1998, we brought out one of the very first really massive databases that was available for free to anyone on the Internet. This is the original TerraServer. It was a terabyte database — actually about three terabytes at the time — of images of the Earth’s surface, here at the Space Needle in Seattle. I’m sure many of you have been out to that site. A couple of years ago, we converted it to a Web service, and that’s been used by many people now outside of Microsoft, including the USDA for doing soil surveys. So that was a very exciting experiment for us, and a very exciting opportunity to bring information to people on the Web.

What we’ve been doing more recently is this: Jim Gray and his team in our research lab in San Francisco have been working with the astronomy community to take that same notion of the TerraServer, flip it around, and say, ‘Can we now bring the stars to everybody on the Internet, and be able to create what amounts to a virtual observatory that’s on 24 hours a day, 7 days a week, accessible to scientists and everyday people?’ So that’s what we’re doing. We’ve been working with a number of scientists in the astronomy community at the Sloan Digital Sky Survey, Cal Tech, Johns Hopkins, and other places to create something called a Sky Server. And I’m extremely privileged today: my demo guy for the Sky Server is going to actually be Jim Gray. Jim is a winner of the Turing Award, and probably one of the best-known scientists in the field of databases.

Jim, could you come on out.

JIM GRAY: Thanks, Rick.

Good morning. Thanks for getting up so early. What I’d like to talk to you about, let me see if I can get it to come up, what I’d like to talk to you about is the Sloan Digital Sky Survey, and the Web site that we’ve built to make it easy to get access to the Sloan Digital Sky Survey data.

The Web site is at skyserver.sdss.org. This Web site was built by Alex Szalay and his students at Johns Hopkins University. I’ve been helping them with the Web services and the database aspects of the design. But most of the work you’re going to be seeing here is really the work of Alex and his students. So the Web site is a pretty good description of what’s going on.

Sloan Digital Sky Survey is a survey of the northern third of the sky. The survey is about half done, which means that we now have about 100 million objects, and about half a million spectra of those objects. The survey is about 10 terabytes of pixel data — pictures — and about 1 terabyte of record data, or catalogue data. The catalogue is, as I said, about 100 million objects and their attributes. It’s about 3 billion records in all, sitting inside of a SQL Server database. So the Sky Server is a way of getting at that data. If you just have the data on disk sitting around, it’s pretty hard to get at it; you have to have some way of asking questions of it, and so this is our attempt to make it easy to ask questions of that data.

The data has been online now for about two years. This is the second data release; it just came out about a month ago, and that second data release is in the neighborhood of a terabyte of catalogue data. Since this is such great data, it’s a very good way of teaching astronomy and computational science. So one project that we have done along the way (Jordan Raddick at Johns Hopkins has done this) is to put lesson plans for several different projects, about 150 hours of online education, on this Web site, using the data from the telescope as the driver for the experiments. So the student can actually work directly with the data.

There are lesson plans at all levels, but I’m going to skip to the advanced level, and just show you an example of the Hubble diagram and how students can use data from the Sloan Sky Survey, the SDSS, to rediscover and just follow the logic that Hubble went through in discovering that the universe is expanding. And it’s pretty easy to follow — that’s the big bang. Going to the next slide, you go a little further, and they give you an exercise, and you’re supposed to go off and mess around with the data, and discover the Hubble law. So it’s actually a very nice collection; I’ve learned a lot of astronomy by doing these exercises. If you’re not an astronomer, which I’m not, this is actually great.

I’d like to now segue to the tools that we’ve built to look at the data. And I’ll start out, and in fact use primarily for my demos, a tool based on a Web service that we built at Johns Hopkins. And it’s going very slowly, but it’s coming.

So the guy at the right, this little cut-out here was produced by a Web service, and the window here was produced by a Web service. And what I’d like you to do is, I’m just going to invert the image, and what we did at that point was ask the Web service to give the opposite, basically flip every byte, XOR it with 255. But, I can point to a particular object here, and what happened is, it went off and looked in the database and found that object, and told us what its properties are, and the Web service also made a little cut out of that object, which came back here. And we can zoom out, we can ask for outlines of the objects. And what’s going on here is that a message is being sent to Baltimore, there’s an IIS Web server there, that Web server is turning around and calling this Web service, and I’ll show you the Web service in just a moment.

The Web service is going off, pulling from the database a bunch of JPEGs, mosaicing them onto a canvas, then going into the database and pulling all the objects that are in the database and drawing their outlines on this picture and then taking that picture and sending it back to the Web browser.
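The tiling arithmetic behind that mosaicing step can be sketched simply. The fixed 256-pixel tile size and the integer indexing are assumptions for illustration, not the actual SkyServer layout: given a requested canvas region in pixel coordinates, find every tile that intersects it and where each tile lands on the canvas.

```python
def tiles_covering(x0, y0, x1, y1, tile=256):
    """List (col, row, canvas_x, canvas_y) for every fixed-size tile that
    intersects the half-open pixel region [x0, x1) x [y0, y1).

    canvas_x/canvas_y are where the tile's top-left corner lands relative
    to the requested region, so each tile image can be pasted in place.
    """
    hits = []
    for row in range(y0 // tile, (y1 - 1) // tile + 1):
        for col in range(x0 // tile, (x1 - 1) // tile + 1):
            hits.append((col, row, col * tile - x0, row * tile - y0))
    return hits

# A 300x300-pixel request starting at (100, 100) straddles four 256px tiles.
hits = tiles_covering(100, 100, 400, 400)
```

A real server would fetch the JPEG for each (col, row) from the database and paste it at (canvas_x, canvas_y) before drawing the object outlines on top.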

If you are interested, you can actually look at lots and lots of objects all at one time and, as you’ll notice, here these objects are showing up. And what I’ll do is I’ll invert them and put outlines around them, so that it will be a little bit easier to see them.

So the system is not very responsive right now. Usually these things come back very quickly, and it’s a very convenient way of looking at the information. In fact, right now nothing is coming back. Slowly. OK?

So if you want to be very sophisticated, what you can do is say, ‘I’m interested in objects which have very high red shift, where Z, the red shift, is greater than 4.’ Then I can submit that query, and that query is going to go off and look in the database and find all the very high red shift objects. And in fact, I only asked for the top 10 of them. I can send them to the list, and now we can get those images, and they’ll start showing up on the screen.
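The query behind that form looks roughly like the following. The table and column names (SpecObj, objID, ra, dec, z) follow the public SDSS schema as I understand it, but treat them as illustrative; this small helper just builds the SQL text that gets submitted:

```python
def high_redshift_query(z_min=4.0, limit=10):
    """Build a SQL query for the top spectroscopic objects by redshift.

    Table/column names are modeled on the public SDSS schema (SpecObj,
    with z holding the measured redshift) and are illustrative here.
    """
    return (
        f"SELECT TOP {limit} objID, ra, dec, z "
        f"FROM SpecObj "
        f"WHERE z > {z_min} "
        f"ORDER BY z DESC"
    )

sql = high_redshift_query()   # the 'top 10 with z > 4' query from the demo
```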

So far, we have been operating in pixel space. And what I’ll do is go back to the navigation view — should I wait for one of these guys to show up? No, I will just take the first one, and hope for the best. So what I’m going to do is go from pixel space into record space. So we are going to go and look at one of those objects and explore it in some detail. So here is that object that we were looking at, and the picture of it will come back in a moment. Here’s its spectrogram. Here are all of the lines in that spectrogram, and those are records that are coming out of the database. I may have mentioned that there are about three billion records in the database, and these are some of them. And astronomers spend most of their time working with tools that explore this record space. But if we go back — well, I haven’t — I still haven’t gotten the cutout back yet, so I’ll just give up on that for a while and keep going.

Associated with this object we have about a thousand attributes, and here they are. And so you can do data mining in this record space, and in fact, there’s a SQL interface that allows you to send SQL to this database. People have built other tools that use the SQL interface.

So I have shown you — oops, well, that’s too bad. We got an error from one of the guys, but otherwise the GIFs are coming back.

So what I have shown you is a particular archive, this Sloan Digital Sky Survey. What Rick was talking about, though, was our desire to take all of the archives of all of the telescopes and glue them together into one international database called the Worldwide Telescope, or sometimes called the Virtual Observatory.

So what we have been doing is building a Web service using .NET and using SQL Server and IIS, and this Web service is called SkyQuery.NET. This is the portal to it, and there’s quite a bit of documentation on the site. I don’t have a lot of time to tell you about it, but when we started it had about four sites online, and these sites were essentially clones of the architecture of the Sloan Digital Sky Survey Sky Server. One of them was at Cal Tech, one of them was in Chicago, and in fact one of them was in Cambridge, England: the Isaac Newton telescope. And what I am going to do is to take a sample query and submit that query to the SkyQuery system. What’s going on here is that query is being sent to the portal. The portal is sending the query to Cambridge, England, asking, ‘How big is your answer?’, and sending it to Baltimore: ‘How big is your answer?’ The answers are coming back. And then it’s optimizing that query and sending out and asking for the data sets from those two systems, combining those data sets and giving us the answer down here. I think the response time on that was on the order of 10 or 15 seconds. Oftentimes it’s more responsive than that.
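The plan Jim describes (ask every site for the size of its piece of the answer, then optimize the order, fetch, and combine) can be sketched as below. The estimate/fetch interface here is an assumed stand-in for illustration, not SkyQuery's actual API:

```python
def run_federated_query(sites, query):
    """Sketch of a SkyQuery-style federated plan: ask each site how big its
    part of the answer is, then fetch from the smallest estimate upward.

    `sites` maps a site name to an object exposing estimate(query) and
    fetch(query); both methods are assumed interfaces for illustration.
    A real portal would also cross-match rows between sites.
    """
    estimates = {name: site.estimate(query) for name, site in sites.items()}
    plan = sorted(sites, key=estimates.get)        # smallest answer first
    combined = []
    for name in plan:
        combined.extend(sites[name].fetch(query))
    return plan, combined

class StubSite:
    """Stand-in for a remote archive's Web service."""
    def __init__(self, rows):
        self.rows = rows
    def estimate(self, query):
        return len(self.rows)
    def fetch(self, query):
        return list(self.rows)

plan, rows = run_federated_query(
    {"baltimore": StubSite([1, 2, 3]), "cambridge": StubSite([4])},
    "SELECT ...")
```

Starting with the smallest estimated result set is the classic distributed-join heuristic: it minimizes the data that has to be shipped before the portal can combine answers.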

So that’s SkyQuery. It’s a federation of Web services — about 10 of them now — and a portal that takes those Web services and glues them together and makes it look like all of the telescopes in the world are just one great big database. Astronomers are really excited about this. They are building a lot of tools on top of it, and this is one of the main thrusts of the Virtual Observatory effort sponsored by the National Science Foundation and also going on in Europe.

So everything I’ve shown you is public domain, and we have a site called www.skyserver.org, and www.skyserver.org has a copy of the Web site. It has a one-gigabyte copy of the database that we were working with. It has the full database design. It has all of the data mining queries that we have been working with. It has the spatial search algorithms, and a bunch of research reports. So if you go to this site you can download all of that information, and you can build a Web server that gives more or less the same demo that I am giving you here from your laptop.

So, in particular, my laptop is backstage, and it has my Sky Server loaded on it. And what I’d like to do now is to flip to an example just to show you what it’s like to work with the database. So here is — I’ve got Visual Studio already started in my project, and what I am going to do is first start without debugging, just to show you what the thing is like. This is a slightly different interface, but you see the same objects. So that’s what we were doing earlier. So now what I am going to do is step through it in debug mode, and show you how easy it is to actually — oops, I did a start without debugging, again. Excuse me. Let’s just start.

OK, so what’s happening is the project is getting registered with IIS, the debugger is attaching, everything wonderful is happening, and I have got some break points set in the project, and now I’m saying ‘get image.’ And now we are stepping through — first this is the ASP page — and I guess I’ll just do F5 and sort of step through it. OK? We are now in the C# code, excuse me. And I should have mentioned that when I started out I did this — it’s about 500 lines of C#, and Maria Nieto-Santisteban, one of Alex’s students, turned that from a 500-line hack to an honest-to-God program with exception handling and a few other things, and a lot of astronomy. And so the code we’re actually looking at here is Maria’s. And so we are validating the inputs. We are going to go off and paint the canvas. Here’s the SQL statement that goes off and gets all of the pictures. They’re going to cover this area of the canvas — it’s a spatial search. We are going to get a tile. We are going to get another tile and paint it on — paint a third tile on the canvas. Good, the canvas is painted. We are going to return the canvas. We are going to convert that canvas to a JPEG, and there’s the picture. OK? (Applause.)

So that is basically how it is to program this thing. The code is actually very, very simple, and I encourage you to download it.
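For flavor, here is a minimal Python sketch of the shape of that get-image logic; this is not Maria’s actual C# code, and the spatial SQL query and JPEG encoding are mocked. It validates the region, finds the tiles that cover it, paints them onto a canvas, and returns the result:

```python
# Illustrative sketch only: in the real service, a spatial SQL query
# returns the image tiles covering the requested region, and the canvas
# is converted to a JPEG at the end.

def tiles_covering(x0, y0, x1, y1, tile_size=100):
    """Mock of the spatial search: yield the (tx, ty) grid tiles that
    intersect the requested region."""
    for tx in range(x0 // tile_size, x1 // tile_size + 1):
        for ty in range(y0 // tile_size, y1 // tile_size + 1):
            yield tx, ty

def get_image(x0, y0, x1, y1, tile_size=100):
    """Validate inputs, paint each covering tile onto the canvas,
    and return the canvas (here, a dict of painted tile coordinates)."""
    if x0 > x1 or y0 > y1:
        raise ValueError("empty region")
    canvas = {}
    for tx, ty in tiles_covering(x0, y0, x1, y1, tile_size):
        canvas[(tx, ty)] = f"tile_{tx}_{ty}"  # stand-in for pixel data
    return canvas  # the real code converts this canvas to a JPEG here

canvas = get_image(50, 50, 250, 150)
```

The structure mirrors the debug walk-through above: validate, query for covering tiles, paint, return.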

So, the takeaways from this are: databases are actually a pretty good way of storing astronomy data; Web services are a pretty good way of publishing information; and the .NET toolkit and platform is a wonderful way of building these kinds of distributed apps.

With that, Rick? (Applause.)

RICK RASHID: Great, thanks, Jim. (Applause.) One of the things that I’m really excited about from the Sky Server and the things we’re doing related to that is we are beginning to see the opportunity to really accelerate science, take advantage of the enormous amount of data being collected in the different sciences; really use the XML standard schema way of thinking about that information, use the Web services, create these federated databases, and now allow scientists to operate on that data basically from anywhere in the world in a standard form with standard programming languages. So I think it’s a really exciting way of thinking about how the world is changing because of the amount of storage we now have.

I want to transition to communications. Communications is changing dramatically as well. Increasingly, people really use the computer as a means for interacting with each other, for creating their work group, for solving the problems of their organizations. I know there’s a lot of blogging going around the PDC this time. I think that’s one of the ways in which you are beginning to see the communication tools that exist in the PC environment really change the way people get information and share information with each other.

Now, I am going to bring out another one of our researchers, Lili Cheng. Lili is a senior researcher and a very distinguished computer scientist. But what’s also interesting about Lili is that she’s a registered architect, and teaches part-time at the NYU School of Design. So she has quite a varied background. Lili heads up our social computing group, and she’s going to show some of the work that group is doing in terms of helping to create new kinds of user interfaces and tools for person-to-person communications. Lili?

LILI CHENG: (Applause.) All right, thanks, Rick. When we started thinking about social user interfaces, we asked ourselves if there were some fundamental new concepts on which we could build a new user experience. And we wanted to understand how people think about the people they care about most.

So we did something rather simple. We went to a mall, and we asked people to draw the people they care about. And very quickly people drew diagrams similar to the one that you see next to me, and we saw a few interesting things. First of all, people immediately put themselves in a very prominent place — in this case it was the center — and it probably took them about 30 seconds to note the people that they cared about most, and they would group these people, and the groups tended to be very dynamic. And then we looked at the contacts that they have on their computers, and we found that most people actually don’t have contact lists, and if they do, they tend to be outdated and represented as alphabetical lists. So we tried to see if we could do something better.

So we created this application called the Personal Map. OK, so this is me and the people that I interact with and care about. And I didn’t do anything to organize or create this information. We are pulling this information together by looking at how people communicate. So what this does is it looks at who I e-mail and who I e-mail together, who I might CC on an e-mail, and builds these models of the people that I care about. So you see me in the center. The more frequently I e-mail someone, the closer they are to me, and then again people are grouped together based on who I e-mail together. And if I move this slider, you will see that groups of people who are similar will sort of cluster together. And as I move this slider the other direction, you will see that bigger groups of people, like the blue group of people on the top, split out into smaller projects. All right. So I can actually shift my center of view. We have been working quite a bit with people on the Longhorn team. So if I click on Hillel and send Hillel to the center, you will see that I interact with him with a few distinct groups of people.

So when you see these visualizations, you should really think of them in a few separate ways. One is the visual image, but the other is a model of who the user cares about. So, for example, if I e-mail Hillel, maybe the people related to the Windows team could become more visible in my e-mail applications. So it doesn’t always have to be a very rich visualization.
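As a rough illustration of the kind of model behind a Personal Map, here is a hedged Python sketch; the message log and names are invented. It counts how often people appear on mail together and clusters them at a threshold that plays the role of the slider:

```python
from collections import Counter
from itertools import combinations

# Hypothetical message log: each entry is the set of people on one
# e-mail (To + CC together). All names are invented for illustration.
messages = [
    {"hillel", "ann"}, {"hillel", "ann"}, {"hillel", "bob"},
    {"carol", "dave"}, {"carol", "dave"}, {"carol", "dave"},
]

# Count how often each pair of people appears on mail together.
pair_counts = Counter()
for people in messages:
    for a, b in combinations(sorted(people), 2):
        pair_counts[(a, b)] += 1

def clusters(threshold):
    """Union-find grouping: pairs co-mailed at least `threshold` times
    land in the same cluster. Raising the threshold splits big groups
    apart, like moving the slider in the demo."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (a, b), n in pair_counts.items():
        if n >= threshold:
            parent[find(a)] = find(b)
    names = {p for pair in pair_counts for p in pair}
    groups = {}
    for p in names:
        groups.setdefault(find(p), set()).add(p)
    return sorted(map(sorted, groups.values()))
```

With a low threshold everyone who ever co-mails merges into broad groups; with a high threshold only the most frequent pairs stay together.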

So I’ll go back to my slides. So the next thing we do is we say, ‘We know who you care about — what do you actually do with these people?’ And typically, what you want to do is communicate and share with them. So think today of a wedding invitation that you get in the mail. It’s a very rich experience. And think about if you would actually want to share that wedding invitation via e-mail. Typically your answer is no. So in this sketch it shows the storyboard of what your communication could be like in the future. Imagine getting an e-mail that has a very rich presentation, and that as you start communicating with people that you care about, you see conversations that you are having start to bubble up, and you can kind of see what e-mail is important to you. And maybe if there is a lot of activity around a particular conversation, a jewel-like element could appear and you could open these conversation spaces up and they actually become places for your sharing, so your sharing can very easily move from very light-weight interaction into a more complex space, where you can share more complicated information.

So we took those storyboards, and we built out particular pieces of them. In this case it’s shares. So today when I share information with a group, we tend to just dump everything into an unstructured share, and it gets very disorganized very quickly. You can see a little picture of that on the side. So we wanted to ask, ‘Were there other ways that we could extract views of shares that were more people-centric and that could give me a better overview of what’s important?’ So what we’ve done is we actually monitored some of the network traffic to see who is contributing to the share, and we derived project structure information out of the way that people name their folders to give other views of the information. So in this case, for a share: Who is important here, and what are the important places and projects and data that people are sharing together?
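A minimal sketch of that people-centric view of a share, assuming an invented log of (contributor, path) pairs such as might be gathered by watching traffic to the share, and treating the top-level folder name as the project:

```python
from collections import defaultdict

# Hypothetical share activity log; contributors and paths are invented.
uploads = [
    ("lili", "Longhorn/specs/ui.doc"),
    ("hillel", "Longhorn/specs/api.doc"),
    ("hillel", "Longhorn/builds/setup.exe"),
    ("sean", "PDC/talk.ppt"),
]

def by_project(log):
    """Derive a project-and-people view of the share: the top-level
    folder name stands in for the project, and we report who is
    contributing to each one."""
    projects = defaultdict(set)
    for person, path in log:
        projects[path.split("/")[0]].add(person)
    return {proj: sorted(people) for proj, people in projects.items()}
```

Calling `by_project(uploads)` gives the overview the demo describes: which projects exist in the share and who is active in each.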

So aside from how you represent people that you care about and creating better communication environments, we are also looking at how people tell stories and share information around data. So, for example, if you get a photo, often it’s not the photo that is really interesting — it’s the stories and context in which this photo was taken. And you see a lot of this today with blogs. And what’s so interesting to me about blogs is the personal nature of how people are represented. You can really get a sense of who people are by the way they are sharing their photos and telling their stories in their time-based journals.

So today we see a lot of this. We see a lot of photo-sharing and storytelling through e-mail or on blog sites. So we’ve tried to create an application on the Web that lets people share photos and tell stories and interact in new ways. So this is Wallet, and basically what we have here is, it knows who I am, and it opens to my blog — so it starts with a personal view for me. And I have shared a bunch of photos and things like that with people that I care about. And on the left side you see my social network. And, again, I haven’t authored this. I haven’t had to come into a blank screen and say who my friends are or try to continually update the list of the people that I care about. So, for example, if I click on Sean here, this loads his blog, and I can see who is related to Sean. This lets me navigate around the space of friends. And one of the things that we do in this space is you can identify people and comment on other people’s pictures. And as you interact with all of the pictures and comments and stories in this space, you become connected to other people, and this really builds your relationships with other people.

And then I can do things like look at content related to Sean. So not only can I see the content that Sean has authored, but I can also see all of the activity in the space. And I think what’s interesting is often you’ll see that Sean isn’t the person uploading content about himself. His friends are uploading pictures of him and commenting on them. So some of the information that we see in the system is stuff that he’s authored, but we can also get more information from the larger group of people.

And then we can start doing kind of interesting queries, so I can start building relationships between things. So what in this system relates Sean to me? How are Sean and I connected? And we can see pictures that we appear in together, friends that we have in common and things like that. So it’s really an easy way to navigate around a big space and share stories and experiences with your friends.
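The ‘how are we connected’ query can be sketched in a few lines; the friend lists and photo tags below are invented stand-ins for the data Wallet gathers as people interact:

```python
# Hypothetical social data for the "how are Sean and I connected" query.
friends = {
    "lili": {"sean", "hillel", "ann"},
    "sean": {"lili", "ann", "bob"},
}
photo_tags = {
    "beach.jpg": {"lili", "sean"},
    "pdc.jpg": {"sean", "bob"},
    "office.jpg": {"lili", "hillel"},
}

def connections(a, b):
    """What relates a to b: mutual friends, plus photos in which both
    people have been identified."""
    mutual = sorted(friends.get(a, set()) & friends.get(b, set()))
    shared = sorted(p for p, tags in photo_tags.items() if {a, b} <= tags)
    return {"mutual_friends": mutual, "shared_photos": shared}
```

The demo’s query is just this kind of intersection over the accumulated tags and friend links.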

So obviously not everybody wants to go to a Web site all the time. Sometimes you want your tools to come to you. So what’s interesting I think about Wallet is we’re starting to see that probably a third of the information that is input into Wallet isn’t actually people going to the site and adding things. We have a lot of tools to add this content from anywhere. So we have a little bar part here from Longhorn, and I can actually click on myself here, click through here, and you can see it’s easy for me to add pictures and comments right here from my desktop, and I can come in here and see who else is looking at this. So this is the same information — my social network and the pictures and stories that I’m sharing in a very, very lightweight, peripheral way in my user interface.

RICK RASHID: Great, thanks a lot, Lili. (Applause.)

So I want to wrap up by talking about some of the ways that we’re trying to bring a lot of these ideas together and some of the new opportunities that are being created.

One of the projects I’ve been working on for the last three years that’s now moved into the product organization is something that we created called Smart Personal Objects. You may have heard about that earlier this year. We did a presentation at CES around it. We’ll be talking more about it later at COMDEX.

But the idea is we create a platform for small devices to be able to take the same kind of software that you would be developing in a PC type of environment and put that on the smallest possible device and then connect that device to the outside world, so that, for example, you can create wrist watches, like the one I have on right here, that are able to constantly get information 24 hours a day from a wide area network all around the United States and Canada. It’s a low-power chipset, but it’s technology that lets us make intelligent devices that are not computing devices per se, they’re everyday objects that are better because they’re smart.

We’re not trying to turn refrigerators into supercomputers. We’re trying to make, for example, your watch give you not just time, but timely information, or to create alarm clocks that know about your calendar and the traffic and figure out when it should wake you up. Things like that, where we can take advantage of information about you, about the world around you and the device, and what’s going on in your PC and on the Internet and make the devices sensitive to that and intelligent about you.
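The smart alarm clock idea reduces to simple arithmetic. Here is a hedged Python sketch with invented numbers, not the actual SPOT software: take the first calendar entry, subtract a commute scaled by current traffic, subtract time to get ready:

```python
from datetime import datetime, timedelta

def wake_time(first_meeting, normal_commute_min, traffic_factor,
              prep_min=45):
    """Back out when the alarm should fire: first calendar entry,
    minus the commute scaled by current traffic conditions, minus
    time to get ready. All parameters are invented for illustration."""
    commute = timedelta(minutes=normal_commute_min * traffic_factor)
    prep = timedelta(minutes=prep_min)
    return first_meeting - commute - prep

# A 9:00 a.m. meeting, a normal 30-minute commute, traffic running
# 50 percent heavier than usual.
meeting = datetime(2003, 10, 29, 9, 0)
alarm = wake_time(meeting, normal_commute_min=30, traffic_factor=1.5)
```

The point is only that the device can combine your calendar with live broadcast data; the policy itself is trivial.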

So here is the basic concept. This is the network that we’ve created. We’re broadcasting now nationwide. You’ll begin to see these devices come out over the next few months.

And the key idea here again is to create a new category of smart devices, initially with wrist watches, but eventually with other types of devices as well.

And, in fact, interestingly enough, inside this device is a tiny version of the Common Language Runtime. You can use Visual Studio .NET to develop for these devices in the same way you would on a PC, and we have the full environment for the watch, for example, running in a PC environment or on a PDA as well as running on the watch itself.

So we’ve really been able to tie all these things together, wrap them up and say we can now have an ecology of smart devices from the very smallest to the very largest to participate in the same general computing environment and that can be smart about each other and interact with each other in intelligent ways.

It’s even got a garbage collector, by the way. (Laughter.) I’ve never seen it running, but I’m sure it’s there somewhere. Actually, for a small device a garbage collector is even more important, because of the fragmentation problems you run into, so that’s a little counterintuitive.

The last thing I’m going to talk about is work that we’re doing to really change the learning experience, make it easier to train and to learn using computing tools. We’ve been working with a lot of universities, in particular MIT, Brown, Carnegie Mellon and others, to really develop a new kind of electronic learning environment, really aid pedagogy in the same way that things like the Sky Server are beginning to aid science and make science work better.

This is the old way of thinking about computing in the classroom. You see computing, you see the classroom, you don’t much see the students. That’s the way I think people started out thinking about how computing was going to change the learning experience.

What we’re trying to do is create an integrated learning experience, and actually I’m going to show you a short video that gives you a sense of the platform, the software platform we’re creating, upon which you can build different kinds of educational solutions.

(Video segment.)

(Applause.)

RICK RASHID: Now, that was, of course, a dramatization, as they say, but that software environment, the things that you saw there, are actually being used in courses now at some universities. In particular, the University of Washington has a professional master’s course that’s using that software environment exclusively now for about 120 students in that program.

So this is a real software environment, again built around Web services. All the information that you saw being created there is actually going live into SQL Server as it’s being created, including the notes, the video, the annotations that are being created and then students are able to get at that Web site that’s been created that way and access the information.
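To make that concrete, here is a small stand-in sketch using Python’s sqlite3 in place of SQL Server; the annotation schema and the lecture data are invented, and the real system goes through Web services rather than direct database calls:

```python
import sqlite3

# SQL Server stand-in: the lecture tools write notes and annotations
# into the database as they are created, and the course Web site reads
# them back out for students.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE annotation (
    lecture TEXT, author TEXT, time_sec REAL, note TEXT)""")

def add_annotation(lecture, author, time_sec, note):
    """Called as content is created live during the lecture."""
    db.execute("INSERT INTO annotation VALUES (?, ?, ?, ?)",
               (lecture, author, time_sec, note))

def notes_for(lecture):
    """What a student browsing the course site would fetch,
    ordered by position in the lecture video."""
    rows = db.execute(
        "SELECT author, time_sec, note FROM annotation "
        "WHERE lecture = ? ORDER BY time_sec", (lecture,))
    return rows.fetchall()

add_annotation("lecture-03", "prof", 120.0, "key theorem here")
add_annotation("lecture-03", "student", 95.0, "didn't follow this step")
```

The essential property is the one described above: everything created during the lecture lands in the database immediately and is queryable by students afterward.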

So what we’re really trying to do with this is work with our university partners to create an infrastructure that they can use for creating their own applications and really give them the tools to make that easy for them to do. And, in fact, several of the applications you saw there were, in fact, applications developed at various universities.

And for the final demo I’m going to bring out John San Giovanni, who’s a technical evangelist in our University Relations Group, and he’s going to show off a couple of the really cool applications around the Tablet PC that have been created by our university partners working with us. John? (Applause.)

JOHN SAN GIOVANNI: Thanks, Rick.

So in the past year since the Tablet PC launched, it’s been really exciting to see some of the amazing things that faculty and academic research professors, as well as sort of rock-star student developers, are doing around these new rich ink APIs, building really powerful applications that use ink as a first-class vector data type. In fact, in general it’s been a very exciting year for academic computing. There is ubiquitous and aggressive deployment of 802.11 networks on campuses all over the world, as well as really impressive hardware advancements. For example, this is a next-generation HP Tablet PC that has 2 gigs of RAM, a gigahertz Pentium M, a 60-gig hard drive, 3-D acceleration and built-in wireless, and it’s really not much bigger than a standard student binder.

So with that being the case, it’s really exciting to think about these new generation notebook computers that really empower the learners in interesting ways.

As Rick mentioned, Microsoft Research works very closely with hundreds of top-tier academic research universities worldwide. And I actually work on MSR’s University Relations Team as a technical evangelist, and I wanted to show you a couple of the applications that have been built by our partners at Brown and MIT.

So the first application that I’m going to show is called MathPad. This application, appropriately named, was developed by a team of mathematicians and computer scientists at Brown, and really the vision of MathPad was to take this very free-form sketching, where you start doing mathematics ideation, sketching down some notations and also doing some conceptual diagrams, and marry that sophisticated interaction model to a very, very powerful math engine on the back end, in this case MATLAB. This software was developed under the leadership of Professor Andries van Dam at Brown, and actually most of the code was developed by Joseph LaViola, a PhD student. Special thanks to Joseph for his diligence in getting this demo working for today’s conference.

So what I wanted to show you was I’m just sort of going to sketch a few mathematical expressions here and you can see it recognized that. I’ll also do, let’s see, “y=a of x”, recognize that as well.

And what I can do now, since we’re hooking into MATLAB on the back end, is just do a quick little gesture for graphing, and now it will quickly graph this expression. (Applause.)

And then actually I can tell the system I want to hold that plot, introduce maybe another expression, “a of x squared,” recognize that. Unfortunately, my handwriting isn’t stellar. There we go. And now, with that plot held, I’m going to introduce this graph on top of it as well. (Applause.) And, of course, this is all very interactive. So now I can go through and I can kind of evolve this and say, actually, let’s take a look at “a to the x cubed”. This is a research talk after all. And you can see it changes in real time. In fact, I can even act directly on the graph and say, you know what, I’m really most interested in this from -1 to 1, for example. And you can see that it will redraw the graph in real time. (Applause.)
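Behind a graphing gesture like that, the engine essentially evaluates the recognized expression over a domain and re-samples when the domain is narrowed. Here is a toy Python sketch of that idea, with MATLAB mocked out by plain evaluation; this is not the actual MathPad code:

```python
# Toy sketch: sample a recognized expression over a domain, and
# re-sample over a narrower domain when the user restricts the view.

def sample(f, lo, hi, n=5):
    """Evaluate f at n evenly spaced points in [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [(lo + i * step, f(lo + i * step)) for i in range(n)]

a = lambda x: x ** 2           # stand-in for the recognized "y = a(x)"
full = sample(a, -2.0, 2.0)    # the first graph
zoom = sample(a, -1.0, 1.0)    # after restricting the view to [-1, 1]
```

Restricting the domain just triggers a fresh sampling pass, which is why the graph can redraw in real time.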

So this sort of interactive sketching is quite exciting, and really you can imagine how this transforms the learning experience when you’re first kind of diving into the universe of complex math.

So that’s one example. Let me give you a little bit more of a complex example by loading the standard sort of mathematician’s pendulum overview.

Now, typically we would sketch some math, and actually in the interest of time I had previously written these expressions. And now what I’m going to do is I’m going to sort of draw a diagram of a pendulum. But the cool thing is that typically with paper, the feature set would end right about here. But now, since this is a digital tool with a really powerful back-end math engine, I can say, you know what, the activity of this object is defined by these expressions, and now I just need to jot a quick gesture to designate the point of rotation, and now I should be able to just run this animation, and all of a sudden the paper literally comes alive. (Cheers, applause.)
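A pendulum animation like that can be driven by integrating the standard equation theta'' = -(g/L) sin(theta). Here is a toy Python sketch using semi-implicit Euler steps with invented parameters; it is an illustration of the kind of simulation the math engine runs, not the actual MathPad/MATLAB code:

```python
import math

def simulate(theta0, g=9.81, length=1.0, dt=0.001, steps=2000):
    """Integrate theta'' = -(g/L) * sin(theta) from rest at angle
    theta0, using semi-implicit Euler (update the angular velocity
    first, then the angle), which keeps the swing stable."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega -= (g / length) * math.sin(theta) * dt
        theta += omega * dt
    return theta

# Two simulated seconds of swing after releasing at 0.3 radians.
end = simulate(0.3)
```

Each animation frame in the demo corresponds to a few of these integration steps applied to the sketched object.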

Now, of course, no math class is complete without a baseball example, so let’s go ahead and I’m going to load another example here.

Now, these are some expressions that kind of generally define the movement of an object through space, for example, a baseball. So what I’m going to do is I’m going to draw our colleague the baseball player over here. You can see I clearly missed my calling. Put a little bat in his hands or a hoagie sandwich, depending on your perspective. And then I’m also going to draw the playing field, the home run wall, and, of course, the ball itself.

OK, now what I’m going to do is much like I did in the pendulum example: I’m going to say the expressions that really define the movement of the sphere, the ball, are these, and then, in order to tag the object in the sketch that they’re associated with, I’m just going to kind of hover over my diagram, and you can see that it’s illuminating each of my sketches as I hover. Well, I’m actually interested in the ball, so I’m going to tap that.

And then also I’ve made some expressions that have to do with the playing field itself, so I’m going to designate that.

And now, very similarly, I should be able to run the expressions. So this is our player, it’s the bottom of the 9th, the bases are loaded, two outs; let’s see if he makes the home run. The mathematician gurus in the room have probably already done the math and said, no, it’s not quite going to clear, but let’s see. So he hits it and come on, big money, big money, but no, it doesn’t make it.

But you know what, one of the cool things about being a developer is that you can skew the rules to your advantage. So, of course, in this case we’re going to skew the rules of gravity a little bit — (laughter) — and drop gravity down. (Applause.) Why not? There we go, and we’ll see how that recognition turned out; looks good. So let’s run it now. And the crowd goes wild. (Laughter, applause.)
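The gravity-skewing trick falls straight out of the flat-ground range formula, R = v² sin(2θ) / g. A toy Python check with invented numbers shows a hit that falls short under honest gravity clearing the wall once g is turned down:

```python
import math

def carry(v, angle_deg, g):
    """Horizontal distance a projectile travels before landing on
    flat ground: R = v^2 * sin(2*angle) / g (air resistance ignored)."""
    return v ** 2 * math.sin(2 * math.radians(angle_deg)) / g

# All numbers invented for illustration.
wall = 120.0                      # distance to the home-run wall
normal = carry(33.0, 40.0, 9.81)  # honest gravity: falls short
skewed = carry(33.0, 40.0, 6.0)   # developer-adjusted gravity: clears it
```

Since the range scales as 1/g, dropping gravity stretches the same swing into a home run, which is exactly the joke in the demo.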

Awesome. So this is really again a beautiful example from Brown on how to make these rich ink experiences that are way beyond just traditional note taking or diagrams.

The next example I’m going to show you is appropriately called Magic Paper, and this was done by the AI Lab at MIT under the leadership of Randall Davis. And actually the code was developed collaboratively between the team at MIT and a team of consultants from the Leszynski Group, some of whom are represented here at PDC.

Now, the cool thing about Magic Paper and my disclaimer is that this is pre-alpha code and, in fact, the only way they’d let me show it is if I offered the disclaimer that, in fact, the shape recognizer isn’t even done yet. So I figured this audience would appreciate that more than most, but I think you’re totally going to dig it so I’m going to show it anyway. But we will have to sketch things multiple times. I just want to set that expectation. But this is a research talk after all.

So what I’m going to do is very much in the same spirit as MathPad, but sort of from the opposite direction. I’m going to kind of sketch a few spheres here in this two-dimensional space, and then I’m going to draw sort of an inclined plane here. (Laughter.) As expected. OK, so there’s an inclined plane, and I’ll do another one over here, and then I’m going to do sort of a little — let’s do like a little catch basin type thing here.

Now what I need to do is I need to tell Magic Paper which components of this diagram are part of the background and which are the dynamic components so I’ll do that just by sort of locking these objects to the background with a quick triangular gesture.

And now we as humans, of course, can look at this diagram and say, OK, we can sort of play through what’s going to happen in this model. And again, because Magic Paper hooks in on the back end to a highly sophisticated Newtonian physics simulator, in this case Working Model, I can say “animate,” it’s going to pass these arguments off to Working Model, and we’ll see this sort of sketch come alive in a very dynamic way. (Applause.)

Now, the next thing I’m going to do, let’s dive back into Magic Paper.

RICK RASHID: We should actually finish up here. We’re going to keep these people too long. We’re running over time.

JOHN SAN GIOVANNI: Can I just give one more example?

RICK RASHID: OK, one more example.

JOHN SAN GIOVANNI: One more example, OK. (Applause.)

So what I’m quickly going to do is I’m just going to erase the marker that tags that basin to the background. Of course, if I animate this, it’s all just going to fall off the screen. But what I’m going to do is I’m going to introduce a new primitive. I’m actually going to move this over a little bit and I’m going to sketch a little anchor here. I’m going to lock that to the background.

Now, what I’m going to do is there’s a new primitive called the spring. Let’s see how it does with this. So I’m going to draw this spring, OK, looks like it recognized that and now let’s see how this plays out. (Laughter.) Boing, and it will fall down and one, two, three, awesome. (Applause.)

In conclusion, Microsoft Research really believes that the student notebook of tomorrow is this highly sophisticated digital mobile device, and through these examples from Brown and MIT, as well as applications from other universities, you can see that universities are already using the Tablet PC as an intelligent, interactive tool for learning.

Thank you for your time. (Cheers, applause.)

RICK RASHID: And one other thing I’ll mention that goes along with that is if you go out to the Microsoft Research Web site, research.microsoft.com, you can get a lot more information and pointers to a lot of the different projects that are going on.

So I’m just going to wrap up. We’re running a little bit late. But I think the key point here is we’re not done yet. There’s a tremendous amount of new technology in the pipe. We really have an opportunity to revolutionize education and science and to really change the kinds of applications that we build with computing.

I want to thank you all for being a great audience and staying for a little bit of extra time. Thanks. (Applause.)
